How MyLGHealth Works
We didn't just plug in an AI; we trained one with real doctors' input and ethically sourced reports
Your lab report wasn't written for you
You get a blood test done, and a few days later a PDF lands in your inbox or patient portal. You open it expecting clarity and instead you're staring at abbreviations — CBC, BUN, GFR, ALT, TSH — surrounded by numbers and reference ranges that assume you already know what they mean. Unless you went to medical school, most of that document is essentially unreadable.
And this isn't a small problem affecting a handful of people. According to the US Department of Health & Human Services, only 12% of American adults have the health literacy needed to properly interpret medical documents. That leaves 88% of the population guessing, worrying, or Googling their way through results they can't actually read. A University of Michigan study put this to the test and found that only 38% of people with lower literacy skills could even tell whether a value fell inside or outside the normal range.
What makes it worse is that people want their results immediately. A JAMA Network Open study surveyed over 8,000 patients and 96% of them said they preferred seeing results right away, even before their doctor had reviewed them. But then 57% of those same people went searching online for help understanding what they'd just read, because the numbers alone weren't enough.
I built MyLGHealth after watching this happen for years. Patients would come in for follow-ups and they'd been sitting with their lab results for a week, anxious, confused, sometimes convinced something was terribly wrong when it wasn't — or thinking everything was fine when it wasn't. The gap between receiving a report and understanding it is where the real harm happens.
Dr. Lynn A. Brody — Founder & CEO, MyLGHealth

Trained by doctors, not developers
The vast majority of AI tools that claim to read medical reports are not as sophisticated as you might think. They feed your document to a general-purpose language model, the same kind that composes emails and summarizes articles, and tell it to write you a summary. The model can read the words, yes. But it doesn't know that a high creatinine means something entirely different in a 65-year-old man with a history of kidney disease than in a dehydrated college athlete who has just run a marathon. A general-purpose model treats both cases the same, because it was never trained to distinguish between them.
MyLGHealth's AI was not built that way. The system was trained on thousands of real medical records, including blood panels, metabolic panels, urinalyses, CBCs, prescriptions, and diagnostic reports, and each one was reviewed individually by a qualified physician who wrote their own clinical interpretation. So when a doctor told us that a given pattern of liver enzyme elevations combined with a high bilirubin level might indicate cholestasis rather than hepatitis, that logic was incorporated into the way the AI processes similar patterns. When another doctor noted that a borderline HbA1c in a 28-year-old woman with PCOS needs different context than the same number in a 55-year-old man with a family history of diabetes, the AI learned that too.
There's a measurable difference between trained and untrained AI in medicine. Stanford researchers tested how well AI performed on clinical diagnostic reasoning and found it scored around 92% — roughly an A grade — when it was properly calibrated. Meanwhile, a 2025 meta-analysis in Nature Digital Medicine that reviewed 83 separate studies found generic, untrained AI models averaged about 52% accuracy on diagnostic tasks. That's basically a coin flip. The gap between a trained medical AI and one that's winging it is enormous, and it's the reason we spent months on physician-reviewed training data instead of shipping a generic wrapper over an off-the-shelf model.
We didn't take a shortcut. Every document our AI learned from was reviewed by a real physician who said 'this is what this means, this is what to flag, this is how to explain it to a patient.' That's the foundation. The AI handles the speed and the scale — the doctors provided the clinical judgment it's built on.
Dr. Lynn A. Brody — Founder & CEO, MyLGHealth

What happens when you upload a document
The whole thing takes about 10 seconds from start to finish, but there's a lot happening underneath. Here's the actual sequence:
You upload your file — drag and drop or browse for a lab report or prescription. Accepts PDF, JPG, and PNG up to 5MB, maximum 5 pages. No account, no registration, no personal information collected.
The AI reads everything on the page — not just the text but also tables, graphs, reference range columns, and even handwritten notes. It interprets the layout and structure of the document, figuring out which values belong to which tests and how they relate to each other, similar to how a doctor's eyes actually scan a report rather than reading it top to bottom like an article.
Clinical patterns get matched — the AI cross-references what it reads against the physician-reviewed training data. If your report shows an elevated white blood cell count alongside a high CRP value, the system doesn't treat those as two unrelated numbers. It recognizes the combination and explains what it may indicate, the same way a doctor would connect those dots when reading the same panel.
Your summary gets generated — a plain-language explanation of every value on your report, covering what's normal, what's outside the reference range, what each test actually measures, and why any of it matters. If you selected a preferred language before uploading, the entire summary arrives in that language. Currently supports English, Urdu, Hindi, Arabic, French, and several others.
Your file gets deleted — the moment the summary is ready, your uploaded document is permanently wiped from server memory. Never written to disk, never saved in a database, never accessible to anyone including the MyLGHealth team. The summary on your screen is all that exists, and even that disappears when you close the page unless you download it.
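To make the sequence above concrete, here is a minimal sketch of how such a pipeline could be structured. MyLGHealth has not published its implementation, so every name, reference range, and pattern rule below is an illustrative assumption, not the actual system and not clinical advice; the OCR and language-model steps are stubbed out, leaving the validation limits, the in-memory handling, and a toy version of the pattern-matching step.

```python
import io

# Hypothetical sketch of the upload pipeline described above.
# All names, ranges, and patterns are illustrative assumptions.

ALLOWED_TYPES = {"application/pdf", "image/jpeg", "image/png"}
MAX_BYTES = 5 * 1024 * 1024          # 5 MB limit stated above
MAX_PAGES = 5

# Simplified reference ranges and one physician-style combination rule,
# standing in for the "clinical patterns get matched" step.
REFERENCE_RANGES = {"WBC": (4.0, 11.0), "CRP": (0.0, 10.0)}
PATTERNS = [
    ({"WBC": "high", "CRP": "high"},
     "Elevated WBC together with high CRP may indicate inflammation or "
     "infection; worth raising with your doctor."),
]

def validate_upload(content_type: str, data: bytes, pages: int) -> None:
    """Reject anything outside the stated limits before processing."""
    if content_type not in ALLOWED_TYPES:
        raise ValueError("only PDF, JPG, and PNG are accepted")
    if len(data) > MAX_BYTES:
        raise ValueError("file exceeds 5 MB")
    if pages > MAX_PAGES:
        raise ValueError("file exceeds 5 pages")

def flag(test: str, value: float) -> str:
    """Classify a single value against its reference range."""
    low, high = REFERENCE_RANGES[test]
    return "low" if value < low else "high" if value > high else "normal"

def summarize(content_type: str, data: bytes, pages: int,
              extracted: dict[str, float]) -> list[str]:
    """Process entirely in memory: validate, flag each value, match flag
    combinations against patterns, return summary lines, drop the buffer.
    (`extracted` stands in for the OCR step, which is out of scope here.)"""
    validate_upload(content_type, data, pages)
    buffer = io.BytesIO(data)        # held in RAM only, never written to disk
    try:
        flags = {t: flag(t, v) for t, v in extracted.items()}
        notes = [f"{t}: {v} ({flags[t]})" for t, v in extracted.items()]
        notes += [note for required, note in PATTERNS
                  if all(flags.get(t) == f for t, f in required.items())]
    finally:
        buffer.close()               # the uploaded bytes are gone on return
    return notes
```

With this sketch, a panel like `{"WBC": 13.2, "CRP": 25.0}` would flag both values as high and attach the combined inflammation note, while in-range values would produce only per-test lines; the key design point mirrored from the steps above is that the upload lives in a `BytesIO` buffer that is closed before the function returns.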
The entire process takes about 10 seconds. But what's happening in those 10 seconds is built on months of doctor reviews, thousands of training documents, and continuous quality monitoring. The speed comes from the technology. The accuracy comes from the medicine.
Dr. Lynn A. Brody — Founder & CEO, MyLGHealth

The doctors behind the monitoring
Building the AI was the first part. Keeping it accurate is the part that never ends, and it's honestly the part most people don't think about when they hear "AI-powered." They assume it was built once and runs on autopilot. That's not how this works.
When medical standards shift — and they do, more often than people realize — the AI's training data gets updated. New reference ranges for cholesterol, revised guidelines for interpreting thyroid panels in pregnant women, updated HbA1c thresholds from the American Diabetes Association — all of that feeds back into the system. Speaking of which, the community sharing feature ties into this process too. When users voluntarily share their AI summaries (and only the summaries, never the original documents), those get reviewed by the medical team to spot patterns. If a batch of thyroid panel summaries shows the AI explaining TSH levels less clearly than it should, the doctors catch it and the training gets refined. It's a feedback loop that only works because users choose to participate in it.
I think people assume that once you build AI, it runs on its own forever. That's not how this works. Our doctors review outputs regularly. They catch things that need adjusting. They update training when clinical guidelines change. The AI is the engine, but the doctors are still driving.
Dr. Lynn A. Brody — Founder & CEO, MyLGHealth

What we are — and what we're not
MyLGHealth explains your medical documents in plain language. That's what it does, and it does it well. But it doesn't diagnose you, it doesn't tell you what medication to take or stop taking, and it doesn't replace a doctor who has access to your full medical history, can physically examine you, and knows what questions to ask about your symptoms and lifestyle. No AI tool should be doing that, and we're not pretending to.
If something in your summary looks concerning — a value that's significantly outside normal range, or a pattern across multiple markers that the AI flags — the next step is always your doctor. What MyLGHealth gives you is the ability to walk into that appointment already understanding what your report says, which means you can ask sharper questions and have a more useful conversation in what's usually a pretty short visit. For users who want more immediate guidance, the Consult a Doctor feature connects you with a verified specialist matched to your condition who can review your AI summary and help you figure out what to do next.
I never wanted MyLGHealth to replace doctors. I wanted it to make patients better prepared when they see one. A patient who understands their report asks better questions, makes more informed decisions, and gets more out of a 15-minute appointment. That's the whole goal.
Dr. Lynn A. Brody — Founder & CEO, MyLGHealth

MyLGHealth is a free AI-powered health document reader. Upload your lab report or prescription and get an instant plain-language summary, privately and securely.
© 2026 my-lghealth