How AI Medical Research Helped Save My Husband’s Life When Doctors Weren’t Accessible

Nov 17, 2025 | Featured, Narrative and Neural Nets, Real Talk

Content warning: This post discusses complex medical conditions, system failures in healthcare, and life-threatening situations.

A personal note from the author: I’ve been sitting on this blog for a while now, tweaking it for days because I know it’s going to be controversial. I’ve decided to publish it because I think it’s an important perspective that hasn’t been shared yet. It’s long, but hang in there; it’s worth the read. This article also explains why there’s a big gap in my recent posts. I encourage you to share any similar experiences you might have in the comments.

TLDR:

When my husband’s rare diseases caused a dangerous phosphorus deficiency on a Sunday night, Perplexity AI helped me identify it in minutes by synthesizing peer-reviewed medical literature with proper citations. This post explains how I safely use AI as a research assistant for medical decision-making, why tool choice and prompting strategy matter, and what safeguards are essential. I’m not replacing our doctors, but AI has become my research partner when the healthcare system fails and we have no other options.

Without AI intervention, my husband could have gone into cardiac arrest on a Sunday night. Perplexity Research was able to help me identify a phosphorus deficiency I didn’t even know was possible, and instead of spending the weekend in the hospital, my husband was recovering within thirty minutes.

My husband Kyle has two rare diseases and is recovering from a hypovolemic state so severe that his endocrinologist once told us she wasn’t sure how he’s still alive. He didn’t have enough blood in his body for thirty-plus years. Now, as his body finally builds blood volume, he’s going through refeeding syndrome—a dangerous condition where sudden nutritional replenishment after chronic depletion causes electrolyte shifts that can be fatal.

Managing this at home means I’ve become his de facto medical research team. His body demands sudden, unpredictable increases in vitamins, minerals, and electrolytes hour by hour. What worked yesterday might not work today. Tracking which symptom corresponds to which deficiency, and what interacts with what, is impossible to do manually, especially when Kyle’s brain fog means he can’t always remember what supplements he’s already taken.

That’s how AI became my research partner.

One Sunday night, shortly after dinner, Kyle’s fatigue suddenly worsened to the point where standing took twice as much effort as usual. A bath had made it worse. His brain fog increased. He mentioned feeling like the blood pressure in his temples had spiked after eating honey.

I ran through my list of usual interventions—the deficiencies I’d learned to recognize over years of caregiving. More sodium. Checking for vitamin E, vitamin D, and magnesium. Nothing helped beyond minor, temporary relief.

I opened Perplexity, switched it to research mode, and carefully described his symptoms without poisoning the results with my own assumptions about what might be wrong:

“I have a patient who is recovering from a 25 year long hypovolemic state. As he recovers, his body has demanded increases in several vitamins and minerals to keep up with blood production and electrolyte imbalances. He is currently dealing with extreme fatigue that has worsened throughout the day. Taking a bath made it worse. We have ruled out vitamin E, vitamin D, and magnesium as the cause. Sodium isn’t helping much (only very minor improvement). Honey made him hungry and gave him a headache that made him feel, ‘like the blood pressure increased in my temples.’ His diet is high protein, and meat and fish have not helped with the fatigue either. He has no additional symptoms beyond some minor brain fog. What are some possibilities we might be missing?”

Within seconds, Perplexity returned a comprehensive analysis citing peer-reviewed medical literature. At the top of the list: phosphorus deficiency (hypophosphatemia).

I could rule out several other possibilities immediately based on Kyle’s recent supplement intake and how his body typically responds to different deficiencies. But phosphorus? We hadn’t been tracking that at all. And the symptom profile matched perfectly, especially the way his body had responded to eating phosphorus-rich foods earlier in the day with a crash several hours later as his cells pulled everything out of his bloodstream.

I wasn’t sure, though, and B1 and B12 deficiencies could still have been the culprits. I had him take those supplements while I continued researching. His immediate response was puzzling: he now had pain in his ribs and spine, but his strength was returning and the fatigue was becoming intermittent instead of constant. He started craving milk and sugar.

I reported back to Perplexity:

“Patient reacted to vitamins B1 and B12 with pain in multiple places, primarily in his ribs and spine. His strength returned and his fatigue is coming and going. He is craving both milk and sugar at this point. Any thoughts on what could be going on here?”

Perplexity explained, citing a dozen medical papers on refeeding syndrome, that this response pattern likely confirmed severe phosphorus depletion. The pain indicated phosphorus being pulled into cells as the B12 I’d given him stimulated blood cell production and nerve repair. Without enough phosphorus to support that process, his body was craving what it needed to correct the deficiency: milk because it’s high in phosphorus, and sugar because of the insulin surge the refeeding process had created.

I got the milk, fortified it with a supplement specially designed to add the right ratios of other electrolytes and salt to any drink, and gave him a cookie dipped in peanut butter to stop the insulin cycle. Within thirty minutes of identifying the deficiency and adjusting accordingly, Kyle was recovering.

Without Perplexity’s ability to quickly synthesize medical literature, I would not have figured things out in time. Phosphorus deficiency can get dangerous fast. Kyle could have easily progressed to cardiac arrest, especially since it wasn’t on my radar at all.

This is not a hypothetical. This is what AI medical research looks like in practice when you’re managing rare diseases and the healthcare system fails you.

How We Got Here: A System That Failed at Every Level

I didn’t choose to become my husband’s de facto medical research team. The system forced my hand.

Kyle has two rare diseases: sucrase-isomaltase deficiency and Autoimmune Voltage-Gated Potassium Channel Encephalitis. He also lives with the lasting effects of parental neglect: long-term Lyme disease that caused brain damage to his autonomic system and a rather extreme case of chronic hypovolemia—low blood volume. The latter has likely been present since infancy and was untreated for thirty-two years. Fixing it would mean a huge improvement in Kyle’s quality of life.

In January 2024, I accepted a federal position at the National Institutes of Health specifically because the contracting company I’d been working for was switching to an insurance provider that didn’t cover Kyle’s specialists and would dramatically increase his medication costs. I had no choice.

Just over a year later, in early 2025, I was terminated in the mass firing of federal probationary employees. I was literally locked out of my computer mid-workday while helping colleagues prepare for return-to-office logistics. My coworkers cried. My bosses sent messages for months saying how much they missed me.

We transitioned to Medicaid during the three-month legal hold on the firings. Then another gut punch: the literal day after we enrolled, we discovered that my state’s Medicaid program had stopped covering Kyle’s most critical medication, Sucraid. It’s an orphan drug my husband needs in order to digest any food other than meat. Without it, he can’t get enough nutrients from his food and starts wasting away. And it costs $14,000 per month at retail prices. With no other option, we’re now dependent on the drug company’s low-income compassionate use program.

Kyle’s conditions are so rare and interconnected that only a place like the NIH Clinical Center would have the infrastructure and know-how to manage his care properly. In 2021, for example, he nearly died when a medication to treat one disease unexpectedly interacted with another. But “capable” doesn’t mean “ideal.” Round-the-clock clinical care would cost Kyle virtually his independence and quality of life. Being at home means he can maintain a sense of purpose and autonomy.

So we manage his care ourselves. We always have, to some degree. All rare disease patients, or their families, become their own health experts by necessity. But rehydrating and pulling someone out of lifelong hypovolemia involves more complex, sudden changes than anything we’ve dealt with before.

Refeeding Syndrome After a Lifetime of Hypovolemia

Kyle’s hypovolemic state was so severe that his endocrinologist once told us she wasn’t really sure how he is still alive. He didn’t have enough blood in his body for thirty-plus years, so his body learned to shut down as many functions as it could to keep him going. Bringing him back from the “dead” is also dangerous given how extreme his condition was. As his body finally builds blood volume, he’s going through refeeding syndrome—a dangerous condition where sudden nutritional replenishment after chronic depletion causes electrolyte shifts that can be fatal.

His body demands sudden, unpredictable increases in vitamins, minerals, and electrolytes to keep up with blood production. He’ll be stable one moment and unable to stand the next. Symptoms change hour by hour. What worked yesterday might not work today. It’s clinically complex, and there’s no playbook for Kyle’s specific combination of conditions.

In a functional healthcare system, Kyle would have access to round-the-clock monitoring with specialists who could adjust his treatment in real time. Instead, we have an extensive at-home setup (which we already had for managing his existing conditions anyhow) and my ability to rapidly research and synthesize medical information to figure out which sudden deficiency he’s experiencing and how it will interact with his current state.

Tracking which symptom corresponds to which deficiency, what interacts with what, and which supplements he’s already taken is something we were already doing before the refeeding syndrome. The added chaos of refeeding has made it impossible to manage manually, especially when Kyle’s brain fog means he can’t always remember what he’s taken in a given day.

This is where AI became my research and management assistant.

How I Use AI for Healthcare Research (And Why It Works)

I use Perplexity specifically because its citations are easy to verify. It also cites the National Library of Medicine heavily, which is a huge bonus for me. I used to work at the NIH, have a professional background that includes SEO and AI systems, and ghostwrite for tech companies building and leveraging large language models. I already knew Perplexity AI’s strengths and weaknesses for medical research because I understand how these systems work at a technical level. It also isn’t a conversational AI, a difference whose importance I’ll explain later.

With that in mind, I need to be clear: I don’t trust Perplexity. And I don’t trust most of our doctors either.

Kyle’s conditions are so rare that only two of his physicians have earned my full trust. Those two are the ones who’ve been with us from day one, know his complete medical history, go out of their way to be the only person in their practice handling his case, and have dedicated hours of their own time to research solutions for us. The rest of the time, it’s my job to educate our doctors. And our doctors have made mistakes despite having the best intentions.

Likewise, I don’t trust Perplexity to make medical decisions. I chose it because I can monitor it easily. It behaves like a junior researcher pulling information from highly vetted academic sources. It saves me time hunting down papers myself. That’s all. Just like with our doctors, I fully expect it to make mistakes. And just like with our doctors, I need to give it the right information to minimize those mistakes and give it the highest chance of turning up helpful information I can work from.

The Technical Strategy: How I Prompt for Medical Research

My prompting approach is deliberate and systematic. Look at that initial prompt again:

“I have a patient who is recovering from a 25 year long hypovolemic state…”

Not “my husband.” A patient.

This framing tells the AI to respond as if I’m a medical professional. I don’t need answers dumbed down. I need accuracy and efficiency. Using “patient” triggers responses that include proper medical terminology, recommend specific lab tests, and provide the level of technical detail I need to fact-check later. If I’d said “my husband,” Perplexity would have given me watered-down explanations and told me to consult a doctor. That is useless in a crisis, and it usually results in Perplexity making more mistakes since it’s looking for more generic answers.

I also said the patient is recovering from a 25 year long hypovolemic state. Remember how I said earlier that it was actually 32 years? I do this because I’ve found that 25 years seems to be the sweet spot for Perplexity to pull real case studies that are at least partially applicable to Kyle’s situation. Most people don’t survive as long as he has with his severity of conditions, so that time scale prompts the LLM to look for extreme-but-survivable situations. I do give it the actual timeline when researching less urgent medical information, though, to up the chances of it citing the rarest of the rare papers for me.

My initial prompt provided comprehensive medical history upfront and listed what I’d already ruled out. This reduces bias while giving the AI enough context to search effectively, but I’m selective about what I include. I didn’t mention Kyle’s CSID (sucrase-isomaltase deficiency) until much later in the conversation, when I wanted specific dietary recommendations. If I had, Perplexity would have zeroed in on it as a potential cause and given it too much weight by including it in its searches. Kyle’s CSID is well-managed and was extremely unlikely to be the cause here.

The goal is to introduce the minimum context needed while preserving the AI’s ability to surface possibilities I haven’t considered.
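To make that structure concrete, here’s a minimal sketch of how I assemble this kind of prompt. The function and parameter names are my own illustration, not any Perplexity API; the output is just plain text you’d paste into Research mode yourself.

```python
# A sketch of the prompting pattern: "patient" framing, compact history,
# explicit rule-outs, raw observations, and an open-ended question with no
# leading hypothesis. All names here are hypothetical.

def build_research_prompt(history, current_symptoms, ruled_out, observations):
    """Assemble a professionally framed research prompt without suggesting a diagnosis."""
    parts = [
        f"I have a patient who is {history}.",                      # "patient", never "my husband"
        f"He is currently dealing with {current_symptoms}.",
        f"We have ruled out {', '.join(ruled_out)} as the cause.",  # stop it re-treading known ground
        " ".join(observations),                                     # observations only, no interpretation
        "What are some possibilities we might be missing?",         # open-ended, non-leading
    ]
    return " ".join(parts)

prompt = build_research_prompt(
    history="recovering from a 25 year long hypovolemic state",
    current_symptoms="extreme fatigue that has worsened throughout the day",
    ruled_out=["vitamin E", "vitamin D", "magnesium"],
    observations=["Taking a bath made it worse.", "Sodium isn't helping much."],
)
print(prompt)
```

The point of templating it at all is consistency under stress: when I’m in a crisis, I don’t want to improvise the framing and accidentally lead the model toward my own assumptions.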

What I Verify (And How)

Perplexity shows citation titles when you hover over them. Since I have pretty intimate knowledge of Kyle’s conditions, I always double-check:

  • Statements that represent new information
  • Anything that feels “off” or contradicts what I already know
  • Whether the cited academic papers actually address what Perplexity says they do (sometimes it pulls a chunk of text from within a paper, such as a differential diagnosis, and gives it the wrong weight, misrepresenting the paper’s subject matter)
  • Any citations that look like random internet blogs instead of peer-reviewed research

A note about that last one: for most people, I’d recommend locking Perplexity to academic sources only. The problem is that Kyle’s conditions are so unusual that personal blogs sometimes give Perplexity the keyword hint it needs to find the academic paper backing things up. This increases error rate, so I watch closely.

Testing Hypotheses in Real Time

After Perplexity identified phosphorus as the likely culprit, my first instinct wasn’t to just accept it. I had B1 and B12 supplement powders on hand and knew Kyle wouldn’t overdose on them in tiny amounts, so I had him take those while I researched further. When they didn’t fully resolve things, phosphorus became the most likely cause and the next thing to test in my mind.

I continued to send Perplexity into the medical research rabbit hole as Kyle’s symptoms shifted:

  • “Patient reacted to vitamins B1 and B12 with pain in multiple places, primarily in his ribs and spine. His strength returned and his fatigue is coming and going. He is craving both milk and sugar at this point. Any thoughts on what could be going on here?”
  • “He already had salmon and peanuts today shortly after the initial fatigue started to set in. Having the milk and honey happens to be making him feel very cold too.”
  • “Update: after 2 hours, the patient is starting to see an increase in symptoms again. Same symptoms as before, but they had calmed down after the ORS.”

Each time, Perplexity turned up more information about potentially affected systems, interactions between electrolytes, and other details I needed to make the right call on how to proceed. I was using it as a research assistant while I made clinical judgment calls. This only works because I can evaluate the information critically and test hypotheses systematically.

I don’t recommend people without experience do this for themselves. My husband, for example, couldn’t have done this in the middle of his crisis. That being said, I’m putting this information out there because I know how utterly unrealistic it is for me to say, “Don’t try this at home!” when you’re in a position of needing to try ANYTHING at home out of pure desperation. Desperate people will use AI for medical decision making, so I want to at least explain how I minimize the chance of things going wrong.

Why ChatGPT Would Have Failed

Speaking of things going wrong, I will never advise people to use a more general, conversational LLM like ChatGPT for AI medical research in critical situations.

When writing this article, I tested the same initial prompt with ChatGPT to see what it would return. The response was completely worthless. ChatGPT gave me vague categories organized by body systems (which is useless when multiple systems are overlapping), didn’t break down specific symptoms for each deficiency, and essentially said “talk to a doctor, but here’s some general stuff that could be involved.”

The formatting alone would make it impossible to scan quickly in a crisis. Perplexity’s more structured approach—listing deficiencies by priority, citing specific symptoms, providing lab test recommendations—makes it better for real-time medical research.

Tool choice matters when conducting medical research with artificial intelligence. Different AI systems are optimized for different tasks. Perplexity’s Research mode is built for research and academic paper synthesis with citations. ChatGPT is built for conversation and doesn’t have the structure needed to support this kind of research. I’ve also found it tends to over-answer, introducing assumptions and tangents that poison subsequent responses pretty easily.

I’m going to be realistic: no matter what we do, people will use AI for healthcare decisions. That’s why I want people to know that not all AI tools are created equal, and using the wrong tool for medical research can be dangerous. The choice between research-optimized AI versus conversational AI is often the difference between finding accurate, cited medical literature and getting general health advice that sounds confident but lacks verifiable sources.

Key Takeaways from This Section

  • Rapid diagnosis: AI-assisted research identified a life-threatening phosphorus deficiency within minutes when traditional resources weren’t accessible
  • Citation-based research matters: Perplexity’s ability to cite peer-reviewed sources from the National Library of Medicine enabled safe verification
  • Human judgment is essential: I verified every claim, tested hypotheses systematically, and maintained medical oversight with 13 specialists
  • The healthcare system gap is real: Rare disease patients already manage complex care by necessity when the system can’t provide adequate support
  • Tool choice is critical: Different AI systems are optimized for different tasks; research requires citation capability

AI Also Helped Us Find Connections that Changed Kyle’s Diagnosis

That phosphorus crisis wasn’t the first time Perplexity helped us make critical connections. It was actually Perplexity Research that helped me pinpoint hypovolemia as an interlinking factor behind Kyle’s health issues in the first place.

Perplexity didn’t make the connection—I did—but it gave me the information I needed to make it and helped me figure out how extensive the issue might be. While generating a report about POTS (Postural Orthostatic Tachycardia Syndrome) for me, it mentioned hypovolemia’s relationship with POTS in passing. It specifically noted a few POTS symptoms that present differently when hypovolemia is involved, and just seeing that association made things click for me. I sent it off to research ways hypovolemia might be interacting with Kyle’s other existing conditions. The pattern became clear.

It was also Perplexity that helped me pull the extremely hard-to-find papers that let us build a timeline for how his condition got so bad in the first place. Did you know that a Campylobacter jejuni infection before six months of age might destroy your gut’s ability to produce some enzymes? I didn’t. Perplexity didn’t either. But it did find papers from the researchers who are working on the subject.

We already knew Kyle had that infection as a young child, but the level of neglect he experienced meant we had no idea what the timeline was. That connection also explained another mystery symptom his estranged mother had once mentioned he’d had as a baby.

If someone who grew up neglected to the point my husband was can have hope and real quality of life thanks to AI helping make connections that specialists missed, that matters. That’s a genuinely positive way that AI can shape the world around it for the better.

How AI Caregiving Tools Could Transform Care for Complex Conditions

I am not suggesting everyone should do what I’m doing. What I’m doing is dangerous if you don’t have the knowledge base and critical evaluation skills to do it safely.

But what if AI-assisted medical research was integrated into healthcare properly, with appropriate oversight and guardrails?

Custom Health Tracking for Complex Conditions

One of Kyle’s most common symptoms is brain fog. It’s also just hard to remember whether you’ve taken a supplement when you have a dozen of them to manage. He’ll tell me he’s having an issue, and I’ll ask if he’s taken a medication already. He won’t remember.

I’m seriously considering building an app that pesters him into checking off what he’s taken and when. It would help both of us, I have the AI-augmented skills to make it myself, and I could make it specific to his needs and his condition.

Now imagine expanding this concept to any doctor who needs custom tracking for patients with non-standard conditions. There are plenty of diabetes trackers out there, but what do you do when a patient has too many intersecting diseases? Or has a rare disease with no existing tools?

Here’s how it could work: A doctor identifies that a patient needs customized tracking. They send a request to their organization’s IT or AI office, and that team uses AI to generate a custom app based on clinically regulated parameters. You could ensure better health tracking that’s more responsive than anything currently on the market, built specifically for that patient’s needs. Patients and caregivers could input updates about the patient’s condition into a database that a HIPAA-compliant, research-based AI could pull from. The AI could help spot potential issues and flag them for medical review. This could make the caregiving load easier while keeping human oversight in the loop.
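For what it’s worth, here’s a minimal sketch of the core of that idea under my own assumptions; the SupplementLog class and its methods are hypothetical, not an existing tool. The point is simply to log every dose and symptom with a timestamp, answer the “did you already take this?” question, and bundle the last day’s data for clinician review rather than letting the app decide anything.

```python
# Hypothetical sketch of a per-patient tracking core: doses and symptoms are
# timestamped, and anything recent is bundled for review by the care team
# (or a vetted, HIPAA-compliant research AI), not acted on by the app itself.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class SupplementLog:
    entries: list = field(default_factory=list)   # (timestamp, supplement, dose)
    symptoms: list = field(default_factory=list)  # (timestamp, description)

    def record_dose(self, supplement, dose):
        self.entries.append((datetime.now(), supplement, dose))

    def record_symptom(self, description):
        self.symptoms.append((datetime.now(), description))

    def taken_recently(self, supplement, hours=12):
        """Answer the 'did you already take this today?' question despite brain fog."""
        cutoff = datetime.now() - timedelta(hours=hours)
        return any(t >= cutoff and s == supplement for t, s, _ in self.entries)

    def flag_for_review(self):
        """Bundle the last 24 hours of doses and symptoms for human (or vetted AI) review."""
        cutoff = datetime.now() - timedelta(hours=24)
        return {
            "doses": [e for e in self.entries if e[0] >= cutoff],
            "symptoms": [s for s in self.symptoms if s[0] >= cutoff],
        }

log = SupplementLog()
log.record_dose("phosphorus", "250 mg")
log.record_symptom("intermittent fatigue, craving milk")
print(log.taken_recently("phosphorus"))
```

The design choice that matters is the last method: the software collects and surfaces, while people with clinical judgment decide.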

I know several ways organizations could architect this based on their existing tech stack, but I haven’t had time to build a prototype just yet. I’ll update on my blog if I do.

Pairing Medical Expertise with Patient-Led, AI-Assisted Research

The biggest risk here is obviously bad medical advice. That’s why this concept works best when doctors are involved. I’m not suggesting patients replace doctors with AI. I’m suggesting we pair them. AI as infrastructure for research synthesis and pattern recognition. Doctors providing clinical expertise and oversight. Patients and caregivers contributing lived experience and day-to-day observations that no fifteen-minute appointment could capture.

A dose of reality for anyone appalled at the idea: Most rare disease patients already manage their own care. Heck, they often joke about new-to-them doctors asking where in the medical field they work during appointments.

“Where do you work? Are you a doctor or a nurse?” “Neither, I just have a rare/chronic disease.” “Oh. Yeah, that explains it.”

There are also plenty of examples of doctors making things worse for rare disease patients. All it takes is one doctor who doesn’t listen to make things much worse. AI and doctor input together could reduce mistakes all around.

Kyle is a great example. He would be dead by now if we only relied on doctors. He would also be dead if we didn’t rely on doctors. Both things are true.

I’ve been in situations where I’ve literally begged doctors to treat my husband. Kyle has also been dismissed after a sixteen-hour emergency room visit where they refused to call the on-call infectious disease specialist because “they didn’t think he’d die in the next forty-eight hours.” We went to that emergency room because Kyle had just lost the ability to speak. The infectious disease division at that same hospital was the one that told us to come in if something like that happened over the weekend, because they knew how precarious things truly were. I honestly wish an AI-powered system like what I’m describing had been in place to flag how dangerous that specific situation was for this specific patient at the time. And I do get the plight of the emergency room staff; Kyle’s bloodwork looked only a little off to them, and they were dealing with multiple shooting victims at the same time. It was an egregiously bad call regardless, and it definitely made Kyle’s condition worse.

Nothing radicalizes you quite like watching the system fail your husband when his life is on the line.

The Warning Label: What Could Go Wrong

Here’s my warning label for this post:

Do not attempt this without understanding both how AI works and how to critically evaluate medical information. You can kill someone if you get this wrong.

I’m not worried about desperate caregivers or rare disease patients. The people most likely to misuse this approach are overenthusiastic developers who will try to simplify things too much or people who aren’t used to medical research and don’t understand how to avoid misleading the AI.

You know how you can Google cold symptoms and get told you have cancer? The AI version of that is asking an LLM that doesn’t do research (like ChatGPT in conversational mode) and getting an overly dismissive or overly alarming answer because the model isn’t optimized or the prompt wasn’t clean enough.

Here’s the minimum you should have to use AI for medical research safely:

  1. Ability to critically evaluate sources. You must be able to read citations and identify when the AI has misinterpreted a paper or pulled something out of context.
  2. Baseline medical knowledge. You need enough understanding to recognize when an answer doesn’t make sense or contradicts established facts about the condition.
  3. Systematic thinking. You have to approach this methodically: form hypotheses, test them, gather data, adjust, open new chat threads when needed. Not panic and ask leading questions.
  4. Understanding of AI limitations. You must know when AI is likely to hallucinate, when it’s working from outdated information, and when you need to verify everything it says.
  5. Access to medical oversight. Even with all of the above, you need doctors in the loop. I consult with Kyle’s physicians regularly. AI helps me come to appointments prepared and ask better questions.

Without these competencies, you will make dangerous mistakes.

Should You Use AI for Medical Research?

✅ Consider Using AI When:

  • You have baseline medical knowledge and research evaluation skills
  • You can verify citations from peer-reviewed sources
  • Doctors are actively involved in your care plan
  • You need rapid literature synthesis for complex conditions
  • Time is critical but you still have capacity to verify information
  • Traditional resources (doctors, specialists) are temporarily inaccessible

❌ Do NOT Use AI When:

  • You cannot evaluate sources critically or identify misinterpretations
  • You’re looking for quick answers without verification
  • You’re attempting to bypass medical care entirely
  • You don’t understand the underlying condition
  • You’re in an acute emergency (call 911 instead)
  • You lack the systematic thinking needed to test hypotheses safely

💡 The Core Principle:

AI should function as a research assistant that accelerates literature review—never as a replacement for medical judgment. Think of it like having a junior researcher who can quickly find papers, but whose findings you must verify before acting.

What Needs to Change

The system failures that forced me into this position are unconscionable.

Doctors need to stop passing patients along when they encounter conditions outside their narrow specialty. Rare disease patients often get ping-ponged between specialists who each treat one symptom without understanding how everything interconnects. Our gastroenterologist is wonderful and trustworthy, but he’s useless for conditions that aren’t part of his specialty, even if they affect the gut. He isn’t the issue. The system is.

AI development has massive potential to support rare disease caregiving and serve traditionally underserved populations, and we’re underutilizing it. Doctors need AI education. Patients need access to research tools with appropriate guardrails. Healthcare systems need to recognize that AI can fill gaps—not replace humans, but support them where the system is failing.

The entire U.S. healthcare system is broken. Fourteen thousand dollars a month for one medication to keep someone alive is a symptom of how deeply broken it is. There’s no support for people to get back on their feet. The support programs that do exist are often designed to trap people inside—like the impossible income cliff I’m navigating, where Kyle can either get poverty-level state support or I need to make over $100k annually to afford the health insurance we need to cover his care. There’s no in-between.

The Hope

Kyle’s case of hypovolemia is so severe that there is absolutely no way it would have been safe for us to try to tackle treating it without AI help. We can’t get him into round-the-clock clinical care, and I’m only one person. Without AI, we would have had to actively choose to keep my husband sick because we couldn’t manage the strain of recovery.

That’s unacceptable.

But with AI as a research partner—used carefully, systematically, with human judgment and medical oversight—Kyle is recovering. He’s gaining independence. He’s living more like a normal person than he ever has in his entire life.

If someone who grew up as neglected as Kyle was, whose conditions are so rare and interconnected that specialists struggle to treat him, can have quality of life because AI helped make connections that humans missed…that’s worth investing in.

I shouldn’t have to be using AI to manage my husband’s healthcare. But I’m grateful I can.

For Healthcare Organizations Developing LLM Healthcare Applications

If you’re building AI tools for healthcare, here’s what I need you to understand:

The gap between “AI capabilities in ideal conditions” and “AI capabilities when the system has failed and you have no other options” is enormous. I’m not replacing medical professionals by choice. I’m filling a void created by systemic failures.

That’s a different conversation than most AI-in-healthcare discussions are having.

Build tools that support rare disease patients and their caregivers. Build research assistants that cite sources transparently. Build systems that help doctors collaborate with patients instead of dismissing them. Build with the understanding that the people who need these tools most are the ones the system has already failed.


What This Case Reveals About AI’s Role in Healthcare

This experience demonstrates three critical realities about AI in medical care:

1. AI excels at rapid literature synthesis

What would have taken hours of manual research through medical databases happened in seconds. Perplexity synthesized peer-reviewed literature on refeeding syndrome, phosphorus deficiency, and electrolyte interactions faster than any human could, providing cited sources I could verify.

2. Human judgment remains essential

I verified every claim, tested hypotheses systematically, and ruled out alternative explanations. The AI provided research direction; I provided clinical reasoning, pattern recognition from years of caregiving experience, and the ability to integrate Kyle’s complete medical history.

3. The healthcare gap for rare diseases is real and urgent

Rare disease patients already manage their own care by necessity. The system forces this reality on us. AI tools don’t create this problem—they offer a way to manage it more safely when proper oversight isn’t available.

The Critical Question:

The question isn’t whether patients will use AI for medical research. Desperate people are already doing it. The question is whether we’ll build proper guardrails, education, and healthcare system integration to make it safer. Right now, we’re leaving vulnerable populations to figure it out alone.

Key Takeaways for Different Audiences

For Caregivers of Rare Disease Patients:

  • Perplexity’s Research mode with citations offers the best balance of speed and verifiability for medical literature review.
  • Frame prompts as “patient” rather than personal relationships to get professional-level responses.
  • Always verify citations, test hypotheses systematically, and maintain medical oversight.
  • Lock AI tools to academic sources only unless you have expertise to evaluate mixed sources.

For Healthcare Professionals:

  • Rare and chronic disease patients are already conducting their own research by necessity. AI makes it faster but doesn’t change this reality.
  • Consider how AI research tools could supplement (not replace) fifteen-minute appointment constraints.
  • Patients using AI to prepare for appointments often ask better questions and provide more complete symptom tracking.
  • The alternative to AI-assisted patient research isn’t no research; it’s slower research that can sometimes include more hearsay.

For AI Developers Building Healthcare Tools:

  • Citation transparency is non-negotiable—tools must show sources and tie them to specific statements, not just generate answers.
  • Design for the reality that patients are already managing complex care without adequate system support.
  • Build tools that help doctors collaborate with patients, not systems that encourage bypassing medical care.
  • Consider rare and chronic disease patients as a primary user group, not an edge case.

For Policy Makers:

  • The healthcare system’s failure to support rare disease patients creates dangerous gaps that patients fill however they can.
  • AI tools can make these gaps safer to navigate, but only with proper education, oversight, and secure integration policies.
  • Address the root problem: medication costs, specialist access, and support for complex chronic conditions.

FAQ

Q: Isn’t this dangerous? Shouldn’t you leave medical decisions to doctors?

Rare disease patients are already their own health experts by necessity. Doctors can do just as much harm as AI. There are countless examples of this, and most rare disease patients have personal experience. That’s why we pair them together. AI is a tool. With the right parameters and guardrails, it works. My husband would be dead by now if we only relied on doctors. He would also be dead if we didn’t rely on doctors. Both things are true.

Q: How often are you consulting Perplexity?

A couple times a week, usually—whenever Kyle has an issue and I either can’t remember which cause I’m dealing with or we’re seeing something totally new. Last week caring for my husband was basically a full-time job because of how dangerous the refeeding process is. Usually it’s more like part-time hours.

Q: What happens when you get sick or exhausted? Who’s the backup?

There is no backup. It’s been that way for years. We know it’s a problem, but our society isn’t designed to fix it. I do have my own therapist for my mental health.

Q: Could someone without your background do this safely?

Probably not. You need to understand how AI works, how to critically evaluate AI medical research outputs, and how to think systematically under pressure. Most importantly, you need doctors in the loop. I’m not working alone; we’ve had 13 specialists involved in managing my husband’s condition. I’m using AI to come to appointments better prepared and ask better questions.

Q: What would you tell someone who wants to try this?

Lock your AI tool to academic sources only. Learn to read citations. Never trust a single answer—verify everything. Work with your doctors, not around them. And if you don’t have the background to evaluate medical information critically, don’t attempt this. The stakes are too high.


If you’re interested in how AI medical research and AI tools can be used responsibly in complex problem-solving contexts, check out my other posts on AI workflow automation and AI limitations. I write about practical applications of AI for people who need real solutions to real problems.
