On Tuesday, Robert F. Kennedy Jr.’s Department of Health and Human Services sent every employee an email. The message was clear: start using ChatGPT right now. Not next month. Not after training. Today.
But here’s what should really worry conservatives: this isn’t the end goal. It’s just the beginning. HHS has also announced plans to roll out AI through the Centers for Medicare and Medicaid Services that will determine whether patients are eligible for certain treatments. We’re talking about AI systems making life-and-death decisions about your healthcare.
From Document Summary to Life-and-Death Decisions
For now, the email says ChatGPT will help “promote rigorous science, radical transparency, and robust good health.” It’s particularly good at “summarizing long documents.” Sounds harmless, right?
But once every government worker is comfortable using AI for routine tasks, the next step becomes inevitable. Why not let AI decide who gets expensive treatments? Why not automate benefit approvals? It’s more “efficient.”
The email says workers can input “most internal data” and “routine non-sensitive personally identifiable information.” But then it warns them not to put in Social Security numbers or bank accounts. Where’s the line between “routine” and “sensitive” when we’re talking about your health records? The government doesn’t say.
Clark Minor, HHS’s chief technology officer and a former employee of the surveillance company Palantir, is running this show. Palantir made its money helping government agencies spy on people. Now Minor wants every HHS worker using AI tools, a stepping stone to automated healthcare decisions.
Why This Matters to Conservatives
Limited government means government does less, not that it does the same things faster with robots. When we hand over decision-making to AI systems, we’re not shrinking government. We’re just making it less accountable.
Think about it this way: if a government worker denies your Medicare coverage, you can file a complaint. You can vote out their boss. But if an AI system decides you can’t get that heart surgery, who do you blame? The computer? The company that made it? Good luck getting answers.
The HHS email admits that AI can be biased. It tells workers to “be skeptical” and check original sources. But here’s the reality: most people won’t do that extra work. They’ll trust the computer, especially when their boss is pushing them to use it.
The Track Record Speaks for Itself
AI systems in healthcare have already shown serious problems. A Stanford University study found that ChatGPT’s accuracy on a simple math task, identifying prime numbers, dropped from 98% to just 2.4% over three months. We’re talking about basic math here, the same kind of reliability needed for the calculations that determine drug doses and benefit eligibility.
Other research has put ChatGPT’s accuracy on math problems below 60%, roughly the level of an average middle school student. Now imagine that error rate applied to deciding whether you qualify for life-saving treatments.
The bias problems are even worse. A widely used algorithm affecting millions of patients showed significant racial bias: Black patients had to be much sicker than White patients to receive the same care recommendations. AI systems can fail to diagnose or treat entire patient groups, including ethnic minorities, immigrants, children, the elderly, and people with disabilities.
A study of 1.7 million AI responses found that race, gender, and income influenced treatment recommendations even when patients had identical health conditions. That’s not smart technology. That’s automated discrimination.
When these types of AI systems have been tried, they have repeatedly proven biased, and the result is fewer patients getting the care they need. Yet HHS wants to expand their use.
Even AI Enthusiasts See the Problems
I’m not anti-technology. I use AI tools regularly, including to help code websites and solve complex problems. AI can be incredibly useful when you understand its limitations and use it properly.
But that’s exactly why this HHS rollout is so concerning. As someone who works with these tools daily, I know their weaknesses. AI will confidently give you wrong answers. It makes up facts. It can’t do basic math reliably. And most importantly, it requires constant human oversight from people who understand how it works.
The average government employee doesn’t stand a chance. They’re being told to use a tool that even experienced programmers handle carefully. When I use ChatGPT for coding, I test every line of code it gives me. I never trust it completely. But HHS is telling workers to use it for processing health information affecting real people – and plans to let it make treatment decisions.
We’ve all seen what happens when government rushes to adopt new technology without understanding it. Remember Healthcare.gov? The government spent billions on a website that didn’t work. Now they want to put AI in charge of deciding who gets medical treatments.
The responsible approach would be extensive training, clear guidelines, and gradual implementation with lots of human oversight. Instead, we’re getting “start using it today” with a warning to “be skeptical.” That’s not how you deploy powerful technology safely.
The Slippery Slope Is Real
Today it’s document summaries. Tomorrow it’s treatment eligibility decisions. Once government workers are used to trusting AI for routine tasks, expanding its role becomes natural. Each step seems reasonable on its own.
Tech companies like OpenAI change their privacy policies whenever they want. They’ve done it before, and they’ll do it again. When you’re dealing with health information and treatment decisions, that’s not just inconvenient – it’s dangerous.
What Comes Next
This HHS rollout is just the beginning. The Trump administration wants AI in every government agency. That means AI making decisions about your taxes, your benefits, and your family’s safety.
Some agencies are already using AI to decide who gets government assistance. Others want to use it for law enforcement. The pattern is clear: more automation, less human judgment, less accountability.
What Conservatives Can Do
First, contact your representatives in Congress. Tell them you want oversight of government AI programs. Demand transparency about how these systems work and what data they collect.
Second, support legislation that requires human review of AI decisions. If a computer says you can’t get Medicare coverage, a real person should have to agree and be held accountable for that decision.
Third, push for sunset clauses on AI programs. Make agencies prove these systems work before renewing them. Government loves programs that never end. Make this one different.
Finally, remember that efficient government isn’t always better government. Sometimes slow and careful beats fast and wrong. When it comes to your healthcare and personal information, accuracy matters more than speed.
The founders didn’t create our system of government to be efficient. They created it to be accountable. AI threatens that accountability. That’s why every conservative who believes in limited government should be paying attention to what’s happening at HHS.
The opinions expressed by contributors are their own and do not necessarily represent the views of Nevada News & Views. This article was written with the assistance of AI. Please verify information and consult additional sources as needed.