Ivermectin, COVID, and the Politics of Medicine: A Case Study in How Science Works (and Sometimes Doesn’t)

I’ve avoided writing about politics—who needs the tsurris? But when science and policy collide, I can’t look away. Case in point: a group of New Hampshire legislators is pushing a bill to allow people to buy ivermectin without a prescription, presumably to treat COVID.

So let’s take a closer look. What is ivermectin? Why did people start promoting it for COVID? And more importantly, what does this whole story reveal about how science works—and how it’s often misunderstood?

What Is Ivermectin?

I started where most people start: Google. Not the be-all and end-all, but a decent place to begin. And yes, I looked at Wikipedia. High school teachers may cringe, but when I check entries against topics I know, it usually holds up.

Here’s what I found:

Ivermectin is an antiparasitic drug. After its discovery in 1975, its first uses were in veterinary medicine to prevent and treat heartworm and acariasis. Approved for human use in 1987, it is used to treat infestations including head lice, scabies, river blindness (onchocerciasis), strongyloidiasis, trichuriasis, ascariasis, and lymphatic filariasis. It works through many mechanisms to kill the targeted parasites and can be taken by mouth or applied to the skin for external infestations. It belongs to the avermectin family of medications.

William Campbell and Satoshi Ōmura were awarded the 2015 Nobel Prize in Physiology or Medicine for their discovery and applications. It is on the World Health Organization’s List of Essential Medicines and is approved by the US Food and Drug Administration (FDA) as an antiparasitic agent. In 2022, it was the 314th most commonly prescribed medication in the United States, with more than 200,000 prescriptions. It is available as a generic medicine. Ivermectin is available in a fixed-dose combination with albendazole.

Sounds like pretty good stuff, and who knew the discoverers won the Nobel Prize? If I get head lice or scabies, sign me up. Haven’t needed it so far, but you’ll have to take my word on that. But where did the idea that it was a cure or helpful for COVID come from? If you are interested, it’s easy enough to look up the information online. But I think the issue highlights important points about how science is done and how claims that pop up on the internet should be evaluated.

Where Did the COVID Claim Come From?

The ball got rolling in 2020 with a study by Caly et al. entitled “The FDA-approved drug ivermectin inhibits the replication of SARS-CoV-2 in vitro” in the journal Antiviral Research. Caly and his fellow researchers used ivermectin on cell cultures, which is a common way to start looking at new drugs and their effects. This makes sense; cell cultures give you an initial read on whether it is worthwhile to move on to animals and eventually to people. The researchers found that a single dose of the drug significantly lowered the replication of SARS-CoV-2 for 48 hours after it was administered. So far, so good, but some issues arose.

One of these issues is that you don’t go straight from using a drug on a petri dish full of cells to giving it to humans. The authors warned that animal and human trials were needed before prescribing the drug. Also, the amount of ivermectin needed to produce this effect in humans far exceeded what is considered a safe dose, by 50 to 100 times. If I took that kind of dose, there is a good chance I’d develop seizures, brain swelling, or a coma. That’s if I were lucky; at very high doses, I might simply die. But you do have to start somewhere; these preliminary results were promising, but there was obviously a good deal of work to be done before the medication was ready for prime time.

Then the plot started to thicken. In 2021, a study was published by Popp M, Reis S, Schießer S, Hausinger RI, Stegemann M, Metzendorf MI, Kranke P, Meybohm P, Skoetz N, and Weibel S, entitled Ivermectin for Preventing and Treating COVID-19. As best I can tell, it was published in the Cochrane Database of Systematic Reviews, which is a highly respected, peer-reviewed journal that publishes meta-analyses and systematic reviews of medical research. But I hear you ask, what is a meta-analysis?

Good question. It is a way of combining the data from different studies to get a better idea of how the evidence trends overall. Let’s say I’m interested in whether yoga helps with high blood pressure. I go to several databases and look up all the articles I can find. I find that 20 show a statistically significant improvement in blood pressure for those who practice yoga regularly, 5 show no difference, and 5 show that it seems to make blood pressure a little worse. How do I make sense of this? (Stay tuned for a future post about how the results of a study can reach statistical significance and have absolutely no practical benefit. One example? Some studies show certain benefits from taking multivitamins, but from a practical standpoint, healthy adults who take a vitamin every day have the same rates of cancer, heart disease, and death from all causes as those who don’t. Given that about 30% of the US adult population takes vitamins, that’s roughly $10 billion a year.)
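For the curious, the “combining” step is mostly just weighted averaging. Here is a minimal Python sketch of inverse-variance pooling, the arithmetic at the heart of a simple fixed-effect meta-analysis; the three “yoga study” numbers are entirely invented for illustration:

```python
# Fixed-effect, inverse-variance meta-analysis: weight each study by
# 1/SE^2, so larger, more precise studies count for more.

def pooled_effect(effects, std_errors):
    """Return the pooled effect estimate and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies: change in systolic BP (mmHg); negative = improvement.
effects = [-5.0, -1.0, 2.0]    # two small studies disagree with one big one
std_errors = [1.0, 3.0, 3.0]   # the first study is far more precise

est, se = pooled_effect(effects, std_errors)
print(round(est, 2), round(se, 2))  # → -4.0 0.9
```

Notice that the big, precise study dominates the average: that is the whole point of weighting by quality and size rather than just counting how many studies landed on each side.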

Part of the challenge is developing a system to grade the quality of the research. One way of doing this would be to develop a yardstick for describing the quality of a study, like this:

| Grade Level | Meaning |
| --- | --- |
| High | Very confident of the results |
| Moderate | Likely accurate, but not certain |
| Low | Limited confidence |
| Very Low | Evidence is very uncertain |

So how do you decide how to grade a particular study and weight it along with other studies? There are many ways, but the main thing to consider is how the study was conducted.

So how do we apply this grading system? Let’s pretend I am a medical researcher. I think that garlic supplements may lower blood pressure. So I find 100 folks over 60 with high blood pressure. I randomly assign 50 participants to the treatment group, which receives garlic capsules, while the other 50 participants receive capsules that look identical but are actually placebos. They take their capsules for 10 weeks and get their blood pressure monitored daily. The folks in the study don’t know whether they are getting garlic or a placebo, and if the study is double-blinded, the researchers don’t know either until the data are analyzed. Double-blind studies are the best kind since they guard against bias on both sides. When the 10 weeks pass, the researchers compare the blood pressure readings and see if they are on to something. This would be an example of a high-quality study.
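The random-assignment step above is simple enough to sketch in a few lines of Python. This is a toy illustration with made-up participant IDs, not code from any actual trial:

```python
# Randomly split participants into a treatment group and a placebo group.
# Randomization is what makes the two groups comparable before treatment starts.
import random

def randomize(participant_ids, seed=0):
    """Shuffle participants, then split them evenly into two groups."""
    rng = random.Random(seed)   # fixed seed so the example is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]   # (garlic group, placebo group)

garlic_group, placebo_group = randomize(range(100))
print(len(garlic_group), len(placebo_group))  # → 50 50
```

Because assignment is left to chance, things like diet, exercise, and overall health consciousness end up roughly balanced between the two groups, which is exactly what the observational designs below cannot guarantee.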


Some examples of lower-quality studies? Well, let’s keep the garlic/high blood pressure example. 

  1. The researchers could interview 50 people who take garlic supplements or eat lots of garlic and check their blood pressure. They have lower blood pressure than average folks in their age group. Why is this lower quality?

These folks are self-reporting their garlic consumption. Who knows how much garlic they really eat? (Spoiler alert: This is why you should be skeptical about a good deal of the news about different diets and their effect on health. Those studies are mostly based on people self-reporting what they eat, and trust me, that is really dicey. Remember Dr. House: “Everybody lies.”) Also, there is no comparison group of people who don’t eat lots of garlic. How do you know if the garlic works?

The subjects self-selected when they saw the advertisement for a garlic study. Maybe they eat so much garlic because they are health conscious and also take supplements and work out 4 times a week. Is it the garlic or lifestyle choices? This kind of study is called “observational” because I’m not controlling who takes garlic or assigning people to groups—I’m just observing what’s already happening. If the garlic group has lower blood pressure, that’s interesting, but I can’t say garlic caused it. Maybe they also eat less salt, exercise more, or are just more health-conscious in general. These other factors are called confounding variables, and they make it hard to tell what’s really going on. That doesn’t mean these studies have no value, as they could point to the need for more rigorous research about garlic and blood pressure, but it would be a mistake for physicians to start prescribing garlic to their patients with high blood pressure on the basis of an observational study.
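The confounding problem can be made concrete with a toy simulation. In this made-up population, garlic eaters also tend to exercise, and it is the exercise alone that lowers blood pressure; every number here is invented:

```python
# Toy demonstration of confounding: garlic does NOTHING in this model,
# yet the garlic eaters still end up with lower average blood pressure.
import random

rng = random.Random(42)
people = []
for _ in range(10_000):
    health_conscious = rng.random() < 0.5
    eats_garlic = health_conscious and rng.random() < 0.8  # the confound
    exercises = health_conscious
    # Only exercise affects blood pressure in this simulated world.
    bp = 140 - (10 if exercises else 0) + rng.gauss(0, 5)
    people.append((eats_garlic, bp))

garlic_bp = [bp for g, bp in people if g]
other_bp = [bp for g, bp in people if not g]
avg = lambda xs: sum(xs) / len(xs)
print(round(avg(garlic_bp), 1), round(avg(other_bp), 1))
```

The garlic group looks about 8 mmHg healthier even though garlic has zero effect in the simulation. An observational study of this population would “find” a garlic benefit; a randomized trial would not, because randomization breaks the link between garlic and exercise.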

  2. Now imagine someone says, “I started eating a clove of garlic every day, and a month later my blood pressure went down!” Maybe they even post about it online and other people comment, saying it worked for them too.

That’s not really a study; it’s anecdotal evidence. It’s based on personal stories, not systematic research. We don’t know what else those people were doing. Maybe they also started walking more, eating better, or taking medication. There’s no control group, no consistent way of measuring anything, and no way to know if the garlic actually made a difference. Plus, most of us would be skeptical of many anecdotal claims without getting into the scientific weeds, to wit: (BTW, I just looked up “to wit” and found that it comes from the Old English “witan,” meaning “to know”; over time it came to mean “that is to say.” But again, I digress.)

  • “I drove 120 mph up Interstate 95 without my seatbelt fastened, and I’m still alive.”
  • “My grandmother smoked a pack of cigarettes a day for 60 years and lived to be 95, so smoking can’t be that bad for your health.” 
  • “I caught a cold and took an herbal supplement; the cold went away in 3 days, so it must be a cure for the common cold.”

Anecdotes can be interesting, and sometimes they lead to good research questions—but they’re not reliable evidence on their own. We need proper studies to figure out what’s really effective.

So you want to use high-quality studies if you are going to do a meta-analysis; otherwise, you have just combined a bunch of dubious data together and taken an average. So what do high-quality studies say about the efficacy of ivermectin for Covid?

Mortality and Severe Outcomes

  • No significant reduction in mortality: Multiple randomized controlled trials (RCTs) and meta-analyses found no statistically significant difference in all-cause mortality between ivermectin and placebo or standard care
  • Limited impact on hospitalization: A 2024 meta-analysis of 33 RCTs (10,489 participants) showed ivermectin reduced mechanical ventilation requirements, but subsequent studies found no clear reduction in hospitalizations or urgent care visits

Symptom Duration and Recovery

  • Minimal or no improvement:
    • A 2023 U.S. trial found ivermectin shortened time to sustained recovery by only about a day compared to placebo. In this high-quality study of 1,432 adults in the US who had COVID-19, people were randomly given either ivermectin or a placebo (a fake treatment). Most of the participants (83%) were already vaccinated. People who took ivermectin recovered in a median of 11 days, while people who took the placebo recovered in a median of 12 days. So, on average, ivermectin only made people feel better about 1 day sooner than the placebo. Moreover, the study found there was less than a 0.1% chance that ivermectin could shorten symptoms by more than 1 day. In other words, it’s extremely unlikely that ivermectin makes a meaningful difference in how quickly people recover from COVID-19.
    • A 2024 U.K. trial reported a roughly 2-day reduction in symptom duration (about 14 vs. 16 days) that was not considered clinically meaningful. The PRINCIPLE trial was a large, well-designed study in the UK. It tested whether existing drugs, including ivermectin, could help people with COVID-19 recover faster, avoid hospital, or reduce their risk of dying, especially those at higher risk. Here is what they found in simple terms:
      • People who took ivermectin recovered from COVID-19 in about 14 days.
      • People who got usual care (no ivermectin) recovered in about 16 days.
      • This 2-day difference was not considered meaningful for patients’ health.
      • Ivermectin did not reduce hospital admissions or deaths.
      • There were no improvements in longer-term health for those who took ivermectin.
      • Although ivermectin looked promising in early lab studies and small trials, this large, careful study showed it does not make a real difference for people with COVID-19, especially in a mostly vaccinated population.
      • The researchers recommend against using ivermectin to treat COVID-19 and state that further trials for this purpose are unnecessary, because the study demonstrated that it does not help people recover faster, avoid hospitalization, or survive COVID-19.

Guidelines and Recommendations

  • WHO, FDA, and IDSA advise against ivermectin’s use outside clinical trials due to insufficient evidence
  • NIH guidelines state there is no proven benefit, emphasizing risks of self-medication

Why Are We Still Debating This?

So that would seem to be pretty good evidence that ivermectin does little or nothing to help with COVID. So why are people still advocating for its use? Their arguments fall into a few main themes:

  1. Positive evidence for the use of ivermectin for COVID is being suppressed by a coalition of forces that includes big pharma, the deep state, and mainstream medical associations. They are supposedly doing this for several reasons:
    • Ivermectin is cheap, and pharmaceutical companies make more money peddling vaccines, so it’s the profit motive.
    • Government regulatory agencies are in cahoots.
  2. More and more, people seem to distrust the medical system and experts. Many believe that these experts are part of an “elitist” system that dismisses or censors alternative viewpoints and evidence. These folks often frame the issue as a battle for “medical freedom.”
  3. Then there is the whole conspiracy theory theme. Everybody who is not brainwashed by the elites “knows” that there is a coordinated conspiracy among politicians, pharmaceutical executives, scientists, and media to keep the public from learning about the supposed positive effects of ivermectin. They dismiss any new study that fails to demonstrate the effectiveness of ivermectin for treating COVID as just another example of this conspiracy to deny us the drug.
  4. Some folks have just made up their minds:
    • “Ivermectin works. There’s nothing that will persuade me.” (Dr. Tess Lawrie, British Ivermectin Recommendation Development Group). I don’t know about you, but I would be very reluctant to consult a physician who stated openly that nothing (not further studies, consensus, or a vision sent from the Almighty) would move them to change their views. What other potentially concerning ideas might they hold? They might still believe that gastric ulcers aren’t caused by H. pylori and have me eating nothing but milk toast. 

Enter the Cato Institute

Then there are the organizations that “cook the books” for their own reasons. An example of this, in my opinion, is an opinion piece on ivermectin published by the Cato Institute. In 2022, they published an article entitled Ivermectin and the TOGETHER Trial. The authors were Charles Hooper and David Henderson. Hooper holds a B.S. in computer science engineering from Santa Clara University and an M.S. in engineering-economic systems from Stanford University, and Henderson has a Ph.D. in economics from UCLA. Not physicians, but that doesn’t mean they can’t analyze data. Their article (not peer-reviewed) was a response to a study that has been referred to as the TOGETHER trial.

The results of the TOGETHER trial were originally published in The New England Journal of Medicine (NEJM) on May 5, 2022. The article is titled “Effect of Early Treatment with Ivermectin among Patients with Covid-19” and describes a large, randomized, placebo-controlled trial conducted in Brazil that studied over 1,300 adults who had contracted COVID and had at least 1 risk factor for severe disease. They were given ivermectin for 3 days. The researchers came to a number of conclusions:

1. They did not rule out the possibility of a small but clinically meaningless difference between the two groups.

2. There was no significant difference in mortality, viral clearance, or any other studied variable between the patients who were given ivermectin and those who did not receive it.

The article by Hooper and Henderson was highly critical of the TOGETHER study for various reasons:

  • They claimed that the TOGETHER study was being unfairly cited as proof that ivermectin didn’t help with covid
  • They concluded that some data showed that ivermectin might reduce hospitalizations, ventilator use and mortality
  • They used a novel Bayesian analysis to argue that there was a high chance that ivermectin was effective for treatment of COVID

I can almost hear you respond to the last point, “What the hell is a Bayesian analysis?” Bayesian probability is a way of thinking about probability as a measure of how strongly you believe something is true, based on both what you already know and any new evidence you get. When new information comes in, you update your belief to reflect this evidence. In simple terms, Bayesian probability is about updating your beliefs as you learn new facts. A simple example will help:

Let’s say you’re sitting at home and your smoke alarm starts beeping. What’s your first thought?

Is there a fire—or is the battery just dying?

 You might ask yourself a few questions:

– Do I smell smoke?

– Is the oven on?

– Has the alarm been chirping lately?

Even without realizing it, you’re doing Bayesian reasoning. You’re updating your beliefs based on new information.

 Let’s break it down.

 Step 1: Your Prior Belief (before the alarm went off)

You know that actual fires are rare in your house, but dead batteries are common.

 So maybe you think:

– 10% chance it’s a fire

– 90% chance it’s just the battery

Step 2: You Get New Evidence (the beeping starts)

Beeping alarms happen both during fires and with bad batteries. But now you walk into the kitchen and smell smoke.

 Step 3: You Update Your Belief

Now the odds shift. You think:

– “Huh, smoke plus the alarm? That makes it more likely there really is a fire.”

That’s how Bayesian probability works; we do it all the time, even if we aren’t aware of it and don’t always do it well.
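For the numerically inclined, the smoke-alarm update can be written out in a few lines of Python. The likelihoods here (how often you smell smoke during a real fire vs. a dead-battery false alarm) are numbers I invented for illustration:

```python
# Bayes' theorem: P(fire | smoke) = P(smoke | fire) * P(fire) / P(smoke)

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of the hypothesis given the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

prior_fire = 0.10  # Step 1: before the beeping, fires are rare
# Steps 2-3: you smell smoke. Assume you smell smoke in 90% of real
# fires but only 5% of battery false alarms (invented numbers).
posterior = update(prior_fire, 0.90, 0.05)
print(round(posterior, 2))  # → 0.67
```

So one whiff of smoke moves you from “10% chance it’s a fire” to roughly a two-in-three chance. Same evidence, same rule, bigger belief: that is the entire machinery.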

Now imagine the Cato authors already believed, going in, that ivermectin probably helps. That’s like starting with:

– “I think there’s a good chance this smoke alarm is legit.”

So when they saw some evidence pointing in ivermectin’s favor (like a small drop in hospitalization), they updated their belief to:

– “Now I’m even more confident this drug works.”

Hence the “91% chance it helped.”

But if someone else started with a bit more skepticism, say, believing ivermectin was unlikely to help, the same data might only move their belief a little. They might go from “10% likely” to “20%,” not 91%.
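That prior sensitivity is easy to demonstrate. In this sketch, the evidence is held fixed at a modest likelihood ratio of 2.25 in favor of “ivermectin works” (a number I chose for illustration; I have no idea what priors or likelihoods the Cato authors actually used):

```python
# Same data, two different starting beliefs, two very different conclusions.

def update(prior, p_data_if_works, p_data_if_not):
    """Bayes' theorem: posterior probability that the drug works."""
    numerator = prior * p_data_if_works
    return numerator / (numerator + (1 - prior) * p_data_if_not)

# P(data | works) vs. P(data | doesn't work): a likelihood ratio of 2.25.
evidence = (0.45, 0.20)

skeptic = update(0.10, *evidence)   # starts at 10% that it works
optimist = update(0.82, *evidence)  # starts already mostly convinced
print(round(skeptic, 2), round(optimist, 2))  # → 0.2 0.91
```

Identical evidence lands the skeptic at about 20% and the optimist at about 91%. If an analysis reports only the 91% without disclosing the starting point, it is telling you as much about the analysts as about the drug.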

The Point?

Bayesian reasoning is a great tool—but your conclusion depends on where you start. And in the Cato article, they don’t tell us where they started. That makes their “91% chance” sound more solid than it really is.

I don’t want to set up a straw man argument here. The Cato Institute critique isn’t the only study or opinion piece that suggests that ivermectin might be helpful in treating COVID or is critical of more mainstream studies that suggest that it isn’t worth taking. But it is a great example of how science can be subverted by motivated individuals with an agenda.

The Cato authors raise some fair concerns about how the trial was done. But their own arguments have serious flaws, and in my opinion, they know or should know better:

1. Highlighting Weak Results

– They focused on data showing small improvements — but those improvements weren’t statistically reliable

– In science, if a result isn’t clearly strong enough (not “statistically significant”), it might just be luck.

2. Using a Complex Method Without Explaining It

– They used a method that estimates probabilities of benefit, which can be helpful.

– But they didn’t explain the assumptions behind it, which means their results could be overly optimistic.

3. Ignoring Standard Rules

– They criticized the usual way scientists test whether something works.

– But they didn’t offer a clear alternative, which risks picking only the results they like.

4. Suggesting Bias Without Solid Proof

– They pointed to connections between researchers and big companies or foundations.

– It’s fair to ask questions, but implying bias without evidence can hurt trust in science overall.

5. Complaining About Trial Changes — Then Doing the Same

– They said the trial dropped some outcomes midway.

– But they themselves focus on results that weren’t the trial’s main goals, which is the same mistake.

It’s reasonable to say ivermectin deserves more research — but it’s not accurate to claim the TOGETHER trial proved it works.
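The “might just be luck” point above is worth seeing with your own eyes. In this toy simulation, a do-nothing drug is tested over and over on two identical groups; all the numbers (trial size, hospitalization rate) are invented:

```python
# Even a useless drug "looks better" in roughly half of small trials,
# purely by chance. This is why non-significant improvements prove nothing.
import random

rng = random.Random(1)

def fake_trial(n=300, true_rate=0.10):
    """Both arms share the SAME true hospitalization rate; any gap is noise."""
    treated = sum(rng.random() < true_rate for _ in range(n))
    control = sum(rng.random() < true_rate for _ in range(n))
    return treated < control  # True if the drug arm 'looks better'

looks_better = sum(fake_trial() for _ in range(1000))
print(looks_better)  # close to half of 1000 useless-drug trials look 'promising'
```

Cherry-picking the runs where the treated group happened to do better, and then building an argument on them, is exactly the move the significance threshold exists to prevent.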

I’ve mentioned Paul Meehl, one of the giants of American psychology, in an earlier post. He wrote a monograph back in 1973 entitled “Why I do not attend case conferences” that discussed problematic or weak reasoning he had seen in years of attending such conferences. Some of them are pretty informative while still being very funny. One example is what he called “Uncle George’s pancakes fallacy.” Let’s let the man speak for himself:

This is a variant of the “me too” fallacy, once removed; rather than referring to what anybody would do or what you yourself would do, you call to mind a friend or relative who exhibited a sign or symptom similar to that of the patient. For example, a patient does not like to throw away leftover pancakes and he stores them in the attic. A mitigating clinician says, “Why, there is nothing so terrible about that—I remember good ole Uncle George from my childhood, he used to store uneaten pancakes in the attic.” The proper conclusion from such a personal recollection is, of course, not that the patient is mentally well but that good ole Uncle George—whatever may have been his other delightful qualities—was mentally aberrated. The underlying premise in this kind of fallacious argument seems to be the notion that none of one’s personal friends or family could have been a psychiatric case, partly because the individual in question was not hospitalized or officially diagnosed and partly because (whereas other people may have crazy friends and relatives) I obviously have never known or been related to such persons in my private life. Once this premise is made explicit, the fallacy is obvious.

But one of Meehl’s other clinical fallacies is right on point when it comes to reviews such as the one done by the Cato authors. This is something he referred to as “Double standard of evidential morals.” Again, let’s let Dr. Meehl speak for himself:

I have no objection if professionals choose to be extremely rigorous about their standards of evidence, but they should recognize that if they adopt that policy, many of the assertions made in a case conference ought not to be uttered because they cannot meet such a tough standard. Neither do I have any objection to freewheeling speculation; I am quite willing to engage in it myself … You can play it tight, or you can play it loose. What I find objectionable in staff conferences is a tendency to shift the criterion of tightness so that the evidence offered is upgraded or downgraded in the service of polemical interests.

I think that commentaries such as the Cato study are exactly what Dr. Meehl was talking about.

Let’s summarize. To date, there is little or no evidence that ivermectin is helpful in the treatment of COVID. This has been shown again and again by high-quality studies and meta-analyses. Does that mean you shouldn’t take it if you come down with COVID? That’s a different question. If it is taken properly, it probably won’t harm you, and there is some anecdotal evidence that it might possibly help, but even if it is true, it doesn’t look like it helps very much. The research from large, carefully constructed studies shows that the COVID vaccines really do help with reducing infection, illness duration, and death from all causes, although they are not a miracle cure.

Here is a table that breaks this all down:

| Outcome | COVID-19 Vaccines | Ivermectin |
| --- | --- | --- |
| Infection Prevention | 33% effectiveness against ED/UC visits (first 3–4 months) | No statistically significant benefit in RCTs; early observational claims of 44–86% reduction later debunked |
| Hospitalization | 45–46% effectiveness in adults 65+ (first 3–4 months); 84–86% post-booster | No reduction in hospitalizations in large RCTs (PRINCIPLE trial: 0% benefit) |
| Death (COVID-19) | 84–93% effectiveness post-booster (first 3 months) | No mortality benefit in rigorous RCTs (Oxford study: 1 death vs. 0 in placebo) |
| Symptom Duration | Vaccinated patients recover faster; lower risk of Long COVID (3–4% vs. 10% baseline) | 1–2 day faster recovery (not clinically meaningful) in some trials |
| All-Cause Mortality | 61% lower risk vs. unvaccinated (IRR 0.39) | No data showing reduction; studies focused only on COVID-19 outcomes |
| Study Quality | Large real-world data (millions) + RCTs | Most supportive studies retracted/flawed; large RCTs (n=8,000+) show no benefit |
| Current Consensus | Recommended by WHO/CDC for all ages; reduces strain on healthcare systems | Not recommended by any major health agency; “no meaningful benefit” per Oxford |

Ultimately, if you decide you want to skip the vaccines and take ivermectin, that’s up to you. I just think you should have some idea of the odds, the data, and where that data is coming from. If you still think that Bill Gates, George Soros, the US government, and big pharma are engaged in a conspiracy, we’ll just have to disagree. But I feel I must ask, if you are leaning toward that idea, where are you getting that information? YouTube? Laura Ingraham? Joe Rogan? At least Joe Rogan pointed out the limitations of his knowledge by saying, “I’m not a doctor; I’m a f—ing moron,” and repeatedly reminded his audience to be skeptical and to consult with medical professionals rather than relying on his or his guests’ advice. So that’s one thing Joe and I agree on (consulting with your doctor, not necessarily the “f—ing moron” thing; I haven’t met the man).



Published by furthernewsfromtheshire

I'm a forensic psychologist/neuropsychologist based in Portsmouth, New Hampshire. My interests include travel, literature, martial arts, ukulele, blues harp, and sleight of hand. My blog started as a way to write about my trip to Japan in 2025; I discovered I like blogging about topics that catch my interest and decided to keep at it.
