It's Worms All the Way Down: The Ivermectin Thread

Drinking definitely helps with covid

5 Likes

I am so dumb that I actually feel like this is a resolvable point of contention here. I really do think you are missing a subtle point. Experiments are a kind of statistical inference, where you observe one thing and try to tease out what the observation means for the distribution of results generated by the process you’re observing. Concretely, you set up a process that “generates” patients with COVID who get dosed with a placebo and another process that generates COVID patients with alt-COVID cure #5. (To be specific, the process here is enrolling patients in the trial, giving them the pills, and watching them.) Then you take whatever you actually observed and do the complicated stats math to figure out what it means for the distributions.

Now, when you do the math, there are two mistakes you could make. If the distributions are the same, you could mistakenly conclude they are different. This is a Type I error. P-values are a way of judging how much risk of a Type I error you’re running if you reject the null hypothesis based on the data in front of you. If p is high, then the difference you are observing would be likely even if the distributions were the same, so it’s not good evidence against the null hypothesis.
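To make that concrete, here’s a toy sketch in Python of two simulated arms with no real difference and the p-value you’d compute (all of the numbers, including arm sizes and recovery times, are invented):

```python
# Toy version of the setup above: two simulated arms (placebo vs. a
# hypothetical treatment) drawn from the SAME distribution, and the
# p-value for the observed difference. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated recovery times in days; no true difference between arms.
placebo = rng.normal(loc=10.0, scale=3.0, size=200)
treated = rng.normal(loc=10.0, scale=3.0, size=200)

# Two-sample t-test: how surprising is a difference at least this large
# if both arms really come from the same distribution?
t_stat, p_value = stats.ttest_ind(treated, placebo)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A high p means the observed difference is unsurprising under the null,
# so rejecting the null here would court a Type I error.
```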

There are also Type II errors, which are failing to reject the null hypothesis when the null hypothesis is false. This is also an error! I completely take and agree with your points that there are good reasons to be cautious about medicine, and I am in no way suggesting that anyone should take ivermectin for COVID, because it probably doesn’t work. However, in the unlikely event that it does work, it’s an error not to be prescribing it, because if it works, then giving it to your patients would make them get better. It would be a justified error, but if a huge high-quality study came out tomorrow that said “Giving ivermectin to COVID patients gets them out of the hospital 1 day earlier,” you would look back with regret and wish that you had had this information earlier, even if you made perfect decisions with the information you had at the time.

The nuance with Type II errors is that you can’t sum up your risk of making them in a single number. The null hypothesis and the alternative hypothesis are not symmetrical. The null hypothesis is that the two distributions are the same, while the alternate is that they differ somehow (or maybe a bounded but still unquantified variant of “somehow,” like the mean of distribution A is lower than the mean of distribution B). That means you can’t come up with a single number to tell you how likely the experimental results are if the null hypothesis is false, because there are different answers depending on what the particular non-null distribution is.

In theory though, you could compute a big table of “reverse p-values” that tell you how likely you are to see your experimental results by chance if the real distribution is actually X. And scanning down that table would show you which concrete alternative hypotheses are “disproven” by your experiment (i.e., which hypotheses your results are strong evidence against) and which are consistent with your results, even if you don’t reject the null in favor of them. Generally speaking, a high-powered experiment will “disprove” (“disevidence”?) a broader range of alternative hypotheses when you’re not rejecting the null (or equivalently, it will usually reject the null when the truth is in a broader range).
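If it helps, here’s roughly what that table looks like in Python (normal approximation, with an arm size I made up): for each concrete alternative effect size, the chance that a trial of that size rejects the null.

```python
# Rough sketch of the "reverse p-value" table: for each concrete alternative
# (true effect size d, in standard-deviation units), how likely is a trial
# of this size to reject the null at alpha = 0.05? Normal approximation;
# the arm size is an assumption chosen purely for illustration.
import numpy as np
from scipy.stats import norm

n_per_arm = 150
alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)
se = np.sqrt(2.0 / n_per_arm)   # SE of the difference in means, in SD units

for d in [0.05, 0.1, 0.2, 0.3, 0.5]:
    z = d / se
    power = norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)
    print(f"true effect d = {d:.2f}: power = {power:.2f}")
# Where power is low, failing to reject the null says almost nothing about
# that alternative; where power is high, a null result is real evidence
# against it.
```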

I don’t actually know anything, or care at all, about ivermectin. My only point is that it’s possible and potentially useful to speak with a bit more nuance about what a particular experiment is evidence against rather than simply concluding that failure to reject the null => there is no effect. I completely accept that there may be other reasons to think this drug doesn’t work at all or that you apply somewhat different standards for the use of evidence in the practice of medicine. But I don’t really get why any of this is controversial at all. It’s literally all just Bayes’ Theorem.
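And since I invoked Bayes’ Theorem, here’s the toy version of the update, with a made-up 10% prior that the drug works; how much a null result should move you depends entirely on how well powered the study was.

```python
# Toy Bayes'-theorem update: how much a "failed to reject the null" result
# should shift your belief that the drug works depends on the study's power.
# The 10% prior is an arbitrary assumption for illustration.
def posterior_works_given_null(prior_works, power, alpha=0.05):
    p_null_if_works = 1 - power      # Type II rate: missing a real effect
    p_null_if_not = 1 - alpha        # correctly failing to reject a true null
    num = p_null_if_works * prior_works
    return num / (num + p_null_if_not * (1 - prior_works))

for power in (0.2, 0.8, 0.99):
    post = posterior_works_given_null(prior_works=0.10, power=power)
    print(f"power = {power:.2f}: P(works | null result) = {post:.3f}")
# An under-powered null result barely moves the prior; a well-powered one
# pushes it toward zero.
```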

6 Likes

So if every study shows no statistically significant benefit from a therapy, are you saying we should:

  • Not state that it has no benefit, because we might still be wrong,
  • State something more descriptive, like a statistical upper limit on its possible benefit, or
  • Something else I’m not understanding?
1 Like

Nuance makes sense when talking to other scientists. Nuance is a horrible line if we’re talking about trying to accomplish things within the context of American politics.

It sounds vaguely like you’re trying to re-invent a t-test.

How about when chatting on a forum where there’s zero chance anyone is taking ivermectin?

1 Like

It sort of depends. If a bunch of studies show a small but not significant benefit, you might be able to aggregate them into evidence that there is a benefit. Or maybe they show a bunch of noise and you can conclude that there really is no effect of any meaningful size. Or maybe all you can say is that there’s no evidence that the treatment works (and, as you say, you might be able to also note that the evidence rules out a large benefit).
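For the aggregation case, the usual move is inverse-variance (fixed-effect) pooling, roughly like this sketch with invented study numbers:

```python
# Minimal sketch of pooling several small studies, each reporting an effect
# estimate and a standard error, into one inverse-variance-weighted estimate.
# The study numbers below are invented purely for illustration.
import numpy as np
from scipy.stats import norm

# (effect estimate, standard error) for each hypothetical study
studies = [(0.15, 0.20), (0.10, 0.25), (0.20, 0.30), (0.05, 0.18)]

effects = np.array([e for e, _ in studies])
ses = np.array([s for _, s in studies])
weights = 1.0 / ses**2

pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
p = 2 * (1 - norm.cdf(abs(pooled / pooled_se)))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}, p = {p:.3f}")
# Individually non-significant studies can pool into a significant estimate,
# or into a tight interval around zero that rules out a large benefit.
```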

Only if they understand that nuance is bad in a political context.

@bobman0330

Do you understand that you’re not drafting a PSA for the CDC here?

1 Like

Yessir.

1 Like

I don’t have any experience with medical stuff, but I have a lot of experience with empirical research in other settings, and I feel like there are a bunch of commonalities. And I have had the very-not-fun experience of trying to get a “no statistical results” paper through the publication process, so I’ve actually thought about this a decent amount.

IMO, there are 2 views you could take with regard to research and evidence in the context of non-significant results, and I think those two views are driving the divide in this thread:

The first view is, in my experience, the standard one. You ask, “Does this treatment work?” So you’re asking if there’s any evidence that the treatment conveys some benefit (call it b), and statistically you’re asking whether you can confidently say that b>0. Then you estimate b in some kind of experiment. But you recognize that this is just an estimate, so you’re not concluding you have precisely determined the true (unobservable) b. Instead, you specify some certainty threshold (say, 95%), and you say that you’re reasonably confident b is within some range [b(low), b(high)]. If b(low) is less than 0, you say, “We cannot conclude that there is a net benefit to this treatment.” And maybe your decision rule is that you don’t take costly action without such evidence.
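In code, that first view looks roughly like this (simulated data, with a made-up true effect and arm size):

```python
# Sketch of the first view with simulated data: estimate b, put a 95%
# confidence interval around it, and check whether b(low) clears zero.
# The true effect (0.4 days) and arm sizes are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(10.0, 3.0, 120)   # recovery time, days
treated = rng.normal(9.6, 3.0, 120)    # assumed small true benefit

b_hat = control.mean() - treated.mean()          # estimated reduction in days
se = np.sqrt(control.var(ddof=1) / len(control) +
             treated.var(ddof=1) / len(treated))
z = stats.norm.ppf(0.975)
b_low, b_high = b_hat - z * se, b_hat + z * se
print(f"b = {b_hat:.2f} days, 95% CI [{b_low:.2f}, {b_high:.2f}]")
if b_low <= 0:
    print("Cannot conclude there is a net benefit to this treatment.")
```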

But you shouldn’t stop there; there’s more information to be had! The second view would say, “There is definitely some treatment effect b. Maybe that effect is negative (and the treatment is actually harmful), maybe it’s beneficial, or maybe it’s very close to 0. But there’s definitely some true effect b, and it would be useful to know that value.” The question is what we can learn about that effect b from the experiment even if b(low) is less than 0. The way that I think about this situation is by saying, “Yes, you’ve estimated an insignificant effect, but how precisely have you estimated that insignificant effect?”

So here’s where you have to make a subjective judgment about what would constitute a meaningful practical (as opposed to a statistically significant) effect. And this obviously depends on your context. Let’s say you’re measuring recovery time. You might say something like, “If I knew this treatment would reduce recovery time by a full day, I would definitely pursue it. On the other hand, if I knew it would reduce recovery time by 40 seconds, I wouldn’t pursue it, even if I was confident that b(low) was greater than 0 (i.e., there was a statistically significant reduction in recovery time).”

So now the new question is, “What have you ruled out from your experiment?” Is your experiment so under-powered that your b(high) estimate is greater than your level of practical significance? If so, your experiment wasn’t particularly useful because all you can conclude is, “We can’t confidently say that this treatment is beneficial, but we also can’t confidently say that it’s not.” And that’s the kind of unhelpful statement that just implies further testing and can be used opportunistically by anyone to justify anything. BUT, if you have a well-powered test you can make the much more powerful statement of, “We can’t confidently say that this treatment is beneficial. Moreover, if there IS a benefit to this treatment, we can confidently say that the benefit is so small that the treatment is not worth prescribing or studying further.”
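The decision logic at the end of that comparison is simple enough to write out; the 1-day threshold and the interval endpoints below are just placeholders.

```python
# The decision logic of that last step, sketched out. The 1-day threshold
# of practical significance and the interval endpoints are placeholders.
practical_benefit = 1.0        # days saved that would justify prescribing
b_low, b_high = -0.2, 0.4      # 95% CI from a hypothetical well-powered trial

if b_low > 0:
    print("Statistically significant benefit.")
elif b_high < practical_benefit:
    print("No significant benefit, and any benefit that does exist is too "
          "small to be worth prescribing or studying further.")
else:
    print("Under-powered: can't confirm a benefit, but can't rule out a "
          "meaningful one either.")
```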

So if your prior/decision rule is “take no action without good evidence”, you fall in the first camp and you can make your decision once you know that “there is no statistical evidence that this treatment is beneficial”.

But if your prior/decision rule is “do not prohibit a particular treatment without good evidence”, you fall in the second camp, and you wouldn’t ban treatment or engage in mass communication advising against that treatment unless you know that “there is statistical evidence that there is no meaningful benefit to the treatment.”

7 Likes

Micro, I realize you really want to get that swipe at me. Here’s the problem. The nuance attempted is wrong.

3 Likes

You’re wrong about both of those things. How is that a swipe at you? The forum is full of people who pretend that what is said here is some kind of political statement and end up claiming “What about the lurkers? Won’t anyone think about the lurkers?” Maybe you fit in that group a little, dunno, but you’re hardly the main proponent of this-forum-is-a-really-important-political-platform.

Not everything is about you. (that was a shot at you)

The medical bias against unproven therapies is a core ethical principle and a pillar of evidence-based medicine. You don’t just go giving people strange substances in the hope that they work. Either you arrange a proper clinical trial (which is big, expensive, and complicated, so you’d better have something from animals and/or cell culture to make people believe the odds of success are high enough to be worth it), or you don’t give them things. The number of substances that are harmless but ineffective is countless, and the number of things that charlatans will sell you that are ineffective without regard for safety is even higher.

If you think people flipping from “lol horse paste” to “huh, looks like ivermectin actually works” in the face of a conclusive result from a randomized, controlled clinical trial undermines their credibility, that reflects poorly on you, not on them. That’s exactly what the process is supposed to be: skepticism and refusal to administer the substance up to the point that a proper trial shows a conclusive result, and then changing one’s mind in light of new information.

This is nonsense. The mRNA vaccines were put into an RCCT to determine if they worked, based on ample preliminary data suggesting the odds were good. It would definitely be unethical to start jabbing people with vaccine candidates outside of that context, before they were shown to work.

6 Likes

I don’t think Johnny was suggesting the vaccines should be given before rigorous safety testing, although admittedly I’m having increasing difficulty figuring out wtf anyone is talking about here lately.

6 Likes

Post full of strawmanning.

3 Likes

IOW, post full of strawmanning.

The practical difference between “not proven to work” and “proven not to work” is zero from an end-of-the-line, patient-care perspective, unless you’re setting up a clinical trial. Waxing about the epistemological difference between the two is just masturbatory.

1 Like

I don’t think Wookie is strawmanning anyone, it just seems like an honest miscommunication.

This site is masturbatory. It’s not a PSA for the CDC. People can wax about stuff here without being told to shut up and stop killing people with their pro-ivermectin posting or whatever whosnext said.

1 Like