Drinking definitely helps with COVID
I am so dumb that I actually feel like this is a resolvable point of contention here. I really do think you are missing a subtle point. Experiments are a kind of statistical inference, where you observe one thing and try to tease out what the observation means for the distribution of results generated by the process you're observing. Concretely, you set up a process that "generates" patients with COVID who get dosed with a placebo and another process that generates COVID patients with alt-COVID cure #5. (To be specific, the process here is enrolling patients in the trial, giving them the pills, and watching them.) Then you observe whatever you actually observed and do complicated stats math to figure out what it means for the distributions.
Now, when you do the math, there are two mistakes you could make. If the distributions are the same, you could mistakenly conclude they are different. This is a Type I error. P-values are a way of judging how much risk you run of making a Type I error if you reject the null hypothesis based on the data in front of you. If p is high, then the difference you are observing would be likely even if the distributions were the same, so it's not good evidence against the null hypothesis.
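To make that concrete, here's a quick simulation sketch (the sample sizes and recovery times are made up for illustration): simulate a world where the null is true, run the trial many times, and count how often a t-test "finds" a difference anyway.

```python
# Sketch of the Type I error idea: simulate many trials where the null is
# TRUE (placebo and "cure" outcomes come from the same distribution) and
# count how often a t-test rejects at p < 0.05. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_patients, alpha = 10_000, 100, 0.05

false_positives = 0
for _ in range(n_trials):
    placebo = rng.normal(loc=10.0, scale=3.0, size=n_patients)  # days sick
    treated = rng.normal(loc=10.0, scale=3.0, size=n_patients)  # same distribution!
    _, p = stats.ttest_ind(placebo, treated)
    false_positives += p < alpha

# Prints roughly 0.05: by construction, alpha is the Type I error rate.
print(false_positives / n_trials)
```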
There are also Type II errors, which are failing to reject the null hypothesis when the null hypothesis is false. This is also an error! I completely take and agree with your points that there are good reasons to be cautious about medicine, and I am in no way suggesting that anyone should take ivermectin for COVID, because it probably doesn't work. However, in the unlikely event that it does work, it's an error not to be prescribing it, because if it works, then giving it to your patients would make them get better. It would be a justified error, but if a huge high-quality study came out tomorrow that said "Giving ivermectin to COVID patients gets them out of the hospital 1 day earlier," you would look back with regret and wish that you had had this information earlier, even if you made perfect decisions with the information you had at the time.
The nuance with Type II errors is that you can't sum up your risk of making them in a single number. The null hypothesis and the alternative hypothesis are not symmetrical. The null hypothesis is that the two distributions are the same, while the alternative is that they differ somehow (or maybe a bounded but still unquantified variant of "somehow," like the mean of distribution A is lower than the mean of distribution B). That means you can't come up with a single number to tell you how likely the experimental results are if the null hypothesis is false, because there are different answers depending on what the particular non-null distribution is.
In theory, though, you could compute a big table of "reverse p-values" that tell you how likely you are to see your experimental results by chance if the real distribution is actually X. Scanning down that table would show you which concrete alternative hypotheses are "disproven" by your experiment (i.e., which hypotheses your results are strong evidence against) and which are consistent with your results, even if you don't reject the null in favor of them. Generally speaking, a high-powered experiment that fails to reject the null will "disprove" ("disevidence"?) a broader range of alternative hypotheses (or, equivalently, it will usually reject the null when the truth lies in a broader range).
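If you actually wanted to sketch that table, it might look something like this (a normal-approximation power calculation; the per-arm sample size, outcome SD, and effect grid are all invented): for each candidate true effect, compute the chance the experiment would reject the null. Alternatives with power near 1 are effectively ruled out by a null result; alternatives with low power are still live.

```python
# Rough sketch of the "table of reverse p-values": for each candidate true
# effect size, how likely is the experiment to reject the null at
# alpha = 0.05? Normal approximation; n, sigma, and the grid are made up.
import numpy as np
from scipy.stats import norm

n, sigma, alpha = 100, 3.0, 0.05      # patients per arm, outcome SD (invented)
se = sigma * np.sqrt(2.0 / n)         # standard error of the difference in means
z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value

for effect in [0.25, 0.5, 1.0, 1.5, 2.0]:  # candidate true differences, in days
    z = effect / se
    power = norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)
    print(f"true effect = {effect:4.2f} days -> power ~ {power:.2f}")
```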
I don't actually know anything, or care at all, about ivermectin. My only point is that it's possible and potentially useful to speak with a bit more nuance about what a particular experiment is evidence against, rather than simply concluding that failure to reject the null => there is no effect. I completely accept that there may be other reasons to think this drug doesn't work at all, or that you apply somewhat different standards for the use of evidence in the practice of medicine. But I don't really get why any of this is controversial at all. It's literally all just Bayes' Theorem.
So if every study shows no statistically significant benefit from a therapy, are you saying we should:
- Not state that it has no benefit, because we might still be wrong,
- State something more descriptive, like a statistical upper limit on its possible benefit, or
- Something else I'm not understanding?
Nuance makes sense when talking to other scientists. Nuance is a horrible line if we're talking about trying to accomplish things within the context of American politics.
It sounds vaguely like you're trying to re-invent a t-test.
How about when chatting on a forum where there's zero chance anyone is taking ivermectin?
It sort of depends. If a bunch of studies show a small but not significant benefit, you might be able to aggregate them into evidence that there is a benefit. Or maybe they show a bunch of noise and you can conclude that there really is no effect of any meaningful size. Or maybe all you can say is that there's no evidence that the treatment works (and, as you say, you might be able to also note that the evidence rules out a large benefit).
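For the first case, here's a rough sketch of that aggregation idea, a fixed-effect (inverse-variance) meta-analysis; the per-study estimates and standard errors are invented so that each study is individually non-significant:

```python
# Pool several individually non-significant estimates with a fixed-effect
# (inverse-variance) meta-analysis. All numbers below are invented.
import numpy as np
from scipy.stats import norm

estimates = np.array([0.40, 0.30, 0.50, 0.35])  # per-study benefit, each p > 0.05
std_errs  = np.array([0.30, 0.25, 0.35, 0.28])

weights = 1.0 / std_errs**2
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
p = 2 * norm.sf(abs(pooled / pooled_se))

# Four noisy "no effect" studies can add up to decent evidence of an effect.
print(f"pooled effect = {pooled:.2f} +/- {pooled_se:.2f}, p = {p:.3f}")
```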
Only if they understand that nuance is bad in a political context.
Do you understand that you're not drafting a PSA for the CDC here?
Yessir.
I don't have any experience with medical stuff, but I have a lot of experience with empirical research in other settings, and I feel like there are a bunch of commonalities. And I have had the very-not-fun experience of trying to get a "no statistical results" paper through the publication process, so I've actually thought about this a decent amount.
IMO, there are two views you could take with regard to research and evidence in the context of non-significant results, and I think those two views are driving the divide in this thread:
The first view is, in my experience, the standard one. You ask, "Does this treatment work?" So you're asking if there's any evidence that the treatment conveys some benefit (call it b), and statistically you're asking whether you can confidently say that b > 0. Then you estimate b in some kind of experiment. But you recognize that this is just an estimate, so you're not concluding you have precisely determined the true (unobservable) b. Instead, you specify some certainty threshold (say, 95%), and you say that you're reasonably confident b is within some range [b(low), b(high)]. If b(low) is less than 0, you say, "We cannot conclude that there is a net benefit to this treatment." And maybe your decision rule is that you don't take costly action without such evidence.
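In code, that first view might look something like this (simulated trial data; the sample sizes, true effect, and noise level are all invented):

```python
# Sketch of the first view: estimate the benefit b from (invented) trial
# data, form a 95% confidence interval [b_low, b_high], check b_low > 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
placebo = rng.normal(10.0, 3.0, size=80)  # recovery time in days (made up)
treated = rng.normal(9.6, 3.0, size=80)   # true benefit of 0.4 days (made up)

b_hat = placebo.mean() - treated.mean()   # estimated benefit
se = np.sqrt(placebo.var(ddof=1) / 80 + treated.var(ddof=1) / 80)
t_crit = stats.t.ppf(0.975, df=158)       # equal-variance df; close enough here
b_low, b_high = b_hat - t_crit * se, b_hat + t_crit * se

if b_low > 0:
    print(f"b in [{b_low:.2f}, {b_high:.2f}]: evidence of a net benefit")
else:
    print(f"b in [{b_low:.2f}, {b_high:.2f}]: cannot conclude a net benefit")
```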
But you shouldn't stop there - there's more information to be had! The second view would say, "There is definitely some treatment effect b. Maybe that effect is negative (and the treatment is actually harmful), maybe it's good, or maybe it's very close to 0. But there's definitely some true effect b, and it would be useful to know that value." The question is what we can learn about that effect b from the experiment even if b(low) is less than 0. The way that I think about this situation is by saying, "Yes, you've estimated an insignificant effect, but how precisely have you estimated that insignificant effect?"
So here's where you have to make a subjective judgment about what would constitute a meaningful practical (as opposed to a statistically significant) effect. And this obviously depends on your context. Let's say you're measuring recovery time. You might say something like, "If I knew this treatment would reduce recovery time by a full day, I would definitely pursue it. On the other hand, if I knew it would reduce recovery time by 40 seconds, I wouldn't pursue it, even if I was confident that b(low) was greater than 0 (i.e., there was a statistically significant reduction in recovery time)."
So now the new question is, "What have you ruled out from your experiment?" Is your experiment so under-powered that your b(high) estimate is greater than your level of practical significance? If so, your experiment wasn't particularly useful, because all you can conclude is, "We can't confidently say that this treatment is beneficial, but we also can't confidently say that it's not." And that's the kind of unhelpful statement that just implies further testing and can be used opportunistically by anyone to justify anything. BUT, if you have a well-powered test, you can make the much more powerful statement of, "We can't confidently say that this treatment is beneficial. Moreover, if there IS a benefit to this treatment, we can confidently say that the benefit is so small that the treatment is not worth prescribing or studying further."
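Here's a minimal sketch of that decision logic, assuming an invented practical-significance threshold of one day of recovery time:

```python
# "What have you ruled out?" logic. The threshold and intervals are invented.
def interpret(b_low: float, b_high: float, practical: float = 1.0) -> str:
    if b_low > 0:
        return "statistically significant benefit"
    if b_high < practical:
        # Well-powered null result: even the optimistic end of the interval
        # is smaller than anything we'd care about in practice.
        return "no evidence of benefit, AND any benefit is too small to matter"
    # Under-powered null result: consistent with no benefit AND with a
    # practically meaningful one.
    return "no evidence of benefit, but a meaningful benefit is not ruled out"

print(interpret(b_low=-0.3, b_high=0.4))  # well-powered: rules out a big effect
print(interpret(b_low=-0.5, b_high=2.1))  # under-powered: rules out nothing
```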
So if your prior/decision rule is "take no action without good evidence," you fall in the first camp, and you can make your decision once you know that "there is no statistical evidence that this treatment is beneficial."
But if your prior/decision rule is "do not prohibit a particular treatment without good evidence," you fall in the second camp, and you wouldn't ban the treatment or engage in mass communication advising against it unless you know that "there is statistical evidence that there is no meaningful benefit to the treatment."
Micro, I realize you really want to get in that swipe at me. Here's the problem: the nuance attempted is wrong.
You're wrong about both of those things. How is that a swipe at you? The forum is full of people who pretend that what is said here is some kind of political statement and end up claiming "What about the lurkers? Won't anyone think about the lurkers?" Maybe you fit in that group a little, dunno, but you're hardly the main proponent of this-forum-is-a-really-important-political-platform.
Not everything is about you. (that was a shot at you)
There is a media bias, not a scientific or medical bias, for anything pro-vaccine and anti-alternative.
The medical bias against unproven therapies is a core ethical principle and a pillar of evidence-based medicine. You don't just go giving people strange substances in the hope that they work. You either arrange a proper clinical trial, which is big and expensive and complicated, so you'd better have something from animals and/or cell culture to make people believe the odds of success are high enough that it's worth it, or you don't give them things. The number of substances that are harmless but ineffective is countless, and the number of things that charlatans will sell you that are ineffective without regard for safety is even higher.
If you think people flipping from "lol horse paste" to "huh, looks like ivermectin actually works" in the face of a conclusive result from a randomized, controlled clinical trial undermines their credibility, that reflects poorly on you, not on them. That's exactly what the process is supposed to be: skepticism and refusal to administer the substance up to the point that a proper trial shows a conclusive result, and then changing one's mind in light of new information.
So yeah, "unproven" means "doesn't work." When it comes to the real world, that's clearly not true. mRNA vaccines worked before the first trial began, before there was evidence, because they work.
This is nonsense. The mRNA vaccines were put into an RCCT to determine if they worked, based on ample preliminary data suggesting the odds were good. It would definitely be unethical to start jabbing people with vaccine candidates outside of that context before they were shown to work.
I don't think Johnny was suggesting the vaccines should be given before rigorous safety testing, although admittedly I'm having increasing difficulty figuring out wtf anyone is talking about here lately.
It would definitely be unethical to start jabbing people with vaccine candidates outside of that context before they were shown to work.
Post full of strawmanning.
I don't think Johnny was suggesting the vaccines should be given before rigorous safety testing, although admittedly I'm having increasing difficulty figuring out wtf anyone is talking about here lately.
IOW, post full of strawmanning.
The practical difference between "not proven to work" and "proven not to work" is zero from an end-of-the-line, patient-care perspective, unless you're setting up a clinical trial. Waxing about the epistemological difference between the two is just masturbatory.
IOW, post full of strawmanning.
I don't think Wookie is strawmanning anyone; it just seems like an honest miscommunication.
The practical difference between "not proven to work" and "proven not to work" is zero from an end-of-the-line, patient-care perspective, unless you're setting up a clinical trial. Waxing about the epistemological difference between the two is just masturbatory.
This site is masturbatory. It's not a PSA for the CDC. People can wax about stuff here without being told to shut up and stop killing people with their pro-ivermectin posting or whatever whosnext said.