Don’t forget that the study showing a statistically insignificant “benefit” changed its primary outcome after the original outcome failed.
Literally everyone here agrees on this. Chris, Bob, Ikes, etc… everyone.
I initially read the whole thing as mostly a messaging argument, but apparently that’s not quite it either.
https://twitter.com/ddale8/status/1434991544306704395
this is how 95% of news works
Trumpers engage in disdainful mockery that is alienating rather than persuasive. I say, tit for tat.
There is a kernel of truth to that, in that uninsured people who can’t afford healthcare and medication in our hellscape of a healthcare system do try to substitute cheaper animal medication.
It is, uh, not a validation of conservative governance or culture, nor is it applicable to this example.
People do take fish antibiotics. However, ime it is usually in response to a bacterial infection. Not to ward off the measles or what have you.
s. It is extremely common. Disdainful mockery of that practice by urban elites is alienating, not persuasive. It is also udderly at odds with how
You trailed off so I FYP
I’m pretty sure this point had already been made. Or is irrelevant.
My take on the CN vs. Bobman, someone else, and Microbet derail:
CN: Studies show no significant evidence that something works, therefore it doesn’t work.
Others: Studies don’t show conclusively that it doesn’t work.
CN: There are tons of studies that don’t confirm something doesn’t work; it’s a waste of resources to show that. All that matters is whether a study shows that the treatment does work.
Others: But we can’t say conclusively that it doesn’t work.
CN: Yes, we can. Nothing works until it can be proven to work. Drinking beer doesn’t help with covid and neither does Ivermectin.
Others: You can’t prove…
CN: Go fuck yourself
Drinking definitely helps with covid
I am so dumb that I actually feel like this is a resolvable point of contention here. I really do think you are missing a subtle point. Experiments are a kind of statistical inference, where you observe one thing and try to tease out what the observation means for the distribution of results generated by the process you’re observing. Concretely, you set up a process that “generates” patients with COVID who get dosed with a placebo and another process that generates COVID patients with alt-COVID cure #5. (To be specific, the process here is enrolling patients in the trial, giving them the pills, and watching them.) Then you take whatever you actually observed and do complicated stats math to figure out what it means for the distributions.
Now, when you do the math, there are two mistakes you could make. If the distributions are the same, you could mistakenly conclude they are different. This is a Type I error. P-values are a way of judging how much risk you run of making a Type I error if you reject the null hypothesis based on the data in front of you. If p is high, then the difference you are observing would be likely even if the distributions were the same, so it’s not good evidence against the null hypothesis.
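To make the two-processes picture concrete, here’s a toy simulation in Python. Every number in it is invented (hospital-stay lengths, group sizes, all of it) and has nothing to do with any real trial; it just draws both arms from the same distribution and runs a two-sample test, which is exactly the situation where a “significant” result would be a Type I error.

```python
# Toy version of the "two processes" framing: both arms are drawn from the SAME
# distribution (the null is true), then a two-sample t-test looks for a difference.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

placebo   = rng.normal(loc=10.0, scale=3.0, size=200)  # days in hospital, placebo arm
treatment = rng.normal(loc=10.0, scale=3.0, size=200)  # same distribution: no real effect

t_stat, p_value = stats.ttest_ind(treatment, placebo)
print(f"observed difference in means: {treatment.mean() - placebo.mean():.2f} days")
print(f"p-value: {p_value:.3f}")

# With alpha = 0.05, roughly 1 rerun in 20 (with a different seed) comes out
# "significant" even though the two processes are identical -- a Type I error.
```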
There are also Type II errors, which are failing to reject the null hypothesis when the null hypothesis is false. This is also an error! I completely take and agree with your points that there are good reasons to be cautious about medicine, and I am in no way suggesting that anyone should take ivermectin for COVID, because it probably doesn’t work. However, in the unlikely event that it does work, it’s an error not to be prescribing it, because if it works, then giving it to your patients would make them get better. It would be a justified error, but if a huge high-quality study came out tomorrow that said “Giving ivermectin to COVID patients gets them out of the hospital 1 day earlier,” you would look back with regret and wish that you had had this information earlier, even if you made perfect decisions with the information you had at the time.
The nuance with Type II errors is that you can’t sum up your risk of making them in a single number. The null hypothesis and the alternative hypothesis are not symmetrical. The null hypothesis is that the two distributions are the same, while the alternate is that they differ somehow (or maybe a bounded but still unquantified variant of “somehow,” like the mean of distribution A is lower than the mean of distribution B). That means you can’t come up with a single number to tell you how likely the experimental results are if the null hypothesis is false, because there are different answers depending on what the particular non-null distribution is.
In theory though, you could compute a big table of “reverse p-values” that tell you how likely you are to see your experimental results by chance if the real distribution is actually X. And scanning down that table would show you which concrete alternative hypotheses are “disproven” by your experiment (i.e., which hypotheses your results are strong evidence against) and which are consistent with your results, even if you don’t reject the null in favor of them. Generally speaking, a high-powered experiment will “disprove” (“disevidence”?) a broader range of alternative hypotheses when you’re not rejecting the null (or equivalently, it will usually reject the null when the truth is in a broader range).
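And here’s a rough version of that “reverse p-value” table, again with invented numbers: for a grid of concrete alternative hypotheses (true reductions in hospital stay), estimate by simulation how often a trial of this size would reject the null. High power plus a null result is what lets you say a given effect size has been “disevidenced.”

```python
# Sketch of the "reverse p-value" table: for each hypothetical true benefit,
# estimate the power of a 200-per-arm trial at alpha = 0.05 by simulation.
# Power = 1 - Type II error rate at that particular alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm, sd, alpha, n_sims = 200, 3.0, 0.05, 2000

for true_benefit in [0.25, 0.5, 1.0, 2.0]:   # hypothetical reductions, in days
    rejections = 0
    for _ in range(n_sims):
        placebo   = rng.normal(10.0, sd, n_per_arm)
        treatment = rng.normal(10.0 - true_benefit, sd, n_per_arm)
        _, p = stats.ttest_ind(treatment, placebo)
        rejections += p < alpha
    print(f"true benefit {true_benefit:4} days -> power ~ {rejections / n_sims:.2f}")

# Typical pattern: power is poor for a 0.25-day benefit (a null result says little
# about it) and near 1.0 for a 2-day benefit (a null result argues strongly against it).
```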
I don’t actually know anything, or care at all, about ivermectin. My only point is that it’s possible and potentially useful to speak with a bit more nuance about what a particular experiment is evidence against rather than simply concluding that failure to reject the null => there is no effect. I completely accept that there may be other reasons to think this drug doesn’t work at all or that you apply somewhat different standards for the use of evidence in the practice of medicine. But I don’t really get why any of this is controversial at all. It’s literally all just Bayes’ Theorem.
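Since it’s literally all just Bayes’ Theorem, here’s the back-of-the-envelope version, with every probability simply assumed for the sake of illustration: a prior chance the drug works, the chance a trial comes back null in each world, and what a null result does to your belief.

```python
# Toy Bayes update on a null trial result. Every number here is an assumption
# picked for illustration, not an estimate about ivermectin or anything else.
prior_works = 0.10                     # assumed prior: 10% chance the drug works
p_null_given_works     = 0.20          # assumed Type II error rate (trial power = 80%)
p_null_given_not_works = 0.95          # 1 - alpha, for a 5% false-positive rate

p_null = (p_null_given_works * prior_works
          + p_null_given_not_works * (1 - prior_works))
posterior_works = p_null_given_works * prior_works / p_null
print(f"P(drug works | null result) = {posterior_works:.3f}")   # ~0.023 with these inputs
```

The point is just that a null result from a well-powered trial moves the posterior a lot, while the same null result from an underpowered one barely moves it off the prior.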
So if every study shows no statistically significant benefit from a therapy, are you saying we should:
- Not state that it has no benefit, because we might still be wrong,
- State something more descriptive, like a statistical upper limit on its possible benefit (see the sketch after this list), or
- Something else I’m not understanding?
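For concreteness, the second option might look something like this; the numbers are made up and it’s only a sketch of the idea (the upper end of a two-sided 95% confidence interval on the difference in means):

```python
# Sketch of reporting an upper limit on the possible benefit instead of a flat
# "no benefit". Fake data: both arms drawn from the same distribution.
import numpy as np

rng = np.random.default_rng(2)
placebo   = rng.normal(10.0, 3.0, 200)   # days in hospital
treatment = rng.normal(10.0, 3.0, 200)   # no real effect in this fake data

benefit = placebo.mean() - treatment.mean()          # positive = treatment shortens stay
se = np.sqrt(placebo.var(ddof=1) / len(placebo)
             + treatment.var(ddof=1) / len(treatment))
upper = benefit + 1.96 * se                          # upper end of a two-sided 95% CI

print(f"estimated benefit: {benefit:.2f} days")
print(f"95% CI upper bound: {upper:.2f} days")
# i.e. "whatever the benefit is, it is very unlikely to be bigger than ~X days"
```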
Nuance makes sense when talking to other scientists. Nuance is a horrible line if we’re talking about trying to accomplish things within the context of American politics.
It sounds vaguely like you’re trying to re-invent a t-test.
How about when chatting on a forum where there’s zero chance anyone is taking ivermectin?
It sort of depends. If a bunch of studies show a small but not significant benefit, you might be able to aggregate them into evidence that there is a benefit. Or maybe they show a bunch of noise and you can conclude that there really is no effect of any meaningful size. Or maybe all you can say is that there’s no evidence that the treatment works (and, as you say, you might be able to also note that the evidence rules out a large benefit).
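The “aggregate them” case is basically a meta-analysis. Here’s a minimal fixed-effect (inverse-variance) pooling sketch with invented effect sizes and standard errors, just to show how several individually non-significant studies can add up to a clear signal, or to a pooled interval that rules out any large benefit:

```python
# Minimal fixed-effect (inverse-variance) meta-analysis. The per-study effects
# (days saved) and standard errors below are invented for illustration.
import numpy as np
from scipy import stats

effects = np.array([0.4, 0.2, 0.5, 0.3])    # each study non-significant on its own
ses     = np.array([0.3, 0.25, 0.35, 0.3])  # standard errors of those estimates

weights   = 1.0 / ses**2
pooled    = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))

print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI), p = {p:.3f}")
# With these made-up inputs the pooled estimate is significant even though no
# single study was; with noisier inputs the pooled CI would instead pin the
# effect close to zero.
```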
Only if they understand that nuance is bad in a political context.