I don’t get your math here. Are you suggesting that the 5% there is half of the 10% rather than 5% of the 10%?
If the test gives a 5% false positive rate, the true positive rate is 5%, and the sample tested is representative of the population, then the results would show 10% positive. Unless there was some significant percentage of false negatives too. But I think false negatives for the antibody tests are not a concern for some reason.
Are you sure that’s what “false positive” means? Seems like it would mean that 5% of the positive results are false. E.g., if the true positive rate were 5% and it had a 5% false positive rate, then 5.26% would have tested positive. (5.26 x .95 (the percentage of accurate tests) = 5)
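A quick sanity check of that arithmetic, as a sketch in Python. It assumes the interpretation being debated here: that “5% false positive rate” means 5% of the positive results are false.

# Interpretation under debate: 5% of POSITIVE RESULTS are false,
# so 95% of positive results are true positives.
tested_positive = 5.26   # % of the population testing positive
accurate_share = 0.95    # assumed share of positive results that are true
true_positive = tested_positive * accurate_share
print(true_positive)     # ~5.0, matching the "5.26 x .95 = 5" claim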
Interestingly, I haven’t seen the conspiracy theory bandied about by the Chinese Foreign Affairs Minister being debunked yet - the one where the virus came from a US government facility which was closed down ~September 2019, just before the US germ warfare games in SK during ~Oct 2019 (many online US articles about the facility’s closure due to lax security have since been removed).
Like couldn’t someone say the test has 5% false positives while 1% of the population has tested positive? That would be impossible if you took it to mean 5% of the population got a false positive, which is the way you (and I think jman) are taking it.
Pretty sure, yes. But not 100%. I see what you’re saying though so I will try to confirm with certainty.
My understanding of how these tests report a false positive rate is that a 5 percent false positive rate equates to 5 people out of 100 who have taken the test testing positive for antibodies without really having had COVID. Is that not what that number signifies?
Edit: If it instead signifies that 5 percent of the positive tests are false, then obviously there’s a big difference.
That’s the way I understand it, which would mean that 95% of the people who test positive get an accurate positive. Which would mean that if 5.26% of people tested positive, 5% would actually be positive.
According to Wikipedia’s definition of false positive rate in statistics, that is wrong. It’s false positives versus actual negatives, not false positives versus actual positives.
Rabbit hole…
Hundreds of athletes from the US military were in Wuhan for the Military World Games in October 2019.
Part of the problem is misleading semantics when it comes to accuracy. As an extreme example, one could tout a test as “detecting 100% of cases” if the test simply reports “Yes” 100% of the time. One needs to look at the true positive %, true negative %, and false positive % to fully gauge how accurate the test is. The Wikipedia definition seems reasonable.
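To make that concrete, here’s a minimal sketch in Python, with made-up numbers, showing how the “always says Yes” test really does detect 100% of cases while being useless:

# Hypothetical population: 5% truly positive, 95% truly negative.
positives, negatives = 500, 9500

# A "test" that reports Yes for everyone:
true_pos, false_neg = positives, 0   # catches every real case...
false_pos, true_neg = negatives, 0   # ...by flagging everyone

sensitivity = true_pos / (true_pos + false_neg)           # 1.0: "detects 100% of cases"
specificity = true_neg / (true_neg + false_pos)           # 0.0
false_positive_rate = false_pos / (false_pos + true_neg)  # 1.0, per the Wikipedia definition
print(sensitivity, specificity, false_positive_rate)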
Looks like the vast majority of students at my school will not be returning this school year. Graduating students will have to return. However, I finish teaching them before the day they have to go back. Unless I’m informed otherwise, I only have to go back for the 1st and 2nd week of June to administer school-leaving exams for the graduates. Everything else I can do online.
Know what it’s time for?
Thanks. There was an article today about the Lazaretto house near the Philly airport, so we took a drive to check it out. It was built in 1799 and served as a quarantine house for 100 years for ship passengers coming up the Delaware. Yellow fever and typhus patients, mostly. Think today it’s township offices.
Here is the river facing side.
Here is covid patient Dan
Hmm, I didn’t see this post, but I guess you and jman were right. Seems weird.
10000 tested
1000 tested positive (10% testing positive)
x actual positive
(1000 - x)/(10000 - x) = .05 (ratio of negative events wrongly categorized as positive to actual negatives = 5%)
1000 - x = 500 - .05x
500 = .95x
x = 526
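That algebra checks out numerically; here’s a quick Python sketch of the same steps, using the numbers from the example above:

tested, tested_positive, fpr = 10000, 1000, 0.05

# Solve (tested_positive - x) / (tested - x) = fpr for x, the actual positives:
# tested_positive - x = fpr*tested - fpr*x  =>  x = (tested_positive - fpr*tested) / (1 - fpr)
x = (tested_positive - fpr * tested) / (1 - fpr)
print(round(x))  # 526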
I saw you deleted your last post, so I deleted mine saying you were wrong.
(I posted it and then noticed you had deleted.)
Let’s stick to posting US propaganda
I figured that. I had a hard time accepting that the denominator was the true negatives rather than the true positives.
True/false positive/negative are just what they sound like. False positive is test positive but person is negative.
Short of DS dropping in to explain (and misspell) Bayes’ rule, there is a tweet with a decent explanation of the issue up thread.
It’s maybe counterintuitive, but in a nutshell: the smaller the prevalence (of what you’re testing for) in the whole population, the more likely the positives in the tested sample are to be false.
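A small sketch of that effect in Python, assuming a hypothetical test that is 95% sensitive and 95% specific (numbers picked for illustration, not from any real antibody test):

sensitivity, specificity = 0.95, 0.95  # assumed test characteristics

for prevalence in (0.20, 0.05, 0.01):
    true_pos = prevalence * sensitivity           # share of population: correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # share of population: wrongly flagged
    share_false = false_pos / (true_pos + false_pos)  # fraction of positives that are false
    print(f"prevalence {prevalence:.0%}: {share_false:.0%} of positives are false")

With these assumed numbers, at 20% prevalence only about 17% of positives are false, but at 1% prevalence it’s about 84%, which is the base-rate point the tweet upthread was making.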
So then, is there any reason to believe the testing is so inaccurate that if 1000/10000 people test positive only 526 people are actually positive?
Well in any case, your posts made me think about it from a different perspective rather than just assuming. And that caused me to look it up and know for sure, so that’s a good thing.