A couple of things about this study that I don’t think were brought up last time.
First, I’m almost certain that there’s no way to compare the post-COVID measurements to the pre-COVID measurements, because there are no pre-COVID imaging-based measurements for these individuals. So it’s possible that these individuals already had elevated measures of whatever is being measured, and that COVID didn’t actually make them worse. Of course, this is why the authors use a control group that they attempt to match on relevant risk characteristics, so they can say, “Well, the COVID patients would probably look like this control group in the absence of having experienced COVID.” But this is also why random testing is so powerful, and why causal inference in the presence of selection is so difficult: it seems plausible to me that underlying heart measures (i.e., elevated heart measures that weren’t reported as symptomatic, but would have been detectable with the imaging used in this study) would be higher among the group of patients that got tested.
Here’s how that might play out: COVID interacts with the heart/lungs in some way such that people with a previously undiagnosed (and presumably minor) respiratory issue are more likely to experience COVID symptoms and/or have other reasons to get tested for COVID. If true, you’d expect to observe exactly what they find in this study: that heart measures are elevated for the COVID group. But it wouldn’t be because COVID caused the heart issues. Instead, it would be because the heart issues interacted with COVID to oversample people with pre-existing heart issues into the tested group.
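Here’s a minimal simulation of that story (purely illustrative: the heart-measure scale, the logistic testing probability, and every parameter are made up). By construction, COVID has zero effect on the heart in this toy world, yet the tested group still shows elevated measures:

```python
import math
import random

random.seed(0)

N = 100_000
tested_vals, untested_vals = [], []
for _ in range(N):
    # Latent, undiagnosed heart measure (higher = worse); arbitrary units.
    heart = random.gauss(0.0, 1.0)
    # COVID does NOTHING to the heart here. But people with worse underlying
    # heart measures are assumed more likely to feel symptoms and seek a
    # test -- the selection mechanism described above.
    p_tested = 1.0 / (1.0 + math.exp(-(heart - 1.0)))
    if random.random() < p_tested:
        tested_vals.append(heart)
    else:
        untested_vals.append(heart)

mean = lambda xs: sum(xs) / len(xs)
tested_mean = mean(tested_vals)
untested_mean = mean(untested_vals)
print(f"tested ('COVID group') mean heart measure:  {tested_mean:.2f}")
print(f"untested (control-like) mean heart measure: {untested_mean:.2f}")
```

Comparing the tested group against any control drawn from the untested population shows “elevated” heart measures, even though the disease had zero effect in this simulation.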
I’m not saying that this is definitely what happened. (I think it’s likely that COVID does have some kind of long-term effects among some proportion of people who test positive.) What I’m saying is that this stuff is hard. I routinely have papers rejected at academic journals precisely because of this kind of selection issue, where the conclusions that you might want to draw simply aren’t possible based on the data you have.
The second issue is that there are two ways to interpret this “duration since diagnosis” statistic. One is scary, one not so much.
The scary interpretation is that if the detectable damage is not correlated with time since diagnosis, that suggests the damage is permanent. That’s scary. What you’d like to see is evidence that the damage is high immediately after diagnosis, but then lessens over time.
The less-scary version forces you to (again) remember the selection issue. Who in this sample is going to have the longest time since diagnosis? The people who were most severely ill: the hospitalized patients. That’s because the study required that patients’ symptoms had resolved. In other words, hospitalized individuals will have taken the longest to recover (and thus have more days since diagnosis), so it isn’t terribly surprising if those kinds of patients show greater damage. But that doesn’t rule out the possibility that damage does actually decline/reverse over time. It’s just that in this study, an over-time decline would be offset by the higher proportion of more serious cases in the high days-since-diagnosis group.
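That offsetting can be simulated too. In the toy model below (all numbers invented), damage genuinely decays with a ~60-day half-life, but severe cases start with more damage and take longer for their symptoms to resolve, so they enter the sample with more days since diagnosis:

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, hand-rolled to keep this self-contained."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

days_all, dmg_all = [], []    # full (cross-sectional) sample
days_mild, dmg_mild = [], []  # mild cases only
for _ in range(50_000):
    severe = random.random() < 0.3                          # 30% hospitalized
    initial_damage = random.gauss(3.0 if severe else 1.0, 0.3)
    # Severe cases take longer to recover, so more days pass before they
    # qualify for a "symptoms resolved" study.
    days = max(7.0, random.gauss(90.0 if severe else 40.0, 10.0))
    # Damage truly declines over time: exponential decay, 60-day half-life.
    damage_now = initial_damage * math.exp(-days * math.log(2) / 60.0)
    days_all.append(days)
    dmg_all.append(damage_now)
    if not severe:
        days_mild.append(days)
        dmg_mild.append(damage_now)

overall_corr = corr(days_all, dmg_all)
mild_corr = corr(days_mild, dmg_mild)
print(f"corr(days since dx, damage), full sample: {overall_corr:+.2f}")
print(f"corr(days since dx, damage), mild only:   {mild_corr:+.2f}")
```

Within the mild group alone the correlation is negative (damage really does fade), but pooling both severity groups flips the sign, so a flat or even positive cross-sectional correlation can’t distinguish permanent damage from gradual recovery.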
Short story of my opinion, before it gets misinterpreted: COVID is scary. Probably has long-term consequences for some people. There’s absolutely no way we can quantify those consequences with the current data.