2022 LC Thread—New Year, New Thread

Fucking hell

As they stood on her porch, the officers told Wallace that her son could have been kidnapped and sex trafficked. “‘You don’t see much sex trafficking where you are, but where I patrol in downtown Waco, we do,’” said one of the cops, according to Wallace.

This statement struck her as odd.

“They were basically admitting that this is a safe neighborhood,” she says.

The officer then asked Wallace whether she would let her son walk home again, now that she knew about the sex trafficking.

“I still didn’t know it was illegal and I said, ‘I don’t know,’” says Wallace. “That’s when the cop replied, ‘Okay, I’m going to have to arrest you.’”

Clearly her big crime was failing to just defer to his authority and say yes. Cops HATE that. Deference is the whole reason they become cops in the first place.

Also, right-wing cops buy into the whole sex-trafficking myth that random kids are being kidnapped and trafficked, when it's almost always someone known to the child, usually a relative, and the child is usually in their teens.

A woman one block away had called the cops to report a boy walking outside alone. That lady had actually asked Aiden where he lived, verified that it was just down the street, and proceeded to call nonetheless. The cops picked up Aiden on his own block.

Child services had the family agree to a safety plan, which meant Wallace and her husband could not be alone with their kids for even a second. Their mothers—the children's grandmothers—had to visit and trade off overnight stays in order to guarantee the parents were constantly supervised. After two weeks, child services closed Wallace's case, finding the complaint was unfounded.

4 Likes

My parents would have been locked up for life. One time my mom left me at the mall when I was 9 or 10. It was several miles from home.

Walked a mile home from school most days. Just weird how the world has changed. I was thinking about it last night: I almost never see kids just out and about in the wild. Yet when we were kids, there would be kids all over the place from sunup to sundown.

As for the story, I have seen too many cops on social media support these bs sex trafficking tropes.

2 Likes

I walked a mile to school in kindergarten, including across a busy four-lane street in front of our house. My Grandpa was supposed to walk me across that road and make sure I met up with some other kids. But he was a drunk, and sometimes I missed the other kids.

By 2nd grade I was walking a half mile to a different school on my own every day. It really wasn’t a big deal back then.

2 Likes

How did we get here? The first year I didn't regularly walk to school was when I was 12 and in eighth grade. I remember summers where all the kids were outside all day and only went home for lunch, supper, and when it got dark.

1 Like

It’s not a big deal NOW except for the fucking busybody Karens who “see a child walking alone” and call the fucking cops.

It’s many times safer now than it was back then, too.

We had a county supervisor candidate who ran on a very good point: restoring school bus routes and letting kids walk or bike to school would improve safety, infrastructure, and the environment by eliminating those infuriating mile-long drop-off lines.

2 Likes

Has bus service ended everywhere? I thought it was an LA thing. I assume rural areas still have it.

We have it where I am, but many of the north county cities/suburbs (where she ran) have gotten rid of it.

She lost. The busing argument was much too high-level for your average brain-dead voter.

https://twitter.com/joshnbcnews/status/1592647482236096512?s=46&t=De7Kq1BZPsuc1vKh8EjMpA

https://twitter.com/joshnbcnews/status/1592647557963997184?s=46&t=De7Kq1BZPsuc1vKh8EjMpA

2 Likes

I don’t know where to put this comment so I’m putting it here: Remember when news broke about the first photo of a black hole? Well, let me put it this way: Why did they only take one photo of it?

1 Like

Kids are not allowed to be independent in the US

3 Likes

Because they didn’t? The “photo” was an amalgam of many images taken by radio telescopes across the world. The interesting part wasn’t the image itself (surprise, it’s black), but the algorithm used to combine them. Or was this a point about something altogether different?

1 Like

I wish I’d grown up in one of those stoop neighborhoods in Brooklyn or Philly, where everyone’s always outside, neighbors look out for each other’s kids, old biddies sit in the window and watch everyone. At least that’s how I imagine them.

When I lived in SF kids took public transportation to school. I assume they still do? But somehow walking a block in a Waco suburb isn’t safe.

Wow. Incredible. So cool. Someone made an algorithm that layers photos together. An algorithm that already existed for a long time. Golf clap for it. Ladies’ clap for it. Tennis clap for it.

If you’re actually interested, here’s some reading.

The researchers had to develop sophisticated new tools that accounted for the gas movement around Sgr A*. While M87* was an easier, steadier target, with nearly all images looking the same, that was not the case for Sgr A*. The image of the Sgr A* black hole is an average of the different images the team extracted, finally revealing the giant lurking at the centre of our galaxy for the first time.

The effort was made possible through the ingenuity of more than 300 researchers from 80 institutes around the world that together make up the EHT Collaboration. In addition to developing complex tools to overcome the challenges of imaging Sgr A*, the team worked rigorously for five years, using supercomputers to combine and analyse their data, all while compiling an unprecedented library of simulated black holes to compare with the observations.

Though scientists had theorized they could image black holes by capturing their silhouettes against their glowing surroundings, the ability to image an object so distant still eluded them. A team formed to take on the challenge, creating a network of telescopes known as the Event Horizon Telescope, or the EHT. They set out to capture an image of a black hole by improving upon a technique that allows for the imaging of far-away objects, known as Very Long Baseline Interferometry, or VLBI.

Each telescope used for the EHT had to be highly synchronized with the others to within a fraction of a millimeter using an atomic clock locked onto a GPS time standard. This degree of precision makes the EHT capable of resolving objects about 4,000 times better than the Hubble Space Telescope. As each telescope acquired data from the target black hole, the digitized data and time stamp were recorded on computer disk media. Gathering data for four days around the world gave the team a substantial amount of data to process. The recorded media were then physically transported to a central location because the amount of data, around 5 petabytes, exceeds what the current internet speeds can handle. At this central location, data from all eight sites were synchronized using the time stamps and combined to create a composite set of images, revealing the never-before-seen silhouette of M87*’s event horizon. The team is also working on generating an image of Sagittarius A* from additional observations made by the EHT.
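
(Aside: if you want a feel for what that "synchronized using the time stamps and combined" step involves, here's a toy numpy sketch of the core idea: recover the unknown delay between two noisy recordings of the same signal by cross-correlating them. The sample counts, delay, and noise level are all made up; real correlators do this per baseline, at fractional-sample precision, on petabytes.)

```python
import numpy as np

# Two sites record noisy copies of the same "sky" signal, offset by an
# unknown delay. Cross-correlating and finding the peak recovers it.
rng = np.random.default_rng(0)

n = 4096
true_delay = 37                            # samples (made up)
common = rng.normal(size=n + 100)          # the shared signal

site_a = common[:n] + 0.5 * rng.normal(size=n)
site_b = common[true_delay:true_delay + n] + 0.5 * rng.normal(size=n)

xcorr = np.correlate(site_a, site_b, mode="full")
lags = np.arange(-(n - 1), n)
print(lags[np.argmax(xcorr)])              # -> 37
```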

https://spectrum.ieee.org/the-inside-story-of-the-first-picture-of-a-black-hole

The 1.3-mm wavelength, considerably shorter than the norm in VLBI, also pushed the limits of our technology. The receivers we used converted the 230-GHz signals down to a more manageable frequency of around 2 GHz. But to get as much information as we could about M87*, we recorded both right- and left-hand circular polarizations at two frequencies centered around 230 GHz. As a result, our instrumentation had to sample and store four separate data streams pouring in at the prodigious rate of 32 gigabits per second at each of the participating telescopes.
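
(Worth a back-of-the-envelope check: the 32 Gb/s per station is from the article, and if you assume something like ten hours of recording per night over the four nights at seven stations -- the hours are my guess -- you land right around the "5 petabytes" quoted elsewhere.)

```python
# Rough data-volume estimate. Only the 32 Gb/s figure is from the
# article; hours per night is an assumption for the sketch.
rate_bits_per_s = 32e9      # per telescope, from the article
hours_per_night = 10        # assumed track length
nights = 4                  # April 2017 campaign
stations = 7                # participating telescopes

total_bytes = rate_bits_per_s / 8 * hours_per_night * 3600 * nights * stations
print(f"{total_bytes / 1e15:.1f} PB")   # -> ~4 PB, same ballpark as 5 PB
```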

Interferometry works only if you can precisely align the peaks in the signal recorded at each pair of telescopes, so the short wavelength also required us to install hydrogen-maser atomic clocks at each site that could sample the signal with subpicosecond accuracy. We used GPS signals to time-stamp the observations.

On four nights in April 2017, everything had to come together. Seven giant telescopes (some of them multidish arrays) pointed at the same minuscule point in the sky. Seven maser clocks locked into sync. A half ton of helium-filled, 6- to 8-terabyte hard drives started spinning. I, along with a few dozen other bleary-eyed scientists, sat at our screens in mountaintop observatories hoping that clouds would not roll in. Because of the way interferometry works, we would immediately lose 40 percent of our data if cloud cover or technical issues forced even one of the telescopes to drop out.
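
(That dropout math makes more sense once you remember an interferometer measures telescope pairs, not telescopes: with 7 stations there are 21 baselines, and losing one station kills 6 of them at a stroke. The quoted 40 percent is presumably also weighted by how sensitive the lost station is, which a raw pair count ignores.)

```python
from math import comb

# One visibility per telescope pair: C(n, 2) baselines total, and a
# single station dropout removes n - 1 of them.
for n in range(4, 9):
    total = comb(n, 2)
    lost = n - 1
    print(f"{n} stations: {total:2d} baselines, "
          f"one dropout loses {lost} ({lost / total:.0%})")
```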

But the heavens smiled on us. By the end of the week, we were preparing 5 petabytes of raw data for shipment to MIT Haystack Observatory and the Max Planck Institute for Radio Astronomy, in Germany. There, researchers, using specially designed supercomputers to correlate the signals, aligned data segments in time. Then, to counter the phase-distorting influence of turbulent atmosphere above each telescope, we used purpose-built adaptive algorithms to perform even finer alignment, matching signals to within a trillionth of a second.

Now we faced another giant challenge: distilling all those quadrillions of bytes of data down to kilobytes of actual information that would go into an image we could show the world.

Nowhere in those petabytes of data were numbers we could simply plot as a picture. The “lens” of our telescope was a tremendous amount of software, which drew heavily on open-source packages and now is available online so that anyone can replicate or improve on our results.

Radio interferometry is relatively straightforward when you have many telescopes close together, aimed at a bright source, and observing at long wavelengths. The rotation of Earth during the night causes the baselines connecting pairs of the telescopes to sweep through a range of angles and effective lengths, filling in the space of possible measurements. After the data is collected, you line up the signals, extract a two-dimensional spatial-frequency pattern from the variations in amplitude and phase among them, and then do an inverse Fourier transform to convert the 2D frequency pattern into a 2D picture.
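
(Here's that paragraph as a toy numpy sketch: take a made-up "sky," sample its 2D Fourier transform along a few fake baseline arcs, and inverse-transform what you kept. Everything here -- the ring, the track radii, the grid size -- is invented for illustration; real data also needs all the calibration described below.)

```python
import numpy as np

npix = 64
y, x = np.mgrid[:npix, :npix] - npix // 2

# A made-up "sky": a small ring, standing in for a black hole shadow.
sky = ((x**2 + y**2 > 100) & (x**2 + y**2 < 200)).astype(float)

vis = np.fft.fftshift(np.fft.fft2(sky))     # the full visibility plane

# Sparse (u,v) coverage: arcs swept out as Earth rotates (toy tracks).
mask = np.zeros(vis.shape, dtype=bool)
theta = np.linspace(0, np.pi, 400)
for r in (6, 11, 17, 24):                   # a few baseline lengths
    u = (npix // 2 + r * np.cos(theta)).astype(int)
    v = (npix // 2 + r * np.sin(theta)).astype(int)
    mask[v, u] = True

# Keep only the sampled visibilities and invert: the "dirty image".
dirty = np.fft.ifft2(np.fft.ifftshift(vis * mask)).real
print(f"kept {mask.sum()} of {mask.size} Fourier samples")
```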

VLBI is a lot harder, especially when observing with just a handful of dishes at a short wavelength, as we were for M87*. Perfect calibration of the system was impossible, though we used an eighth telescope at the South Pole to help with that. Most problematic were differences in weather, altitude, and humidity at each telescope. The atmospheric noise scrambles the phase of the incoming signals.

The problem we faced in observing with just seven telescopes and scrambled phases is a bit like trying to make out the tune of a duet played on a piano on which most of the keys are broken and the two players start out of sync with each other. That’s hard—but not impossible. If you know what songs typically sound like, you can often still work out the tune.

It also helps that the noise scrambles the signal in an organized way that allows us to exploit a terrific trick called closure quantities. By multiplying correlated data from each pair in a trio of telescopes in the right order, we are able to cancel out a big chunk of the noise, though at the cost of adding some complexity to the problem.
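
(The closure trick is easy to demo numerically. Assume each station corrupts every baseline it touches with its own phase error; the product of the three visibilities around a closed triangle -- the bispectrum -- is immune, because the station terms cancel in pairs. The phase values below are arbitrary numbers for the demo.)

```python
import numpy as np

rng = np.random.default_rng(2)

true_phase = {"AB": 0.7, "BC": -1.1, "CA": 0.9}   # source structure (made up)
station_err = {s: rng.uniform(-np.pi, np.pi) for s in "ABC"}  # atmosphere etc.

def observed(bl):
    """Unit-amplitude visibility with per-station phase corruption."""
    i, j = bl
    return np.exp(1j * (true_phase[bl] + station_err[i] - station_err[j]))

bispectrum = observed("AB") * observed("BC") * observed("CA")
print(np.angle(bispectrum))                       # -> 0.5, errors cancelled
print(sum(true_phase.values()))                   # -> 0.5, the same
```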

The longest and shortest baselines in our telescope network set the limits of our resolution and field of view, and they were limited indeed. In effect, we could reconstruct a picture 160 microarcseconds wide (equivalent to 44 billionths of a degree on the sky) with roughly 20 microarcseconds of resolution. Literally an infinite number of images could fit such a data pattern. Somehow we would have to pick—and decide how confident to be in our choice.
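
(Those two numbers follow from the diffraction limit, theta ~ lambda / B. Plugging in the 1.3-mm wavelength and an Earth-diameter-scale longest baseline -- an assumption; the real longest EHT baseline is somewhat shorter -- lands close to the quoted resolution.)

```python
import math

wavelength = 1.3e-3        # metres, from the article
longest_baseline = 1.27e7  # metres, ~Earth's diameter (assumed upper bound)

theta_rad = wavelength / longest_baseline
microarcsec = math.degrees(theta_rad) * 3600 * 1e6
print(f"{microarcsec:.0f} microarcseconds")   # -> ~21, vs "roughly 20"
```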

To avoid fooling ourselves, we created lots of images from the M87* data, in lots of different ways, and developed a rigorous process—well beyond what is typically done in radio astronomy—to determine whether our reconstructions were reliable. Every step of this process, from initial correlation to final interpretation, was tested in multiple ways, including by using multiple software pipelines.

Before we ever started collecting data, we created a computer simulation of our telescope network and all the various sources of error that would affect it. Then we fed into this simulation a variety of synthetic images—some derived from astrophysical models of black holes and others we had completely made up, including one loosely based on Frosty the Snowman.

Next, we asked various groups to reconstruct images from the synthetic observations generated by the simulation. So we turned images into observations and then let others turn those back into images. The groups all produced pictures that were fuzzy and a bit off, but in the ballpark. The similarities and differences among those pictures taught us which aspects of the image were reliable and which spurious.
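
(A sketch of what such a blind-test loop might look like, with a trivial stand-in for the observe-and-reconstruct step and normalized cross-correlation as the score. The real tests compared full imaging pipelines with far more careful metrics; this just shows the shape of the exercise.)

```python
import numpy as np

rng = np.random.default_rng(3)

def score(truth, recon):
    """Normalized cross-correlation: 1.0 means a perfect match."""
    t = (truth - truth.mean()) / truth.std()
    r = (recon - recon.mean()) / recon.std()
    return float((t * r).mean())

y, x = np.mgrid[:64, :64] - 32
truths = {
    "ring": ((x**2 + y**2 > 100) & (x**2 + y**2 < 200)).astype(float),
    "disk": (x**2 + y**2 < 200).astype(float),
    "blob": np.exp(-(x**2 + y**2) / 100.0),
}

for name, truth in truths.items():
    # Stand-in for "observe, then reconstruct": the truth plus noise.
    recon = truth + 0.3 * rng.normal(size=truth.shape)
    print(f"{name}: score = {score(truth, recon):.2f}")
```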

In June 2018, researchers on the imaging team split into four squads to work in complete isolation on the real data we had collected in 2017. For seven weeks, each squad worked incommunicado to make the best picture it could of M87*.

Two squads primarily used an updated version of an iterative procedure, known as the CLEAN algorithm, which was developed in the 1970s and has since been the standard tool for VLBI. Radio astronomers trust it, but in cases like ours, where the data is very sparse, the image-generation process often requires a lot of manual intervention.
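
(For reference, the textbook Högbom form of CLEAN fits in a few lines: repeatedly find the brightest pixel in the residual image, subtract a scaled copy of the instrument's point-source response -- the "dirty beam" -- at that spot, and record the subtracted flux as a point-source component. This is a bare-bones sketch, not the production VLBI variant with all its manual steering.)

```python
import numpy as np

def hogbom_clean(dirty, beam, gain=0.1, n_iter=200, threshold=1e-3):
    """Minimal Hogbom CLEAN; `beam` must be square with an odd side."""
    half = beam.shape[0] // 2
    padded = np.pad(dirty.astype(float), half)   # residual, with margins
    interior = padded[half:-half, half:-half]    # view onto the residual
    components = np.zeros(dirty.shape)
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(np.abs(interior)), interior.shape)
        peak = interior[y, x]
        if abs(peak) < threshold:
            break
        components[y, x] += gain * peak          # record a point source
        padded[y:y + beam.shape[0], x:x + beam.shape[1]] -= gain * peak * beam
    return components, interior

# Toy usage: one off-centre point source observed with a Gaussian beam.
h = 8
yy, xx = np.mgrid[-h:h + 1, -h:h + 1]
beam = np.exp(-(xx**2 + yy**2) / 8.0)
dirty = np.zeros((64, 64))
dirty[20:20 + 2 * h + 1, 30:30 + 2 * h + 1] += 2.0 * beam
comps, resid = hogbom_clean(dirty, beam)
print(comps.sum(), np.abs(resid).max())   # -> ~2.0 recovered, tiny residual
```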

Drawing on my experience with image reconstruction in other fields, my collaborators and I developed a different approach for the other two squads. It is a kind of forward modeling that starts with an image—say, a fuzzy blob—and uses the observational data to modify this starting guess until it finds an image that looks like an astronomical picture and has a high probability of producing the measurements we observed.
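
(Here's that forward-modeling idea in schematic numpy form: start from a fuzzy blob and run gradient descent on the pixels until the image's Fourier transform matches the sampled visibilities, with a small quadratic penalty standing in for the "looks like an astronomical picture" prior. This is just the shape of the approach, not the collaboration's software; the sampling mask and step sizes are invented.)

```python
import numpy as np

rng = np.random.default_rng(4)
npix = 32

y, x = np.mgrid[:npix, :npix] - npix // 2
truth = ((x**2 + y**2 > 25) & (x**2 + y**2 < 64)).astype(float)

mask = rng.random((npix, npix)) < 0.15      # sparse (u,v) sampling (made up)
data = np.fft.fft2(truth) * mask            # the "measured" visibilities

image = np.exp(-(x**2 + y**2) / 50.0)       # initial guess: a fuzzy blob
step, alpha = 0.05, 1e-3

for _ in range(500):
    resid = (np.fft.fft2(image) - data) * mask     # misfit on sampled points
    grad = np.fft.ifft2(resid).real + alpha * image
    image = np.clip(image - step * grad, 0, None)  # keep flux non-negative

print(f"final misfit: {np.abs((np.fft.fft2(image) - data) * mask).sum():.2f}")
```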

I’d seen this technique work well in other contexts, and we had tested it countless times with synthetic EHT data. Still, I was stunned when I fed the M87* data into our software and, within minutes, an image appeared: a fuzzy, asymmetrical ring, brighter on the bottom. I couldn’t help worrying, though, that the other groups might come up with something quite different.

On 24 July 2018, a group of about 40 EHT members reconvened in a conference room in Cambridge, Mass. We each put up our best images—and everyone immediately started clapping and laughing. All four were rings of about the same diameter, asymmetrical, and brighter on the bottom.

We knew then that we were going to succeed, but we still had to demonstrate that we hadn’t all just injected a human bias favoring a ring into our software. So we ran the data through three separate imaging pipelines and performed image reconstruction with a wide range of prototype images of the kind that we worried might fool us. In one of the imaging pipelines, we ran hundreds of thousands of simulations to systematically select roughly 1,500 of the best settings.

At the end, we took a high-scoring image from each of the three pipelines, blurred them to the same resolution, and took the average. The resulting picture made the front page of newspapers around the world.
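
(That last step is simple enough to show directly. A sketch assuming a plain Gaussian blur via scipy; the actual blur widths were chosen to match the images to a common effective beam, whereas these are arbitrary.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# Stand-ins for the three pipelines' best images.
images = [rng.random((64, 64)) for _ in range(3)]

# Blur each to a common resolution, then average.
blurred = [gaussian_filter(im, sigma=2.0) for im in images]
final = np.mean(blurred, axis=0)
print(final.shape, round(float(final.mean()), 3))
```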

2 Likes

Science communication is hard. Pretty pictures have high impact but hide all the sweat.

1 Like

These prices are absurd!

1 Like

Where are all the homeless people and drug addicts living in tents in those Amsterdam videos?

The parks are at max capacity at current prices so the only solution is to keep raising prices.

I feel like each generation matures more slowly, and I think this overprotectiveness is seriously driving that trend.

When will the current generation of kids be able to stand on their own two feet? For all sorts of reasons, kids are staying in their parents' houses much longer than before.