2023 LC Thread - It was predetermined that I would change the thread title (Part 1)

I can’t believe the mods let people discuss a topic they were interested in for this long. Is there a thread where we can talk about moderation?

Correctamundo

It’s deeply weird for people to go “but the jet engines propel the plane forward!” as if people don’t know that, or to say “lol there’s no driveline to the wheels, Sherlock” as if people don’t know that

But the weirdest of all is “obviously the correct answer is the one that ignores basically the only parameter set by the question. The plane would go forward on a real treadmill instead of the one in this question, and the wheels would just have to accept that the question’s parameter regarding their speed is wrong, and anyone who says otherwise is the weird one, not me”

4 Likes

Just pretend I’m Sherlock Holmes, ut oh, ut oh

1 Like

Dude shut up. I’m commenting on how it’s still going and that I’m surprised.

You are so oppressed

1 Like

Sinead was right about the pope, tho. Fuck that guy.

1 Like

Ikes is big mad he had to read text on his screen. Some might even say he is unhinged

2 Likes

Ikes is afforded the opportunity to defend himself by the constitution of these United States, if the attack of text is deemed an attack on liberty, a defense against tyranny

2 Likes

I don’t think this has ever been done before. I’m curious how you all rank the seasons of The Wire

Moving on…

What color is this dress?

7 Likes

Black n blue

4
1
3
2
5

From the Wiki:

Simon Burgess has argued that the problem can be divided into two stages: the stage before the predictor has gained all the information on which the prediction will be based and the stage after it. While the player is still in the first stage, they are presumably able to influence the predictor’s prediction, for example, by committing to taking only one box. So players who are still in the first stage should simply commit themselves to one-boxing.

Burgess readily acknowledges that those who are in the second stage should take both boxes. As he emphasises, however, for all practical purposes that is beside the point; the decisions “that determine what happens to the vast bulk of the money on offer all occur in the first [stage]”.[13] So players who find themselves in the second stage without having already committed to one-boxing will invariably end up without the riches and without anyone else to blame. In Burgess’s words: “you’ve been a bad boy scout”; “the riches are reserved for those who are prepared”.[14]

Burgess has stressed that – pace certain critics (e.g., Peter Slezak) – he does not recommend that players try to trick the predictor. Nor does he assume that the predictor is unable to predict the player’s thought process in the second stage.[15] Quite to the contrary, Burgess analyses Newcomb’s paradox as a common cause problem, and he pays special attention to the importance of adopting a set of unconditional probability values – whether implicitly or explicitly – that are entirely consistent at all times. To treat the paradox as a common cause problem is simply to assume that the player’s decision and the predictor’s prediction have a common cause. (That common cause may be, for example, the player’s brain state at some particular time before the second stage begins.)

It is also notable that Burgess highlights a similarity between Newcomb’s paradox and Kavka’s toxin puzzle. In both problems one can have a reason to intend to do something without having a reason to actually do it. Recognition of that similarity, however, is something that Burgess actually credits to Andy Egan.[16]

He puts his finger here on something that always annoyed me about the Pirate Game, which is that the thought experiment specifies both that the pirates are “perfectly rational” and that the other pirates know this. But including the premise that players can know for a fact what other players’ strategies will be should change the definition of “perfectly rational”.

To take the three-player game (A, B, C) as an example, the reasoning goes that if it goes down to two players, B can simply vote to give himself all the money, therefore A should propose a 99-0-1 split which C will have to accept lest he get zero instead. But in reality what C should obviously say to A is “real talk, I am going to murder the shit out of you if you don’t do an even split here and I don’t care if I get zero because of it”.
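The backward induction in that standard argument can be written out in a few lines. A minimal sketch, assuming the usual setup: 100 coins, a tied vote passes with the proposer voting for himself, and a pirate votes yes only if the offer strictly beats what he’d get in the next round:

```python
def pirate_split(k, coins=100):
    """Backward-induction allocation for k pirates; index 0 is the proposer."""
    if k == 1:
        return [coins]                     # last pirate keeps everything
    nxt = pirate_split(k - 1, coins)       # what each pirate gets if this vote fails
    extra_votes = -(-k // 2) - 1           # ceil(k/2) yes votes needed, minus the proposer's own
    # buy the cheapest votes: pirates with the worst fallback in the next round
    others = sorted(range(1, k), key=lambda i: nxt[i - 1])
    alloc = [0] * k
    for i in others[:extra_votes]:
        alloc[i] = nxt[i - 1] + 1          # one coin more than the fallback buys the vote
    alloc[0] = coins - sum(alloc)          # proposer pockets the rest
    return alloc

pirate_split(3)  # [99, 0, 1] -- the split from the standard argument
```

With three pirates this reproduces the 99-0-1 split: C’s vote is bought for one coin because his fallback in the two-pirate game is zero.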

Like in the quote above, the idea is that the game really has two stages - first is where one decides what kind of player one is, and second is where one plays the game in that manner. In a world where your inner nature can be known to others, choosing the “perfectly rational” strategy is actually totally irrational.

Similar thing with Newcomb’s paradox. Being the kind of person who chooses to open just box B is obviously more profitable than being the kind of person who opens A+B, in a world where my true inner nature can be known to the predictor. And the problem setup assures me that I live in such a world. So I ought to be the kind of person who just opens B. Any reasoning that concludes I ought to be the kind of person who opens A+B is manifestly incorrect.
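For what it’s worth, the expected-value arithmetic behind “being a one-boxer is more profitable” is short. A sketch assuming the standard amounts ($1,000 visible in box A; $1,000,000 in box B iff one-boxing was predicted) and a predictor that is right with probability p:

```python
def expected_payoffs(p):
    """Expected value of each policy given predictor accuracy p."""
    one_box = p * 1_000_000                    # box B is full iff the prediction was right
    two_box = p * 1_000 + (1 - p) * 1_001_000  # B is full only when the predictor was wrong
    return one_box, two_box
```

Setting the two equal gives a break-even accuracy of 1,001,000 / 2,000,000 ≈ 0.5005, so any predictor meaningfully better than a coin flip makes the one-boxer type richer.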

3 Likes

Newcomb’s problem is like the airplane problem in that it’s poorly specified. Obviously what you want is to take one box iff that is the only way to get the high value in the opaque box, and the other player wants to put the high value in the opaque box iff you will only take one box. However, both of these conditions are undecidable as a general matter, even if you ignore the infinite recursion. What the question really boils down to, as you suggest, is which properties of your decision-making it is desirable to make legible/decidable/provable, and which should remain undecidable.
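The infinite recursion can literally be written down. A toy sketch (the function names are illustrative, not anything standard): if the player’s decision procedure consults a model of the predictor, and the predictor consults a model of the player, neither call ever returns.

```python
import sys

def predict(player_fn):
    """Predictor: simulate the player to see what they would do."""
    return player_fn(predict)

def player(predictor_fn):
    """Player: one-box iff the predictor would predict one-boxing."""
    return "one-box" if predictor_fn(player) == "one-box" else "two-box"

# Calling either one never bottoms out; Python just raises RecursionError.
sys.setrecursionlimit(100)
bottomed_out = False
try:
    predict(player)
except RecursionError:
    bottomed_out = True
```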

Extremely hilariously, the people most into Newcomb’s Paradox are LessWrong AI doomer types, and they take it as an article of faith that, in a scenario where you see good things happen to people who irrationally trust a super intelligent AI, it’s a hallmark of rational thinking to also trust the superintelligent AI.

The correct answer here is that there is no generally optimal set of behaviors; it all depends on who you’ll be interacting with and in what scenarios. Being a murderous pirate with scary facial tattoos is excellent for dividing treasure on a desert island, but less good for an MBA. In a prisoner’s dilemma algorithm tournament, cooperating only with agents whose source code is identical to your own is the dominant strategy if clones of you are very common, and terrible otherwise. The right answer is ecological, not determinable from first principles.
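A back-of-the-envelope for the clone-tournament point, assuming a one-shot PD with the usual payoffs (mutual cooperation 3, mutual defection 1) and a strategy that cooperates iff the opponent’s source is identical to its own. The “terrible” half of the claim shows up once the field contains reciprocators who punish its defection in iterated play, which this sketch doesn’t model:

```python
R, P = 3, 1  # reward for mutual cooperation, punishment for mutual defection

def avg_clone_payoff(clone_fraction):
    """Expected per-game payoff for a clone-recognizing agent in a population
    that is `clone_fraction` identical clones and otherwise defectors."""
    # cooperates (and is cooperated with) against clones, defects against everyone else
    return clone_fraction * R + (1 - clone_fraction) * P
```

At 90% clones it averages 2.8 per game; at 10% clones it averages 1.2, barely better than all-out defection.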

^^ This is worth repeating. If you think the question is badly posed, you can simply replace the treadmill with the above and arrive at the same basic issue.

1 Like

This is wrong. The laws of motion do not allow for a consistent solution for the plane’s acceleration under the constraints provided. It’s inconsistent to say the plane accelerates, because then either the wheels would slip or the constraint about the relative speeds of the wheels and the treadmill would be violated. It’s equally inconsistent to say that the plane’s acceleration is zero, because there’s a net force on the plane from its engines.

Physics tells us that it is impossible to build a treadmill such that a plane with free-spinning wheels can’t take off from it. If you take a contradiction as your premise, you can prove anything: that the plane doesn’t take off, that it does take off, that it takes off flying backwards, whatever you want. “If false, then X” is vacuously true for every X.
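To make the contradiction concrete, here’s the arithmetic with made-up 737-ish numbers; only the sign of the acceleration matters:

```python
# Hypothetical round figures (assumptions, not from the problem statement).
thrust = 240_000.0  # N, total engine thrust
mass = 70_000.0     # kg

# Free-spinning wheels transmit essentially no horizontal force, so the belt
# cannot cancel the thrust: the plane must accelerate forward.
a = thrust / mass   # m/s^2, strictly positive

# But with the belt running backward, the wheels' rim speed over the belt is
# v_plane + v_belt. The puzzle's constraint "wheel speed equals belt speed"
# forces v_plane = 0 for all time, i.e. zero acceleration. Both can't hold.
```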

Wrong

Could a plane take off going in reverse?

It’s correct. In reality, the plane takes off regardless of wheels or treadmills or whatever. The condition of the wheels and treadmill matching speed only works if the plane is not moving forward. As soon as it starts moving forward, wheel speed is going to be greater than treadmill speed. And if the plane isn’t moving forward, it doesn’t take off

This is some serious word salad. Nothing in the question as posed is provable because it’s a poorly designed question.

This is why the plane can’t accelerate. But it also must accelerate because the engines are pushing it forwards and nothing is pushing it back. It’s a contradiction. Neither answer can be correct.