You have to pick a red or blue pill.
If you pick red, you will never die.
If you pick blue, you will die unless more than 50% of the other people also pick blue.
… and it will be overwhelmingly red.
They tricked people by leading with the benefits of picking blue, which influenced their decision making.
For non-sociopaths, the optimal outcomes are 0% blue or >50% blue. The original question gets an optimal outcome for all participants. Does yours? It’s a very odd framing to say that someone has been “tricked” into getting the best possible outcome, especially when the purportedly non-tricky framing gives worse results. Like, the “trick” here is that emphasizing the consequences of your actions on others makes you weigh those consequences more heavily.
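To make the payoff structure concrete, here’s a tiny sketch (my own toy Python model, not from the question, treating the threshold as a simple majority of all players):

    def deaths(n_blue, n_total):
        # Blue pickers survive only if blue clears a majority; red always survives.
        blue_survives = n_blue > n_total / 2
        return 0 if blue_survives else n_blue

    print(deaths(0, 100))    # 0  -> 0% blue, nobody dies
    print(deaths(51, 100))   # 0  -> >50% blue, nobody dies
    print(deaths(40, 100))   # 40 -> minority blue, every blue picker dies

The only zero-death outcomes are the first two: nobody goes blue, or a majority does.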
By way of serious answer here, I admire but do not share your confidence in predicting social behaviour. At the start of the pandemic I would have thought people wouldn’t act like complete morons and stockpile toilet paper to the extent that it disappeared from shelves, but that’s what they did (here anyway, I don’t remember if that was a thing in the US). Choosing blue to me carries an unquantifiable but non-zero chance of death. If others want to play Russian Roulette for absolutely no reason, I’m not jumping in front of the gun.
Of course if I have a chance to discuss it with others first and we get an agreement to choose blue, that’s fine. But that’s not the situation.
And of course if you’re going to be like “BTW there are babies and they choose randomly”, that’s different. Usually in these things the assumption is that adults of sound mind are doing the choosing.
Shortages of toilet paper were an easy prediction here so maybe it’s not the best analogy, but otherwise yes, good luck predicting how many will take the blue pill.
What I disagree with here is the “every single economics course” and “footnote” stuff. Behavioural economics is taken very seriously in practice: my full-time job is trying to get people to save for retirement and get good outcomes, and we literally never assume that people are rational. That was tried, and failed, in the 1990s. Graduates these days in relevant fields (investments, actuarial science, economics, commerce) are all keenly aware of the pitfalls of those assumptions and the messiness of actual human behavior.
Maybe every single Economics 101 course “suffers” from introducing basic economics concepts in the faulty homo economicus framework, the same way that Physics 101 courses suffer from introducing basic physics concepts by assuming perfect spheres moving through space with no air resistance or friction. Physics PhDs know perfectly well that the world doesn’t actually look like that, and economics PhDs and practitioners in related fields are just as aware of the impact of irrational human behavior.
If you assume common knowledge of rationality, then sure, choose red because it’s safe, and there’s no reason anyone would choose blue. The problem is that the assumption is false. What about stupid people? What about misguided idealists who misread the question and thought that babies were at risk? What about people who properly understood the question but figured that some people would misunderstand and worry about babies so everyone was going to go blue to bail those people out? In practice, the perfect red pill world where no one dies will never materialize, and you’re left trying to convince yourself that it was fine (maybe even a net positive?) that they died.
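To put hypothetical numbers on it (mine, not anything from the question):

    population = 1_000_000
    confused_blue_rate = 0.02   # misreaders, misguided idealists, contrarians (made-up rate)
    blue_pickers = int(population * confused_blue_rate)
    # If everyone else goes red, every one of these people dies.
    print(blue_pickers)         # 20000 deaths in the "safe" all-red world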
Here is a game with a similar payoff matrix. A corrupt cop with a sideline in dealing drugs murders a rival and covers it up by claiming self-defense. Five other cops witnessed the crime. If at least three come forward, the dirty cop goes to jail; otherwise, everyone who does speak up is murdered. Slam dunk to stay quiet?
The point is that mostly red and mostly blue are both equilibria. Red is a stable equilibrium, and if you think you’re living in a red world, there’s no reason to go blue, even if you’re an altruist. The blender liquifies you and you achieve nothing. Blue is an unstable equilibrium. Choosing blue requires you to have confidence that most of the other players in the game have your back. But the blue world is much better than the red world, because someone is always going to pick blue, and those people will die if they aren’t bailed out.
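A crude way to see the asymmetry (my own toy dynamic, assuming conditional cooperators who stay blue only while blue is above 50%, and red pickers who never switch back because red is always safe):

    def step(blue_share):
        # Below the threshold the blue pickers get liquified / bail out;
        # above it, nobody who is already safe has a reason to switch.
        return blue_share if blue_share > 0.5 else 0.0

    def settle(blue_share, rounds=10):
        for _ in range(rounds):
            blue_share = step(blue_share)
        return blue_share

    print(settle(0.0))    # 0.0 -- red world stays red
    print(settle(0.1))    # 0.0 -- a 10% blue excursion just dies and snaps back
    print(settle(0.6))    # 0.6 -- blue holds while it stays above 50%...
    print(settle(0.49))   # 0.0 -- ...but one dip below the line never recovers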
It’s completely reasonable to assess that a particular case is a red scenario, or that you’re not sure and the risk is too high to go blue. But (as you can tell) I find it very upsetting to say that all these different scenarios are the same and have the same answer. They are different, and if your analysis assumes that they “should” be the same because they are isomorphic when translated into game theory, that means the analysis is wrong.
+1. A completely accurate model is “Everything is very complicated and I’m not sure what’s going to happen next. Could be anything.” This model never makes inaccurate predictions. It always predicts that anything might happen, and some subset of anything always does. Making a model that produces useful predictions requires making a bunch of simplifying assumptions, and those assumptions won’t hold in all cases, so sometimes your model will make incorrect predictions. But it’s still more useful than the “could be anything” model. Dismissing a model because it makes assumptions that don’t universally hold is misguided, as is assuming that the assumptions must hold because otherwise you can’t make useful predictions.
Your alternative game here differs in that bringing down corrupt cops is a good. Nothing is achieved by choosing blue in the original game. When people take risks for absolutely no reason, the degree of personal risk I am willing to assume to protect them is non-zero, but very very low. If that makes me a sociopath so be it. I do agree that changing the parameters of this game makes a difference and will not just squint through my monocle and be like “oho, weakly dominated strategy” but I think I am assuming much greater social unpredictability than you are.
There’s nothing even a little bit odd about it. You seem really incredulous about the idea that people can be influenced by the way a question is framed, but that has to be one of the most well-documented social science discoveries ever. Of course more people will take the red pill if you tell them, “Obviously, nothing bad will happen if you take the red pill.”
And there’s the voting component, where I’m taking this risk on the chance that I, personally, am the 50%+1 vote. If blue is already past 50% without me, I’m safe either way, and if we aren’t there yet, then why am I risking anything?
You’re missing the point. Obviously the framing of the question affects the outcomes, which is why it’s reasonable to respond differently to the pill question vs the blender question. My objection is to characterizing the pro-blue-pill framing as the trick when it produces a better outcome for everyone. The trick is the one that encourages people to focus narrowly on their self-interest rather than cooperating for the good of all.
This seems semantic. I guess I’m okay with saying people can be tricked into the outcome that’s better for everyone. You can even trick people into voting for their own personal self-interest, I don’t think that stretches the ordinary meaning of “trick” very much. The point is you’re manipulating behavior based on superficial things like framing the question in a different way.
Well, it does unless it’s incorrectly calibrated: if you fuck up and get only 40% of people to choose blue, you kill a ton of people. That’s the bit you’re glossing over here. To me the “let’s all not jump in the blender” framing is a lot easier, and I don’t care if a few insane people jump in anyway. The reason I don’t care is not that I am a sociopath, but that the alternative you’re proposing carries a serious risk of killing a giant number of well-meaning people. As I said, if you have a finite number of people and can discuss this and reach consensus, that’s one thing, but appealing to your theories of what you reckon people will likely do is absolutely not good enough for me.
The discrepancy is at least partly due to people assigning a significantly lower value to death by blender vs. death by picking the wrong pill in the payoff matrix.
If you have a choice between dying in your sleep or living one extra year but you are killed in a giant blender, which sounds worse?