โŒ

Reading view

Red button or blue button? What a viral question tells us about game theory and the state of the world

Gabriel Vasiliu / Unsplash

Everyone on earth takes a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, only people who pressed the red button survive. Which button would you press? BE HONEST.

This question is the latest thought experiment to set off waves of controversy on social media, following classic examples such as the trolley problem and the prisonerโ€™s dilemma.

Most people think the choice is extremely obvious. However, not everyone agrees on whether the obvious answer is blue or red – and they are eager to argue about it.

Whatโ€™s going on here? From the point of view of philosophy and game theory, the question shows two different intuitions and views of decision-making with starkly contrasting results. And the very popularity of the question highlights the fraught existential stakes many of us feel in modern life.

Red or blue? Itโ€™s complicated

The case for red seems simple. If more than 50% of people press the blue button, everyone survives, including the red pressers. If not, red pressers survive anyway. So basic self-interest points to red.

In game theory, red is what is known as a weakly dominant strategy: whatever everyone else does, pressing red never leaves you worse off. Everyone pressing red is a Nash equilibrium – an outcome in which no participant can improve their own position by unilaterally switching.
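The dominance argument can be made mechanical. This is a toy sketch (not part of the original article) in which survival depends only on your own choice and on whether blue reaches a majority; checking both cases confirms a red presser never does worse than a blue presser.

```python
def survives(choice: str, blue_majority: bool) -> bool:
    """Return True if a voter making `choice` survives.

    If blue wins a majority, everyone survives; otherwise
    only the red pressers survive.
    """
    return blue_majority or choice == "red"

# Red weakly dominates blue: in every possible scenario, a red
# presser does at least as well as a blue presser would.
for blue_majority in (True, False):
    assert survives("red", blue_majority) >= survives("blue", blue_majority)

print("red never does worse than blue")
```

The asymmetry is entirely in the second case: when blue falls short of a majority, only the choice of red saves you.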

However, in several polls, the majority of respondents pick blue. At first glance, this may seem irrational and self-destructive.

Why would anyone stake their own life on the collective decisions of others? This is where, as with any good thought experiment, the real value of the provocation shows itself, as we ponder the โ€œwhyโ€ behind the choice.

Blue pressers might proffer a diverse set of responses: โ€œIโ€™m worried my family and friends might pick blue and I want them to surviveโ€; โ€œIโ€™m concerned people might find out if I pick red and judge meโ€; โ€œIf I picked red I would feel responsible for the potential deaths of othersโ€; โ€œI believe humanity is inherently goodโ€, and so on.

Such responses hint at what game theorists call a Pareto-optimal outcome: if everyone presses blue, everyone survives, and no alternative outcome could make anyone better off.
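The collective stakes can be sketched in the same toy model (again an illustration, not from the original article): counting survivors across different splits shows that every blue vote short of a majority is a casualty, so the collectively safe outcomes are the unanimous ones.

```python
def survivors(n: int, n_blue: int) -> int:
    """Total survivors in a population of n where n_blue press blue."""
    if n_blue * 2 > n:       # blue wins a strict majority
        return n             # everyone survives
    return n - n_blue        # only the red pressers survive

# All-red and all-blue both save everyone; a mixed split does not.
print(survivors(100, 0), survivors(100, 40), survivors(100, 100))
# → 100 60 100
```

This is why blue pressers frame their choice around others: pressing blue is a bet that enough people will coordinate on the outcome where nobody dies.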

Why now?

Whatโ€™s also interesting is why such a thought experiment has gone viral in 2026. In any society, what cultural theorist Raymond Williams called a โ€œstructure of feelingโ€ holds sway: an affective atmosphere, a set of moods and emotions that are most visible in its symbolic output.

We can here point to popular culture. Shows such as Netflixโ€™s hit series Squid Game, the glut of Survivor-style reality TV shows, the digital game Among Us and the Hunger Games books and films rely on similar setups.

Shows like Squid Game show the current appeal of the gamified moral dilemma. Netflix

The fundamental questions tend to remain the same. Who can be trusted? How do incentives change our moral stance? Do systems reward altruism or selfishness?

More than at any time in human history, we are interdependent on a global scale: politically, economically, militarily, technologically, culturally. When a domino falls on one side of the planet, we now see it, hear it and feel it on the other side.

This engenders a distinct sense of vulnerability and precarity. We are bombarded every day with information from around the world that can stress, enrage and exhaust us.

Why here?

The specific formulation of the thought experiment, condensed down into a simple binary choice, is also perfect for social media, where hot takes dominate and extremity is rewarded by the algorithm: yes or no, right or wrong, gold-and-white dress or blue-and-black.

Itโ€™s also where similar questions are often asked of influencers, who might sacrifice their own moral viewpoints in pursuit of attention and visibility. Itโ€™s a perfect quick moral apocalypse for a doomscrolling public.

Another useful idea here is the “Promethean gap”, described in 1956 by philosopher of technology Günther Anders: as technological capacity grows, it increasingly outstrips what humanity can comprehend emotionally, intellectually and morally.

We have, in a sense, outsourced too much of ourselves to technology. In doing so, we have let some crucial competencies atrophy, and so the gap grows.

Under rapidly advancing technology, our capacity for action exceeds our capacities for moral imagination.

This fear is readily apparent in the thought experiment: the world ended at the push of a button. By comparison, the stakes of the prisonerโ€™s dilemma or the trolley problem seem positively quaint.

The Conversation

Steven Conway does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
