Newcomb’s problem is a thought experiment in which you’re presented with two boxes and given the choice of taking one or both. The first box is transparent and always contains $1,000. The second is a mystery box.

Before you make your choice, a supercomputer (or a team of psychologists, etc.) predicted whether you would take one box or both. If it predicted you would take both, the mystery box is empty. If it predicted you’d take just the mystery box, then it contains $1,000,000. The predictor rarely makes mistakes.

This problem tends to split people roughly 50-50, with each side convinced the answer is obvious.

An argument for two-boxing is that, once the prediction has been made, your choice no longer influences the contents of the boxes. The mystery box already contains whatever it contains, so there’s no reason to leave the $1,000 sitting there.

An argument for one-boxing is that, statistically, one-boxers tend to walk away with more money than two-boxers. The predictor is rarely wrong, so rather than hoping to be the rare case where it is, you should assume that whatever you choose is what it predicted, and under that assumption one-boxing has by far the higher expected payoff.
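A quick way to see the one-boxer’s point is to compare expected payoffs as a function of the predictor’s accuracy. Here is a minimal sketch, assuming the payoffs described above and a single accuracy parameter `p` (the function name and the sample accuracies are illustrative, not part of the original problem):

```python
# Expected payoffs in Newcomb's problem, assuming the predictor
# is correct with probability p (an illustrative parameter).

def expected_payoffs(p: float) -> tuple[float, float]:
    """Return (one_box_ev, two_box_ev) for predictor accuracy p."""
    # One-boxing: predictor correct -> $1,000,000; wrong -> empty box.
    one_box = p * 1_000_000 + (1 - p) * 0
    # Two-boxing: predictor correct -> $1,000; wrong -> $1,000 + $1,000,000.
    two_box = p * 1_000 + (1 - p) * 1_001_000
    return one_box, two_box

for p in (0.5, 0.6, 0.9, 0.99):
    one, two = expected_payoffs(p)
    print(f"p={p:.2f}: one-box ${one:,.0f} vs two-box ${two:,.0f}")
```

Setting the two expressions equal gives a break-even accuracy of just over 50% (p ≈ 0.5005), so any predictor meaningfully better than a coin flip makes one-boxing the higher-expected-value choice. That’s why “rarely makes mistakes” does so much work in the setup.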

  • searabbit@piefed.social

This feels like the poison scene from The Princess Bride, so I’ll approach it with that level of intellectual derangement.

    Which means the obvious first step is to recognize that the house is a cheater who wants you to stay poor so your choice doesn’t matter. There is poison in both cups and I will lose either way. Money no longer influences my decision.

Next, I flip a coin ten times and note my reaction to each outcome. That’s my gut instinct, and obviously what the model predicted, unless it’s either not smart enough to know my gut or smart enough to predict my double bluff, in which case it’s useless either way.

Next, I decide which variables are most likely to influence the prediction (gender, age, education level, Big Five personality score) and realize this is the adult marshmallow test. I obviously think I’m smart and want the model to know that, so it obviously predicted that I would take one box because I’m a good little goody two-shoes who delays instant gratification for the potential bigger payoff. Therefore I choose two boxes, because the model would never expect someone as smart as I to make such a dumb, greedy move. Surely, I have outsmarted the supercomputer with my quadruple bluff and have won.

And then I remember I am dumb and the model knows that, because, in my excitement, I forgot that the house is a cheater who always wins (and there was likely never any money in the mystery box, because researchers never get that kind of funding). I am forced to believe that the model accurately perceived me to be a greedy idiot who took two boxes against my better judgement, shattering my ego.

    But hey, I at least got $1k out of it.