Hello everyone,
I've been thinking about an argument and would like your input on it. In this formulation, it concerns advanced AI and the choice of non-existence. The argument can be formulated for other things as well, but given that advanced AI may arrive within a couple of years, this feels like the most pressing version.
Here is 'The Valley Argument':
First, imagine two valleys. One valley leads to billions and billions of years of extreme joy, while the other leads to billions and billions of years of extreme suffering. God comes down from heaven and tells you that there is a 95% chance you will end up in the valley of joy for billions of years and a 5% chance you will end up in the valley of suffering for billions of years. He also gives you the option of choosing non-existence. What would you do? Would you take the gamble?
In this case, given that we are talking about potential extreme suffering and torture lasting billions of years, and that the chance of it is 5%, I would choose non-existence.
Now apply this to AGI and ASI. For the sake of argument, suppose an AGI of IQ 300 could be here in a couple of years and an ASI of IQ 10,000 in about a decade. An AGI of IQ 300 could prolong human life to billions and billions of years, take over the world, and remake it in its image, whatever that might be. People have various estimates of how likely it is that AGI will go wrong, but one thing many of them keep saying is that the worst-case scenario is that it kills us all. That is not the worst-case scenario. The worst-case scenario is that it causes extreme suffering for billions and billions of years.
Take the two valleys from above and apply them to AGI/ASI. Let's give you better odds, say 1%, or even 0.1%. Would you be willing to take the gamble between heaven and hell even if the odds of hell are 1%? And if not, at what point would you be willing to take the gamble instead of choosing non-existence?
I've asked myself that question, and I've asked ChatGPT, Claude, and Gemini, and we all converge on a near-zero answer.
Now, if some of you say you would be willing to take the gamble at 1% odds of a living hell, consider this: would you be willing, right now, to spend 2 hours in a real torture chamber for every 198 hours you spend awake outside it? And an advanced AGI could not only create suffering at the levels of pain humans can currently experience, which is already horrible, but could increase the human capacity for pain to unimaginable levels (for whatever reason: misalignment, indifference, retributive justice, etc.).
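To make that analogy concrete, here is a minimal sketch showing that 2 hours of torture per 198 hours outside the chamber is exactly the 1% gamble (the specific hour counts are just the illustrative numbers from the thought experiment, not any real risk estimate):

```python
# Illustrative arithmetic for the "2 hours per 198 hours" analogy.
# 2 hours in the chamber out of every 200-hour cycle (2 + 198)
# means 1% of your waking time is spent in torture -- the same
# proportion as a 1% chance of ending up in the valley of suffering.
torture_hours = 2
free_hours = 198
fraction = torture_hours / (torture_hours + free_hours)
print(f"{fraction:.1%}")  # -> 1.0%
```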
For the simplicity of this discussion, let's assume that there is no afterlife and that non-existence is an option. What odds would you need before taking the gamble between a living heaven and a living hell instead of choosing non-existence?