Foie gras, the delicacy made from the liver of a very fat goose (or sometimes duck), is widely considered unethical and is banned in many jurisdictions. For a long time, it was believed that the only way to properly fatten a goose was to continually force-feed it through a tube over several weeks, which is probably a highly unpleasant experience for the goose, although it's difficult to tell. Recently, the Spanish farmer Eduardo Sousa revealed that under highly specific conditions, you can get geese to fatten themselves voluntarily.
Geese will instinctively gorge themselves when winter is coming on. Eat a goose right after it's fattened itself up for the winter, and you get a delicious treat that died happy. The problem is that geese will only do this if they believe food may become scarce during the winter (or their instinct to gorge only kicks in when the environment makes scarcity a reasonable inference; it's not clear whether it's the goose or evolution doing the analysis). If they realize that food will remain available through the winter, they eat normally. And there are quite a few possible clues: farmers trying to replicate Sousa's setup have discovered that cheating on any part of it leads to unfattened livers.
- Even as chicks, the geese cannot be handled by a human or encounter other geese who have been.
- There can be no visible fences.
- Geese cannot be "fed"; rather, a variety of food must be distributed randomly throughout a large space, with the placement constantly changing, so that the geese happen to come across it.
So... suppose, hypothetically, that something analogous were true of humans: that our likelihood of voluntarily maximizing "Fun" depends on being in an environment in which our access to Fun appears to be determined primarily by chance and our own efforts, and in which we believe Fun may soon run out.
It seems to follow from that supposition that if an outside force (e.g., a superhuman FAI) wants to maximize the amount of Fun we have while still respecting our agency, it has to create such an environment.
How does adding that hypothetical constraint affect the conclusions of the Fun Theory Sequence?
Or does it?
How does such an environment differ from the world we actually live in?
Or does it?
These are not rhetorical questions.
The first requirement suggests that a FAI would not tell us that it exists. In other words, the singularity may already have happened.