
Comment author: scarcegreengrass 25 July 2017 06:28:43PM 0 points

Thanks again for collecting these.

Comment author: scarcegreengrass 25 July 2017 04:50:03PM 0 points

I might update on this, thanks.

Comment author: entirelyuseless 20 July 2017 01:40:57PM 1 point

I think the OP's argument was that "I am currently awake rather than asleep. So there are likely a lot of red rooms," is analogous to "I currently exist rather than not existing. So there are likely a lot of existing people." The first argument is obviously stupid; so the second argument is probably stupid as well. That seems reasonable to me.

Comment author: scarcegreengrass 25 July 2017 03:20:29PM 0 points

I'm not an expert, so I might be misunderstanding, but let me try to come up with a rebuttal.

'Obviously' is a strong word here. I think "I am currently awake rather than asleep. So there are likely a lot of red rooms" is pretty intuitive under these rules. After all, red rooms cause wakefulness.

Here's how I look at it: Imagine there is a city where all the hotels have 81 rooms (and exactly the same prices). Some hotels are almost full and some are almost empty. A travel agency books you a random room, distributed such that you are equally likely to be assigned any vacant room in the city. You are more likely to be assigned a room in one of the almost-empty hotels than in one of the almost-full hotels.
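A minimal Monte Carlo sketch of the hotel example, in Python. Only the 81-room figure comes from the comment above; the number of hotels and the occupancy counts (80 vs. 1 rooms occupied) are invented placeholders for illustration:

```python
# Sketch of the hotel example: a uniformly random vacant room is far
# more likely to sit in an almost-empty hotel. All counts except the
# 81 rooms per hotel are illustrative assumptions.
import random

ROOMS_PER_HOTEL = 81
occupied = {"almost_full": 80, "almost_empty": 1}  # occupied rooms per hotel type
hotels = ["almost_full"] * 5 + ["almost_empty"] * 5  # assume 10 hotels, half of each type

# Pool of all vacant rooms in the city, each tagged with its hotel type.
pool = [kind for kind in hotels for _ in range(ROOMS_PER_HOTEL - occupied[kind])]

trials = 100_000
hits = sum(random.choice(pool) == "almost_empty" for _ in range(trials))
print(f"P(assigned room is in an almost-empty hotel) ~ {hits / trials:.3f}")
# ~0.988: almost-empty hotels contribute 80 of every 81 vacant rooms,
# so a uniformly random vacant room almost surely lies in one of them.
```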

(The statement about existing people is more complicated.)

Comment author: James_Miller 09 July 2017 11:10:55PM 1 point

but as a data hoard on the Moon, since most probably the next civilization will appear again on Earth.

Excellent point.

We create a special black hole at the LHC that creates many universes like our own. In other words, our universe is fine-tuned in such a way that civilisations self-destroy by creating many new universes

I'm not sure. Wouldn't new universes be mostly created by advanced civilizations trying to create new universes? I think your idea works only if creating a new universe requires destroying an old one.

Comment author: scarcegreengrass 10 July 2017 01:08:56AM 0 points

I also don't understand that universe-replication scenario. Maybe it would work if there were a type of black-hole-like object that both created many universes and destroyed all stars in its vicinity (generally destroying its creators).

Comment author: scarcegreengrass 07 July 2017 02:25:00PM 0 points

Personally, I think people often do indeed write rigorous criticisms of various points of rationality and EA consensus. It's not an under-debated topic. Maybe some of the very deep assumptions are less debated, e.g. some of the basic assumptions of humanism. But I think that's just because no one finds them faulty.

Comment author: turchin 26 June 2017 10:05:55PM 1 point

I think there is a difference between creating an agent and negotiating with another agent. If agent 1 creates agent 2, it will always know agent 2's goal function for sure.

However, if two agents meet, and agent A tells agent B that it has utility function U, agent B has no reason to believe it, even if A sends its source code as proof. Any source code could be faked. The more advanced the two agents are, the more difficult it is for them to prove their values to each other. So each will always suspect that the other side is cheating.

As a result, as I once put it (perhaps too strongly): any two sufficiently advanced agents will go to war with each other. The one exception is if they are two instances of the same source code, but even in this case cheating is possible.

To prevent cheating, it is better (unfortunately) to destroy the second agent. What solutions to this problem exist in LW research?

Comment author: scarcegreengrass 28 June 2017 05:50:00PM 1 point

Note that source code can't be faked in the self-modification case. Software agent A can set up a test environment (a virtual machine or simulated universe), create a new agent B inside it, and then A has a very detailed and accurate view of B's innards.

However, logical uncertainty is still an obstacle, especially with agents not verified by theorem-proving.
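A toy sketch of this asymmetry in Python; the agent names and the policy are my own hypothetical illustration, not anything from the thread:

```python
# Creation vs. negotiation: the creator holds the child's actual source
# by construction, so there is nothing for the child to fake.
import inspect

def make_child_agent():
    # The creator writes the child's policy itself, so it knows the
    # child's goal function with certainty.
    def child_policy(observation: str) -> str:
        return "cooperate" if observation == "cooperate" else "defect"
    return child_policy

child = make_child_agent()
# The creator can read back exactly the code it instantiated...
print(inspect.getsource(child))
# ...whereas an unrelated agent that merely *claims* this is its source
# could be running anything else; the claim is unverifiable from outside.
```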

Comment author: scarcegreengrass 22 June 2017 09:35:29PM 0 points

That's some good snark about the six-pointed cross.

Comment author: scarcegreengrass 22 June 2017 09:02:30PM 0 points

So, I'm trying to wrap my head around this concept. Let me sketch an example:

Far-future humans have a project where they create millions of organisms that they think could plausibly exist in other universes. They prioritize organisms that might have evolved given whatever astronomical conditions are thought to exist in other universes, and organisms that could plausibly base their behavior on moral philosophy and game theory. They also create intelligent machines, software agents, or anything else that could be common in the multiverse. They make custom habitats for each of these species and instantiate a small population in each one. The humans do this via synthetic biology, actual evolution from scratch (if affordable), or simulation. Each habitat is optimized to be an excellent environment to live in from the perspective of the species or agent inside it. This whole project costs a small fraction of the available resources of the human economy.

The game-theoretic motive is that, by doing something good for a hypothetical species, there might exist an inaccessible universe in which that species is both alive and able to surmise that the humans have done this, and in which, when it runs its counterpart project, it will by luck create a small utopia of humans.
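To make the motive concrete, a back-of-envelope expected-value sketch; every number below is an invented placeholder, since the example gives no figures:

```python
# Pay a small cost now for a chance that a counterpart civilization in
# an inaccessible universe reciprocates. All values are illustrative.
cost_fraction = 0.001      # share of the human economy spent on the habitats
p_counterpart = 0.01       # chance some universe hosts a given species
p_reciprocates = 0.5       # chance it surmises the trade and reciprocates
utopia_value = 10.0        # value of the counterpart-built human utopia,
                           # in units of one economy's total resources

expected_gain = p_counterpart * p_reciprocates * utopia_value
print(f"cost = {cost_fraction}, expected gain = {expected_gain}")
# The trade looks worthwhile whenever expected_gain > cost_fraction;
# with these placeholders, 0.05 >> 0.001.
```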

Is this an example of the type of cooperation discussed here?

Comment author: [deleted] 01 June 2017 04:55:46PM 2 points

Interesting, I like the idea. FWIW, I wouldn't spend too much time predicting the values of superrational ETs and the characteristics of their civilizations. It appears to be too difficult to be accurate with a significant level of detail, and requires inference from a sample size of one (humans and evolution on Earth). I recommend punting to the FAI for this part.

Comment author: scarcegreengrass 07 June 2017 05:19:23PM 1 point

Or alternately, one could indeed spend time on it, but be careful to remain aware of one's uncertainty.

Comment author: scarcegreengrass 30 May 2017 07:49:49PM 3 points

Fascinating paper!

I found Sandberg's 'popular summary' of this paper useful too: http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/
