Fridolin comments on Open thread, Aug. 17 - Aug. 23, 2015 - Less Wrong Discussion

3 Post author: MrMind 17 August 2015 07:05AM

Comment author: [deleted] 17 August 2015 08:14:21AM  1 point

What are your thoughts on the following Great Filter hypothesis:

(1) Reward-based learning is the only efficient way to create AI.

(2) AGI is easy, but FAI is hard to invent because the universe is so unpredictable (intelligent systems themselves being the most unpredictable structures), and nearly all reward functions will diverge once the AI starts to self-improve and create copies of itself.

(3) The reward functions needed for a friendly reinforcement learner reflect reality in complex ways. In the case of humans, they were shaped by trial and error over the course of evolution.

(4) Because of this, inventing FAI requires a simulation in which the AI can safely learn complex reward functions via evolution or narrow AI, which is time-consuming.

(5) However, once AGI is widely regarded as feasible, people will realize that whoever invents it first will have nearly unlimited power. An AI arms race will ensue, in which unfriendly AGIs are much more likely to arise.

Comment author: ZankerH 17 August 2015 08:23:16AM 10 points

I don't see why an unfriendly AGI would be significantly less likely to leave a trail of astronomical evidence of its existence than a friendly AI, or than an interstellar civilisation in general.

Comment author: [deleted] 17 August 2015 04:19:07PM  0 points

I can think of three explanations, but I'm not sure how likely they are: gamma-ray bursts are exploding unfriendly AGIs (i.e. there actually is astronomical evidence); unfriendly AGIs destroy themselves with high probability (lacking a self-preservation drive); or interstellar space travel is impossible for some reason.

Comment author: Lalartu 18 August 2015 08:00:10AM 2 points

If interstellar travel (and astroengineering) is impossible, that alone is enough to explain the Great Filter without additional assumptions.

Comment author: [deleted] 18 August 2015 09:04:39AM 0 points

Oops! That's right.