timtyler comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox 10 January 2013 08:17AM 25 points


Comment author: timtyler 10 January 2013 11:29:06AM 2 points

For a reference, perhaps consider: The Perils of Precaution.

Comment author: JoshuaFox 10 January 2013 01:42:53PM 2 points

Good reference. SI is perhaps being too cautious by insisting only on theoretically perfect AI.

Comment author: Manfred 11 January 2013 12:03:43AM 1 point

This is perhaps a silly statement.

Comment author: timtyler 11 January 2013 12:07:28AM 0 points

Why do you think it is "silly"?

Comment author: Manfred 11 January 2013 12:26:14AM 0 points

The qualification with "perhaps" makes it tautological and therefore silly. (You may notice that my comment was also tautological.)

The slight strawman with "insisting on theoretically perfect" is, well, I'll call it silly. As Eliezer replied, the goal is more like theoretically not doomed.

And last, the typo in "SI is perhaps being too cautious by insisting on theoretically perfect SI" makes it funny.

Comment author: JoshuaFox 11 January 2013 05:59:01AM 0 points

Thanks, at least I corrected the typo.

The article did mention that even with a "perfect" theory, there may be mistakes in the proof, or the implementation may go wrong. I don't remember him saying so as clearly in earlier writings as he did in this comment, so it's good we raised the issue.

Comment author: Halfwit 10 January 2013 05:23:21PM 1 point

When a heuristic AI is creating a successor that shares its goals, does it insist on formally-verified self improvements? Does it try understanding its mushy, hazy goal system so as to avoid reifying something it would regret given its current goals? It seems to me that some mind will eventually have to confront the FAI issue; why not humans, then?

Comment author: timtyler 11 January 2013 12:04:50AM 0 points

If you check Creating Friendly AI, you will see that the term is defined by its primary proponent as follows:

The term “Friendly AI” refers to the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals.

It's an anthropocentric term. Only humans would care about creating this sort of agent. You would have to redefine the term if you want to use it to refer to something more general.

Comment author: MugaSofer 27 January 2013 04:12:11PM -2 points

Halfwit specifically referred to "creating a successor that shares its goals"; this is the problem we face when building an FAI. Nobody is saying that an agent with arbitrary goals must at some point face the challenge of building an FAI.

(Incidentally, while "Friendly" is anthropocentric by default, in common usage analogous concepts relating to other species are referred to as "Friendly to X" or "X-Friendly", just as "good" is by default used to mean "by human standards" but is sometimes used in "good for X".)

Comment author: JoshuaFox 10 January 2013 08:25:05PM 0 points

"does it insist on formally-verified self improvements? Does it try understanding its mushy, hazy goal system so as to avoid reifying something it would regret given its current goals?"

Apparently not. If it did do these things perfectly, it would not be what we are here calling the "heuristic AI."

Comment author: John_Maxwell_IV 09 February 2013 07:07:57AM 0 points

Does this essay say anything substantive beyond "maximize expected value"?

Comment author: timtyler 10 February 2013 10:54:05PM 0 points

That isn't the point of the essay at all. It argues that over-caution can often be a bad strategy. I make a similar point in the context of superintelligence in my "risks of caution" video.