eli_sennesh comments on Why I will Win my Bet with Eliezer Yudkowsky - Less Wrong

-2 Post author: Unknowns 27 November 2014 06:15AM

Comment author: [deleted] 30 November 2014 12:10:50PM -1 points

It appears to me that these kinds of questions are impossible to coherently resolve without making reference to some specific AGI architecture. When "the AI" is an imaginary construct whose structure is only partially shared between the different people imagining it, we can have all the vague arguments we like and arrive at no real answers whatsoever. When it is an actual object with a mathematical specification, we can resolve the issue by just looking at the math, usually without even having to implement the specified "AI".

Therefore, I recommend we stop arguing about things we can't specify.

At the moment, people do not program AIs with explicit utility functions, but program them to pursue certain limited goals as in the example.

At the moment, people do not program AGI agents. Period. Whatsoever. There aren't any operational AGIs except for the most primitive, infantile kind used in reinforcement-learning experiments at places like DeepMind.
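To make the contrast concrete: the reinforcement-learning experiments alluded to here do not receive an explicit utility function over world-states; they learn action values from a scalar reward signal. A minimal tabular Q-learning sketch on a toy corridor environment (purely illustrative, not a model of any particular DeepMind system; the environment and all parameters are made up for the example):

```python
import random

def train_corridor_q(episodes=500, size=5, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: start at state 0,
    reward 1.0 on reaching the terminal state size-1, 0 otherwise.
    Actions are -1 (left) and +1 (right), clamped to the corridor."""
    rng = random.Random(seed)
    actions = (-1, +1)
    q = {(s, a): 0.0 for s in range(size) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != size - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            s2 = min(max(s + a, 0), size - 1)
            r = 1.0 if s2 == size - 1 else 0.0
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q

q = train_corridor_q()
# greedy policy at each non-terminal state after training
policy = {s: max((-1, +1), key=lambda b: q[(s, b)]) for s in range(4)}
```

Nothing here resembles an explicit utility function over outcomes: the agent only ever sees the reward number, and the "goal" is implicit in where that number happens to be emitted.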