eli_sennesh comments on On Terminal Goals and Virtue Ethics - Less Wrong

67 Post author: Swimmer963 18 June 2014 04:00AM


Comments (205)


Comment author: Eliezer_Yudkowsky 24 June 2014 06:25:47PM 0 points

From what I can tell on the outside, the MIRI approach seems to be: (1) find a practical theory of FAI; (2) design an AGI in accordance with this theory; (3) implement that design; (4) mission accomplished!

Yes, dear, some of us are programmers; we know about waterfalls. Our approach is more like: "Attack the most promising problems that present themselves, at every point; don't actually build things which you don't yet know how to make not destroy the world, at any point." Right now this means working on unbounded problems, because there are no bounded problems which seem more relevant and more on the critical path. If at any point we can build something to test ideas, of course we will. If our state of ignorance is such that we can't test a particular idea without risking destroying the world, we won't; but if you're really setting out to test ideas, you can probably figure out some other way to test them, except for very rare, highly global theses like "The intelligence explosion continues past the human level." More local theses should be testable.

See also Ch. 22 from HPMOR, and keep in mind that I am not Harry, I contain Harry, all the other characters, their whole universe, and everything that happens inside it. In other words, I am not Harry, I am the universe that responded to Harry.

Comment author: [deleted] 24 June 2014 09:55:35PM 1 point

See also Ch. 22 from HPMOR, and keep in mind that I am not Harry, I contain Harry, all the other characters, their whole universe, and everything that happens inside it. In other words, I am not Harry, I am the universe that responded to Harry.

Badass boasting from fictional evidence?

Yes, dear, some of us are programmers, we know about waterfalls.

If anyone here knew anything about the Waterfall Model, they'd know it was only ever proposed sarcastically, as a perfect example of how real engineering projects never work. "Agile" is pretty goddamn fake, too. There's no replacement for actually using your mind to reason about which project-planning steps have the greatest expected value at any given time, and to account for unknown unknowns (i.e., debugging and other obstacles) as well.

Comment author: Eliezer_Yudkowsky 26 June 2014 05:51:22PM 0 points

If anyone here knew anything about the Waterfall Model, they'd know it was only ever proposed sarcastically, as a perfect example of how real engineering projects never work.

Yes, and I used it in that context: "We know about waterfalls" = "We know not to do waterfalls, so you don't need to tell us that." Thank you for that very charitable interpretation of my words.

Comment author: [deleted] 27 June 2014 05:48:03AM 0 points

Well, when you start off a sentence with "Yes, dear," the dripping sarcasm can be read multiple ways, none of them very useful or nice.

Whatever. No point fighting over tone given shared goals.