
Dmytry comments on SMBC comic: poorly programmed average-utility-maximizing AI - Less Wrong Discussion

9 Post author: Jonathan_Graehl 06 April 2012 07:18AM




Comment author: Dmytry 06 April 2012 03:15:46PM *  3 points [-]

It doesn't have to tell the monster. (This, by the way, is one wireheading-related issue; I do rather hate the lingo here, though; calling it "wireheading" makes it sound as if there weren't a couple of thousand years of moral philosophy about this issue and related ones.)

Comment author: Emile 06 April 2012 03:56:31PM 2 points [-]

This, by the way, is one wireheading-related issue; I do rather hate the lingo here, though; calling it "wireheading" makes it sound as if there weren't a couple of thousand years of moral philosophy about this issue and related ones.

I'm not aware of an alternative to "wireheading" with the same meaning.

Comment author: Rhwawn 06 April 2012 08:09:42PM 10 points [-]

Go classical - 'lotus-eating'.

Comment author: Dmytry 06 April 2012 09:11:44PM *  1 point [-]

Good one.

http://en.wikipedia.org/wiki/Lotus-eaters

That's the ancient Greeks writing about hypothetical wireheads. ("Moral philosophy" is perhaps a bad choice of words when searching for Greek material; "ethics" is the Greek word.)

Comment author: Emile 06 April 2012 08:56:44PM *  1 point [-]

A bit of searching around that turned up nearly no references to lotus-eating or lotus-eaters in moral philosophy.

Something much closer to "wireheading" would be hedonism, and more specifically Nozick's Experience Machine, which is pretty much wireheading, but isn't thousands of years old, and has been referenced here.

(And the term "wirehead" as used here probably comes from the Known Space stories, so it likely predates Nozick's 1974 book.)

Comment author: Rhwawn 06 April 2012 09:14:41PM 2 points [-]

I don't think you looked very hard - I turned up a few books apparently on moral philosophy by searching in Google Books for 'moral ("lotus eating" OR "lotus-eating" OR "lotus eater" OR "lotus-eater")'.

And yes, I'm pretty sure the wirehead term comes from Niven's Known Space. I've never seen any other origin discussed.

Comment author: Dmytry 06 April 2012 09:03:19PM *  2 points [-]

Well, for one thing, it ought to be obvious that Mohammed would have banned a wire into the pleasure centre; lacking the wires, he just banned alcohol and other intoxicants. The concept of "wrong" ways of seeking pleasure is very, very old.

Comment author: Desrtopa 06 April 2012 03:18:45PM 0 points [-]

It would be awfully hard to hide.

Sure, it could lock the monster in an illusory world of optimal happiness, or just stimulate his pleasure centers directly, etc. But unless we assume that the AI is working under constraints that prevent that sort of thing, the comic doesn't make much sense.

Comment author: Dmytry 06 April 2012 04:03:14PM *  0 points [-]

There's no clear line between "hiding" and "not showing". You can leave just a million people or so to be placed around the monster, and simply not show him the rest. It's not as if the AI is turning every wall into a screen displaying the suffering involved in building the pyramids. Or you can kill those people and present it in such a way that the monster derives pleasure from it. At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.

edit: I think these solutions come to mind really easily when you know what a Soviet factory would do to exceed the five-year plan.

Comment author: Desrtopa 06 April 2012 06:33:35PM 1 point [-]

At any rate, anyone whose death would go unnoticed by the monster, or whose death does not sufficiently distress the monster, would die, if the AI is to focus on average pleasure.

The AI explicitly wasn't focused on average pleasure, but on total pleasure, as measured by average pleasure times the population.
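The divergence between the two objectives is easy to make concrete. Here is a toy sketch (not from the comic or the thread; the pleasure values are made up) showing that removing below-average members raises the average while lowering the total:

```python
# Toy illustration of average- vs total-pleasure maximization.
# Each person's pleasure is modeled as a single number; "culling"
# means removing a person from the population entirely.

def average(pleasures):
    """Mean pleasure across the population."""
    return sum(pleasures) / len(pleasures)

def total(pleasures):
    """Total pleasure; equals average(pleasures) * len(pleasures)."""
    return sum(pleasures)

population = [10, 9, 2, 1]  # hypothetical per-person pleasure levels

# An average-maximizer gains by removing anyone below the current mean:
culled = [p for p in population if p >= average(population)]

print(average(population))  # 5.5
print(average(culled))      # 9.5 -- the average went up...
print(total(population))    # 22
print(total(culled))        # 19  -- ...but the total went down.
```

This is why the distinction matters in the comic: a total maximizer (average times population) is penalized for every person it removes, whereas an average maximizer is rewarded for quietly discarding anyone who drags the mean down.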

Comment author: Dmytry 06 April 2012 08:33:56PM 1 point [-]

Yep. I was just posting on what an average-pleasure-maximizing AI would do; that isn't part of the story.