cousin_it comments on What does a calculator mean by "2"? - Less Wrong

Post author: Wei_Dai 07 February 2011 02:49AM


Comment author: cousin_it 07 February 2011 12:09:05PM 1 point

Good. I'm adopting this way of thought.

So one possible way forward is to enumerate all our reasons for action, and also all our reasons for discomfort, I guess. Maybe Eliezer was wrong to mock the Open Source Wish Project. Better yet, we could look for an automated way of enumerating all our "thermostats" and checking that we didn't miss any. This sounds more promising than trying to formulate a unified utility function, because this way we can figure out the easy stuff first (children on rail tracks) and leave the difficult stuff for later (torture vs. dust specks).

Comment author: Nisan 08 February 2011 08:18:46AM 1 point

So one possible way forward is to enumerate all our reasons for action

This is a good idea. "What reasons for action do actual people use?" sounds like a better question than "What reasons for action exist?"

Comment author: Vladimir_Nesov 07 February 2011 01:14:25PM 1 point

Maybe Eliezer was wrong in mocking the Open Source Wish Project.

"Wishes" are directed at undefined magical genies. What we need are laws of thought, methods of (and tools for) figuring out what to do.

Comment author: cousin_it 07 February 2011 04:37:05PM 1 point

Devising a procedure to figure out what to do in arbitrary situations is obviously even harder than creating a human-equivalent AI, so I wouldn't wish this problem upon myself! First I'd like to see an exhaustive list of reasons for action that actual people use in ordinary situations that feel "clear-cut". Then we can look at this data and figure out the next step.

Comment author: Vladimir_Nesov 07 February 2011 06:13:15PM 1 point

Devising a procedure to figure out what to do in arbitrary situations is obviously even harder than creating a human-equivalent AI

Yes, blowing up the universe with an intelligence explosion is much easier than preserving human values.

Comment author: Vladimir_Nesov 07 February 2011 05:07:25PM 0 points

Then we can look at this data and figure out the next step.

Sounds like an excuse to postpone figuring out the next step. What do you expect to see, and what would you do depending on what you see? "List of reasons for action that actual people use in ordinary situations" doesn't look useful.

Comment author: cousin_it 07 February 2011 05:26:17PM 0 points

Thinking you can figure out the next step today is unsubstantiated arrogance. You cannot write a program that will win the Netflix Prize if you don't have the test dataset. Yeah, I guess a superintelligence could write it blindly from first principles, using just a textbook on machine learning, but seriously, WTF.

Comment author: Vladimir_Nesov 07 February 2011 05:28:46PM 0 points

With the Netflix Prize, the data you need for training is the same kind of data you want to predict. Predicting what stories people will tell in novel situations when deciding to act is not our goal.

Comment author: cousin_it 07 February 2011 05:44:32PM 0 points

Why not? I think you could use that knowledge to design a utopia that won't make people go aaaargh. Then build it, using AIs or whatever tools you have.

Comment author: Vladimir_Nesov 07 February 2011 06:02:46PM 1 point

The usual complexity-of-value considerations. The meaning of the stories (i.e., specifications detailed enough to actually implement, the way they should be rather than simply the way a human would elaborate them) is not given just by the text of the stories, and once you're able to figure out the way things should be, you no longer need human-generated stories.

This is a different kind of object, and having lots of stories doesn't obviously help. Even if the stories would serve some purpose, I don't quite see how waiting for an explicit collection of stories is going to help in developing the tools that use them.

Comment author: XiXiDu 07 February 2011 12:21:09PM 0 points

So one possible way forward is to enumerate all our reasons for action...

Are you aware of this thread?

Comment author: Vladimir_Nesov 07 February 2011 01:09:22PM 0 points

"Reason for action" is no more enlightening than "morality", but with less explanatory (curiosity-stopping) connotations. In that context, it was more of "that hot yellow-ish stuff over there" as opposed to "phlogiston".