Vladimir_Nesov comments on What does a calculator mean by "2"? - Less Wrong

Post author: Wei_Dai 07 February 2011 02:49AM

Comment author: Vladimir_Nesov 07 February 2011 11:07:27AM 11 points

On "right" in moral arguments. Why does it make sense to introduce the notion of "right" at all? Whenever we are faced with a moral argument, we're moved by specific moral considerations, never by abstract rightness. There is a mystery: what makes these arguments worth being moved by? And then we have the answer: they possess the enlivening quality of élan vital (ahem), of meta-ethical morality.

It looks more and more compelling to me that "morality" is more like phlogiston than fire: a word with no explanatory power and no moving parts, one that just lumps together all the specific reasons for action, and that carries too many explanatory connotations for what is still an open question.

Comment author: Wei_Dai 07 February 2011 07:10:34PM 1 point

Do you take a similar position on mathematical truth? If not, why? What's the relevant difference between "true" and "right"?

Comment author: Vladimir_Nesov 08 February 2011 12:33:46AM 1 point

For any heuristic, indeed any query that is part of the agent, the normative criterion for its performance should be given by the whole agent. What should truth, the answers to logical questions, be? What probability should a given event in the world be assigned? These questions are no simpler than the whole of morality. If we define a heuristic that is not optimized by the whole of morality, this heuristic will inevitably become obsolete, tossed out whole. If we allow improvements (or see substitution as change), then the heuristic refers to morality, and is potentially no simpler than the whole.

Truth and reality are the most precise and powerful heuristics known to us: truth as the way logical queries should be answered, and reality as the way we should assign anticipation to the world, planning for some circumstances over others. But there is no guarantee that the "urge to keep on counting" remains the dominant factor in queries about truth, or that the chocolate superstimulus doesn't leave a dent on the parameters of quantum gravity.

The difference from the overall "morality" is that we know a great deal more about these aspects than about the others. The words themselves are no longer relevant in their potential curiosity-stopping quality.

(Knowledge of these powerful heuristics will most likely lead to humanity's ruin. Anything that doesn't use them is not interesting; an alien AI that doesn't care about truth or reality eliminates itself quickly from our notice, but one that does care about these virtues will start rewriting things we deem important, even if it possesses almost no other virtues.)

Comment author: cousin_it 07 February 2011 12:09:05PM 1 point

Good. I'm adopting this way of thought.

So one possible way forward is to enumerate all our reasons for action, and also all the reasons for discomfort, I guess. Maybe Eliezer was wrong in mocking the Open Source Wish Project. Better yet, we may look for an automated way of enumerating all our "thermostats" and checking that we didn't miss any. This sounds more promising than trying to formulate a unified utility function, because this way we can figure out the easy stuff first (children on railroad tracks) and leave the difficult stuff for later (torture vs. dust specks).

Comment author: Nisan 08 February 2011 08:18:46AM 1 point

> So one possible way forward is to enumerate all our reasons for action

This is a good idea. "What reasons for action do actual people use?" sounds like a better question than "What reasons for action exist?"

Comment author: Vladimir_Nesov 07 February 2011 01:14:25PM 1 point

> Maybe Eliezer was wrong in mocking the Open Source Wish Project.

"Wishes" are directed at undefined magical genies. What we need are laws of thought, methods of (and tools for) figuring out what to do.

Comment author: cousin_it 07 February 2011 04:37:05PM 1 point

Devising a procedure to figure out what to do in arbitrary situations is obviously even harder than creating a human-equivalent AI, so I wouldn't wish this problem upon myself! First I'd like to see an exhaustive list of reasons for action that actual people use in ordinary situations that feel "clear-cut". Then we can look at this data and figure out the next step.

Comment author: Vladimir_Nesov 07 February 2011 06:13:15PM 1 point

> Devising a procedure to figure out what to do in arbitrary situations is obviously even harder than creating a human-equivalent AI

Yes, blowing up the universe with an intelligence explosion is much easier than preserving human values.

Comment author: Vladimir_Nesov 07 February 2011 05:07:25PM 0 points

> Then we can look at this data and figure out the next step.

Sounds like an excuse to postpone figuring out the next step. What do you expect to see, and what would you do depending on what you see? A "list of reasons for action that actual people use in ordinary situations" doesn't look useful.

Comment author: cousin_it 07 February 2011 05:26:17PM 0 points

Thinking you can figure out the next step today is unsubstantiated arrogance. You cannot write a program that will win the Netflix Prize if you don't have the dataset. Yeah, I guess a superintelligence could write it blindly from first principles, using just a textbook on machine learning, but seriously, WTF.
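[Editor's note: the point that prediction is impossible without the data can be made concrete with a toy sketch. This is not the actual Netflix Prize setup (that used ~100 million real ratings and RMSE scoring); it is an assumed, minimal user-based collaborative filter over an invented 4x4 ratings matrix, shown only because every step of the prediction consumes the dataset itself:]

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = movies; 0 marks "unrated".
# Without some such matrix, there is nothing for the algorithm to work on.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def predict(user, movie, ratings):
    """Predict an unseen rating from the users who did rate the movie,
    weighted by cosine similarity to the target user on co-rated movies."""
    scores, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, movie] == 0:
            continue
        # Similarity is computed only over movies both users rated.
        both = (ratings[user] > 0) & (ratings[other] > 0)
        if not both.any():
            continue
        a, b = ratings[user, both], ratings[other, both]
        sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        scores += sim * ratings[other, movie]
        weights += sim
    return scores / weights if weights else None

# User 0's predicted rating for movie 2, recoverable only from the data.
print(round(predict(0, 2, ratings), 2))  # → 2.64 for this toy matrix
```

Note that the model has no opinion at all about a user or movie absent from the matrix, which is the substance of the "no dataset, no program" complaint.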

Comment author: Vladimir_Nesov 07 February 2011 05:28:46PM 0 points

With the Netflix Prize, you need training data of the same kind as the data you want to predict. Predicting what stories people will tell in novel situations when deciding to act is not our goal.

Comment author: cousin_it 07 February 2011 05:44:32PM 0 points

Why not? I think you could use that knowledge to design a utopia that won't make people go aaaargh. Then build it, using AIs or whatever tools you have.

Comment author: Vladimir_Nesov 07 February 2011 06:02:46PM 1 point

The usual complexity-of-value considerations. The meaning of the stories (i.e., specifications detailed enough to actually implement, the way they should be and not simply the way a human would try elaborating them) is not given just by the text of the stories, and once you're able to figure out the way things should be, you no longer need human-generated stories.

This is a different kind of object, and having lots of stories doesn't obviously help. Even if the stories would serve some purpose, I don't quite see how waiting for an explicit collection of stories is going to help in developing the tools that use them.

Comment author: XiXiDu 07 February 2011 12:21:09PM 0 points

> So one possible way forward is to enumerate all our reasons for action...

Are you aware of this thread?

Comment author: Vladimir_Nesov 07 February 2011 01:09:22PM 0 points

"Reason for action" is no more enlightening than "morality", but it has fewer explanatory (curiosity-stopping) connotations. In that context, it was more of a "that hot yellowish stuff over there" as opposed to "phlogiston".