Less Wrong is a community blog devoted to refining the art of human rationality.

In response to High Challenge
Comment author: Vizikahn2 19 December 2008 08:44:15AM 4 points

How about making games that serve a purpose in the real world? Imagine a virtual world that generates and distributes quests and puzzles based on what kind of (robotic) work is needed in the real world. I guess this would go under "removing low-quality work to make way for high-quality work".

http://en.wikipedia.org/wiki/Game_with_a_purpose
http://en.wikipedia.org/wiki/Human-based_computation

In response to comment by Vizikahn2 on High Challenge
Comment author: Arandur 09 November 2012 06:41:24PM 0 points

... huh. I wonder if Neal Stephenson is a LW reader. See his (most recent?) book, REAMDE, for an implementation of this idea.

In response to High Challenge
Comment author: Arandur 09 November 2012 06:36:58PM 0 points

I'm not sure that the difference between 4D states and 3D states is meaningful, with respect to eudaimoniac valuations. Doesn't this overlook the fact that human memories are encoded physically, and are therefore part of the 3D state being looked at? I don't see any meaningful difference between a valuation over a 4D state, and a valuation over a 3D state including memories of the past.

In other words, I can think of no 3D state whose eudaimoniac valuation is worse than that of the 4D state having it as its endpoint.

(In fact, I can think of quite a few which may in fact be better, for pathological choices of 4D state, e.g. ones extending all the way back to the Dark Ages or before.)

P.S. Is there a standardized spelling for the term which I have chosen to spell as "eudaimoniac"? A quick Google search suggested this one as the best candidate.

In response to comment by Arandur on Is Morality Given?
Comment author: Alicorn 22 August 2011 01:02:16AM 5 points
In response to comment by Alicorn on Is Morality Given?
Comment author: Arandur 22 August 2011 01:05:30AM 1 point

Oh dear; how embarrassing. Let me try my argument again from the top, then.

In response to comment by Arandur on Is Morality Given?
Comment author: hairyfigment 21 August 2011 05:54:09PM 3 points

Er, why couldn't Clippy model itself? Surely you don't mean that you think Clippy would change its end-goals if it did so (for what reason?).

Comment author: Arandur 22 August 2011 12:46:04AM 5 points

... Just to check: we're talking about Microsoft Office's Clippy, right?

In response to comment by Arandur on Is Morality Given?
Comment author: hairyfigment 21 August 2011 03:14:01AM 0 points

Well, that depends. What does "sufficiently advanced" mean? Does this claim have anything to say about Clippy?

If it doesn't constrain anticipation there, I suspect no difference exists.

Comment author: Arandur 21 August 2011 05:35:12PM 0 points

Ha! No. I guess I'm using a stricter definition of a "mind" than is used in that post: one that is able to model itself. I recognize the utility of such a generalized definition of intelligence, but I'm talking about a subclass of said intelligences.

In response to comment by Arandur on Is Morality Given?
Comment author: hairyfigment 20 August 2011 01:35:48AM 1 point

By your Devil's logic here, we would expect at least part of human nature to accord with the whole of this 'stone tablet'. I think we could vary the argument to avoid this conclusion. But as written it implies that each 'law' from the 'tablet' has a reflection in human nature, even if perhaps some other part of human nature works against its realization.

This implies that there exists some complicated aspect of human nature we could use to define morality which would give us the same answers as the 'stone tablet'.

Comment author: Arandur 20 August 2011 04:27:35AM 1 point

Which sounds like that fuzzily-defined "conscience" thing. So suppose I say that this "stone tablet" is not a literal tablet, but is rather a set of rules that sufficiently advanced lifeforms will tend to accord with? Is this fundamentally different from the opposite side of the argument?

Comment author: Strange7 19 August 2011 10:52:31PM -1 points

More generally I mean that an AI capable of succumbing to this particular problem wouldn't be able to function in the real world well enough to cause damage.

Comment author: Arandur 20 August 2011 04:23:35AM -1 points

I'm not sure that was ever a question. :3

Comment author: Strange7 19 August 2011 01:31:52AM 2 points

Why would an AI consider those two scenarios and no others? Seems more likely it would have to chew over every equivalently-complex hypothesis before coming to any actionable conclusion... at which point it stops being a worrisome, potentially world-destroying AI and becomes a brick, with a progress bar that won't visibly advance until after the last proton has decayed.

Comment author: Arandur 19 August 2011 10:16:56PM 1 point

... which doesn't solve the problem, but at least that AI won't be giving anyone... five dollars? Your point is valid, but it doesn't expand on anything.

Comment author: Arandur 19 August 2011 10:12:55PM *  2 points

I think the problem might lie in the almost laughable disparity between the price and the possible risk. A human mind is not capable of instinctively providing a reason why it would be worth killing 3^^^^3 people - or even, I think, a million people - as punishment for not getting $5. A mind that would value $5 as much as, or more than, the lives of 3^^^^3 people is utterly alien to us, and so we leap to the much more likely assumption that the guy is crazy.

Is this a bias? I'd call it a heuristic. It calls to my mind the discussion in Neal Stephenson's Anathem about pink nerve-gas-farting dragons. (Mandatory warning: fictional example.) The crux of it is, our minds only bother to anticipate situations that we can conceive of as logical. Therefore, the manifest illogicality of the mugging (why are 3^^^^3 lives worth $5; if you're a Matrix Lord why can't you just generate $5, or, better yet, modify my mind so that I'm inclined to give you $5, etc.) causes us to anti-anticipate its truth. Otherwise, what's to stop you from imagining, as stated by Tom_McCabe2 (and mitchell_porter2, &c.), that typing the string "QWERTYUIOP" leads to, for example, 3^^^^3 deaths? If you imagine it, and conceive of it as a logically possible outcome, then regardless of its improbability, by your argument (as I see it), a "mind that worked strictly by Solomonoff induction" should cease to type that string of letters ever again. By induction, such a mind could cause itself to cease to take any action, which would lead to... well, if the AI had access to itself, likely self-deletion.

That's my top-of-the-head theory. It doesn't really answer the question at hand, but maybe I'm on the right track...?
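[Editor's note: the expected-utility arithmetic behind the comment above can be sketched in a few lines. This is a toy illustration, not anyone's actual proposal; the function name, prior, and stake sizes are all made up for the example. The point is only that, for a naive expected-utility maximizer, any fixed nonzero prior on "action X causes N deaths" swamps X's ordinary payoff once N is astronomically large, so the agent refuses every action carrying such a hypothesis.]

```python
def expected_utility(ordinary_payoff, prior_of_catastrophe, deaths_if_true):
    """Naive expected utility: ordinary payoff minus prior-weighted catastrophe."""
    return ordinary_payoff - prior_of_catastrophe * deaths_if_true

# Typing "QWERTYUIOP" normally nets the agent a tiny positive payoff...
payoff = 1.0

# ...but someone asserts it kills 3^^^^3 people. 3^^^^3 is far too large to
# represent; 10**100 already makes the point. Even a vanishingly small prior
# on the claim dominates once the stakes are this big.
prior = 1e-50
deaths = 10 ** 100

# The expected utility goes hugely negative, so the agent never types it -
# and by the same argument, never takes any action at all.
assert expected_utility(payoff, prior, deaths) < 0
```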

In response to Is Morality Given?
Comment author: Arandur 18 August 2011 03:41:16PM 2 points

"If morality exists independently of human nature, then isn't it a remarkable coincidence that, say, love is good?"

I'm going to play Devil's Advocate for a moment here. Anyone, please feel free to answer, but do not interpret the below arguments as correlating with my set of beliefs.

"A remarkable coincidence? Of course not! If we're supposing that this 'stone tablet' has some influence on the universe - and if it exists, it must exert influence, otherwise we wouldn't have any evidence wherewith to be arguing over whether or not it exists - then it had influence on our 'creation', whether (in order to cover all bases) we got here purely through evolution, or via some external manipulation as well. I should think it would be yet stranger if we had human natures that did not accord with such a 'stone tablet'."
