
In response to comment by gjm on Boxing an AI?
Comment author: tailcalled 27 March 2015 03:54:31PM 0 points [-]

I'm not saying it would solve everything, I'm saying it would be a way to test significant aspects of AI without destroying the world, including significant aspects of their morality. It's not a "do this magic and morality for AI is solved" so much as a "this doable step helps with parts of AI design, probably including preventing the worst classes of paperclip-maximization".

In response to comment by tailcalled on Boxing an AI?
Comment author: gjm 27 March 2015 09:20:39PM 1 point [-]

Yup, maybe. But don't you think it's likely that the values we want to impart to an AI are going to be ones that come out really radically differently for a universe without us in it? For instance, we might want the AI to serve us, which of course isn't even a concept that makes sense if it's in a simulated universe without us. Or we might want it to value all intelligent life, which is a thing that looks very different if the AI is the only intelligent life in its universe. So: yes, I agree that running the AI in a simulated world might tell us some useful things, but it doesn't look to me as if the things it could tell us a lot about overlap very much with the things we care most about.

In response to Boxing an AI?
Comment author: gjm 27 March 2015 03:42:40PM 3 points [-]

One of the things we care about most, if there's a superintelligent AI around, is what it does for, with, and to us. Making a game sufficiently advanced to have accurate models of us in it (1) is really difficult, and (2) is arguably grossly immoral if there's a real danger that those models of us are going to get eaten or tortured or whatever by an AI. Expanding on #1: it requires human-level AI plus enough capacity to simulate an awful lot of humans, which means also enough capacity to simulate one human very very fast; it is entirely possible that we will have superhumanly capable AI before we have (at least under our control) the ability to simulate millions or billions of humans.

(There is also the issue mentioned by AABoyles and Error, that the AI may well work out it's in a box. Note that we may not be able to tell whether it's worked out it's in a box. And if we're observing the box and it guesses there's someone observing the box, it may be able to influence our decisions in ways we would prefer it not to.)

Comment author: ChristianKl 25 March 2015 02:51:55PM 0 points [-]

Why do you believe that the problem of naming doesn't fall within computer science? Because people in that field find the question too low status to work on?

Comment author: gjm 25 March 2015 03:57:22PM *  0 points [-]

Nothing to do with status (did I actually say something that suggested a status link?), and my claim isn't that computer science doesn't have a problem with naming things (everything has a problem with naming things) but that when Karlton said "computer science" he probably meant "software development".

[EDITED to remove a remark that was maybe unproductively cynical.]

Comment author: DeVliegendeHollander 25 March 2015 08:48:49AM 1 point [-]

I think the GSH is largely that our whole way of thinking, our terminology, our philosophy, our science evolved in theistic societies. Taking god out of it leaves a lot of former linkages dangling in the air; we will probably learn to link them up sooner or later, but it requires revising a surprisingly large amount.

For example, a godless universe has no laws of nature, just models of nature that happen to be predictive.

For example, there isn't really such a thing as progress, because there cannot be a goal in history in a godless universe. There is social change, and it is up to you to judge whether it is good.

For example, there are no implicit promises in a godless universe; we could do everything "right" and still go extinct. This is counterintuitive at a deep, in-the-bones level: our whole cultural history teaches that if we make a good enough effort, some mysterious force will pat us on the back, give us a B+ for effort, and pick up the rest of the slack, because this is what our parents and teachers did for us. Just look at common action movies: they are about heroes trying hard, almost failing, then getting almost miraculously lucky. Deus ex machina.

The GSH becomes very intense when you start raising children. For example, it would mean not giving praise for effort and, in fact, sometimes punishing good solutions to demonstrate how, in the real world, you can do things right and still fail. This would be really cruel, and we probably don't want to do it. Most education tends to imply that what it teaches is certain truth, laws of nature, etc., so things get hard from here.

Comment author: gjm 25 March 2015 01:06:49PM 2 points [-]

Taking god out of it leaves a lot of former linkages dangling in the air

There's a nice exposition of roughly this idea over at Yvain's / Scott Alexander's blog.

Comment author: NancyLebovitz 05 September 2013 02:12:14AM 8 points [-]

This is all reminding me of a long article (that I can't find) about children left in cars-- a potentially deadly situation. The article concluded that the major risk wasn't so much neglectful parents as overwhelmed parents-- they were using all their mental resources for normal life, and if their schedules were changed or their stress level was increased, the risk of forgetting a child in a car was increased.

I saw some suggestions for improving the odds of not making that mistake. One was to have a habit of clipping a string from the child to the parent when the child is put into the car seat. Another was to make a habit, before the child was born, of circling the car and looking into it immediately after getting out.

Comment author: gjm 25 March 2015 09:14:41AM 1 point [-]

a long article (that I can't find) about children left in cars

Perhaps this one: Fatal Distraction by Gene Weingarten in the Washington Post.

Comment author: eli_sennesh 19 March 2015 08:38:04AM 0 points [-]

Describing good play as "making few mistakes" seems like the wrong terminology to me. A mistake is not a thing in and of itself; it's just the entire space of possible games outside the very narrow subset that leads to victory. If you give me a list of 100 chess mistakes, you've actually told me a lot less about the game than if you'd given me a list of 50 good strategies -- identifying a point in the larger space of losing strategies encodes far less information than picking one in the smaller space of winning ones.
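(To put rough numbers on that: a minimal back-of-the-envelope sketch, assuming a uniform prior over N candidate plans; N itself is made up purely for illustration.)

```python
import math

# Made-up size of the space of candidate plans, uniform prior over which to play.
N = 10_000

# "That plan is a mistake" rules out a single losing option:
bits_from_one_mistake = math.log2(N) - math.log2(N - 1)

# "That plan wins" hands you the answer outright:
bits_from_one_winning_strategy = math.log2(N)

print(f"one pointed-out mistake:  {bits_from_one_mistake:.5f} bits")
print(f"one pointed-out strategy: {bits_from_one_winning_strategy:.2f} bits")
# Roughly 0.00014 bits vs 13.3 bits -- so a list of 100 mistakes can still
# carry far less information than a short list of good strategies.
```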

And the real reason I'm nitpicking here is that my advisor has always proceeded mostly by pointing out mistakes, but rarely by identifying helpful, effective strategies, and so I feel like I've failed to learn much from him for very solid information-theoretic reasons.

Comment author: gjm 25 March 2015 02:23:09AM 1 point [-]

my advisor [...]

Have you discussed this with him? Perhaps he hasn't noticed this and would be delighted to talk strategies. Perhaps he has a reason (good or bad) for doing as he does. (E.g., he may think that you'll learn more effectively by finding effective strategies for yourself, and that pointing them out explicitly will stunt your development in the longer run.) Perhaps his understanding of effective strategies is all implicit and he can't communicate it to you explicitly.

Comment author: James_Miller 24 March 2015 09:45:29PM 0 points [-]

No, because hedge funds would already be doing this, and you would have to think you were better at it than them.

Comment author: gjm 25 March 2015 02:20:34AM 1 point [-]

You could alternatively think that hedge funds don't have enough trading volume to shift the prices so far that the heuristic stops working. Imagine a market composed of a thousand idiots, one hedge fund, and you, each participant having roughly equal resources. If all the idiots make some mistake, the hedge fund will get there before you and profit from it, but unless it's leveraged 1000:1 (which would be ... brave) there are likely still profits for you to take by exploiting the same mistake.
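(A toy version of that arithmetic, as a minimal sketch; the numbers below, including the assumed leverage, are made up purely for illustration.)

```python
# Toy version of the "thousand idiots, one hedge fund, and you" market.
n_idiots = 1000
capital_per_participant = 1.0                      # roughly equal resources each
mispriced_capital = n_idiots * capital_per_participant

hedge_fund_leverage = 10                           # assumed; nowhere near 1000:1
hedge_fund_firepower = capital_per_participant * hedge_fund_leverage

# Whatever the fund cannot absorb is mispricing left over for you to exploit.
residual_mispricing = mispriced_capital - hedge_fund_firepower
print(f"mispriced: {mispriced_capital}, fund offsets: {hedge_fund_firepower}, "
      f"left for you: {residual_mispricing}")      # 1000.0, 10.0, 990.0
```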

In reality there are a lot of hedge funds and some of them have an awful lot of money, but they're still no more than ~1% of the market.

Comment author: Xerographica 25 March 2015 12:47:19AM *  -7 points [-]

I decree that, from this day forward, every discussion has to be about my obsession with taxes. Not really. In case you didn't get the memo... nobody here is forced to reply to my comments. That I know of. If you were forced to reply to my comments... then please let me know who overrode your difference. I will surely give them a stern and strongly worded lecture on the value of difference.

Of course SA's concern is that AIs would override difference. Overriding difference means less freedom. If SA wasn't concerned with AIs turning us humans into puppets... then he wouldn't be obsessed with AI safety.

My question is... if he's concerned with having our difference overridden... then why isn't he concerned with our current system? It's a perfectly legitimate and relevant question. Why is he ignoring the clear and present danger and focusing instead on an unclear and future danger?

Comment author: gjm 25 March 2015 02:12:19AM 2 points [-]

Of course SA's concern is that AIs would override difference. Overriding difference means less freedom. [...]

I question the accuracy of your mental model of Stuart_Armstrong, and of your reading of what he wrote. There are many ways in which an insufficiently friendly AI could harm us, and they aren't all about "overriding difference" or "less freedom". If (e.g.) people are entombed in bunkers, lobotomized and on medical drips, lack of freedom is not their only problem. (I confess myself at a bit of a disadvantage here, because I don't know exactly what you mean by "overriding difference"; it doesn't sound to me equivalent to lacking freedom, for instance. Your love of neologism is impeding communication.)

why isn't he concerned with our current system?

I don't believe you have any good reason to think he isn't. All you know is that he is currently posting a lot of stuff about something else, and it appears that this bothers you.

Allow me to answer the question that I think is implicit in your first paragraph. The reason why I'm making a fuss about this is that you are doing something incredibly rude: barging into a discussion that has nothing at all to do with your pet obsession and trying to wrench the discussion onto the topic you favour. (And, in doing so, attacking someone who has done nothing to merit your attack.)

I have seen online communities destroyed by individuals with such obsessions. I don't think that's a serious danger here; LW is pretty robust. But, although you don't have the power to destroy LW, you do (unfortunately) have the power to make every discussion here just a little bit more annoying and less useful, and I am worried that you are going to try, and I would like to dissuade you from doing it.

Comment author: Mark_Friedenbach 24 March 2015 11:09:46PM 1 point [-]

Sorry I'm dense. What does this have to do with anything? It is true that the balloonist is in a hot air balloon 30 feet above a field. These are correct facts. Are you arguing for a concept of truth which would not qualify "Yes, you're in a hot air balloon, about 30 feet above this field" to be a true statement?

Comment author: gjm 25 March 2015 12:24:42AM 2 points [-]

I think Lumifer is suggesting that a model can correspond accurately to reality (e.g., representing the fact that X is in a hot air balloon 30 feet above Y's current location) but none the less be useless (e.g., because all X wants to know is how to get to Vladivostok, and knowing he's in a balloon 30 feet above Y doesn't help with that). And that this is an example of how a model can be "bad" other than inaccurate correspondence with reality, which is what you were asking for a few comments upthread.

Comment author: Xerographica 24 March 2015 11:40:21PM -5 points [-]

The current system overrides difference. We elect a small group of humans to spend the taxes earned by a large group of humans. Your concern is that AIs would override difference. But, where's your concern for our current system? Why is it ok for humans to override difference but not ok for AIs to override difference? Either you have a double standard... or you don't realize that you support a system that overrides difference.

Comment author: gjm 25 March 2015 12:21:17AM 1 point [-]

Your concern is that AIs would override difference.

That doesn't look to me at all like an accurate description of Stuart_Armstrong's concern.

But, where's your concern for our current system?

Please try to understand that not every discussion has to be about your obsession with taxes.
