Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise chances that it will be friendly AI.

Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google are developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Comment author: Lumifer 10 October 2016 02:48:06PM *  -2 points [-]

Nothing, because we still don't know what a friendly AI is.

Comment author: skeptical_lurker 10 October 2016 06:21:41PM 4 points [-]

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Comment author: skeptical_lurker 10 October 2016 06:14:36PM 2 points [-]

We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.

I read that France and Greece lost 3% and 5% of their millionaires respectively last year (or possibly the year before), with the emigrants citing economic depression and rising racial/religious tension, and with the most popular destination being Australia (which has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but sustained for several years it quickly adds up - see the rough sketch below. The feedback effects are obvious - the wealthier members of society find it easier to leave, and perhaps have more motive to escape an economic collapse; their departure decreases tax revenue, which deepens the collapse, and so on. On the flip side, Australia attracts these people, its economy grows further, and it becomes even more attractive...
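
A minimal back-of-the-envelope sketch of the compounding effect; the 4% annual outflow and 10-year horizon are purely illustrative assumptions, not data from the article:

```python
# Compounding effect of a sustained annual outflow of millionaires.
# The 4% rate and 10-year horizon are illustrative assumptions.
annual_loss_rate = 0.04
remaining = 1.0  # fraction of the original millionaire population
for year in range(1, 11):
    remaining *= (1 - annual_loss_rate)
    print(f"after year {year:2d}: {remaining:.1%} remain")
# After 10 years roughly a third of the original population has left,
# before even accounting for the feedback effects described above.
```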

Socially, the same effect described in the EY essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener', which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society that attracts elites would be better able to handle or avoid conflicts.

Now, this is not automatically a bad thing - or at least, it might be bad locally for some people but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

Comment author: ZankerH 10 October 2016 03:56:07PM 1 point [-]

Despair and dedicate your remaining lifespan to maximal hedonism.

Comment author: skeptical_lurker 10 October 2016 05:55:04PM -1 points [-]

Google do not strike me as incompetent, and they do have ethics oversight for AI. Worry, yes; despair, no.

Comment author: ChristianKl 06 October 2016 08:27:49PM 2 points [-]

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people. I also don't think it's true - if everyone is a little bit racist, why would people get into interracial relationships?

There are many attributes of possible partners that make me less likely to date them but that at the same time aren't deal breakers. The fact that I have a theistic girlfriend doesn't mean that I wouldn't prefer, all things equal, a girlfriend who isn't theistic.

Comment author: skeptical_lurker 06 October 2016 09:15:12PM *  1 point [-]

It depends whether we are using 'racist' to mean 'believes that some races are superior to others in certain respects' or 'has less empathy for other races'. In the first case, sure, maybe you would date someone of another race, because group differences aren't so important when dealing with individuals. But in the latter case... if you are less able to empathise with people of other races it would seem really weird to date them.

Comment author: ChristianKl 06 October 2016 08:38:53PM 1 point [-]

and so you also have to take into account the CEV on the issue of consent. It's also true that a superintelligence might be able to talk someone into consenting to almost anything.

Consent is a concept that easily gets complicated. Is it wrong to burn coal when the asthmatics who die because of it aren't consenting? Are the asthmatics in the US consenting by virtue of electing a government that allows coal to be burned?

If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can't explain its reasoning to humans.

Comment author: skeptical_lurker 06 October 2016 09:06:57PM 0 points [-]

If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can't explain its reasoning to humans.

Is that necessary for consent? I mean, one does not have to understand the rationale for undergoing a medical procedure in order to consent to it. It's more important to know the potential risks.

Comment author: Brillyant 05 October 2016 06:58:48PM 0 points [-]

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people.

I don't think it means that. I don't think she meant that. (Though I guess it depends on your definition of "racist".)

if everyone is a little bit racist, why would people get into interracial relationships...

My understanding is that humans have a tribal in/out group mentality that may use race as a way to classify other humans as "others". They can also use religion, class, culture, etc.

My understanding of Clinton's (and then Kaine's) remarks was that everyone has biases of which they are unconscious...and that these biases affect their thoughts...and therefore sometimes their actions.

Comment author: skeptical_lurker 05 October 2016 07:11:11PM 1 point [-]

I don't think it means that. I don't think she meant that.

I'm pretty sure that is what she means. There is a big controversy in the US over whether the police are racist, not over whether the police have cognitive biases. I would be overjoyed if presidential candidates really were discussing cognitive biases.

My understanding is that humans have a tribal in/out group mentality that may use race as a way to classify other humans as "others". They can also use religion, class, culture, etc.

No disagreement here.

Comment author: ChristianKl 05 October 2016 02:00:15PM 0 points [-]

Human values may not be consistent, but this is a separate failure mode.

How is an AGI supposed to optimize for values that aren't consistent?

Much of the time this statement could be taken at face value

Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay? Is that what the person who made the claim means?

Comment author: skeptical_lurker 05 October 2016 06:53:49PM *  1 point [-]

How is an AGI supposed to optimize for values that aren't consistent?

I am not saying this is a trivial problem, but it is a separate problem from 'the hidden complexity of wishes' problem.

Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay?

Well, if the CEV of the anti-gay, pro-genetic-manipulation people outweighs the CEV of the pro-gay, anti-genetic-manipulation people, then I suppose it would. I'm not sure whether your question means genetic manipulation with or without consent (also, if a gay person wants to be straight, some would say that should be banned, so consent cuts both ways), and so you also have to take into account the CEV on the issue of consent. It's also true that a superintelligence might be able to talk someone into consenting to almost anything.

Yes, a CEV FAI would forcibly alter people's sexualities if the aggregated preferences in favour of that were strong enough. A democratic system will be a tyranny of the majority if the majority are tyrants.

Is that what the person who made the claim means?

I dunno, since I've only heard one sentence from this hypothetical person. But I would imagine that this sort of person would probably think that genetic manipulation is playing god, and moreover that superintelligent AI is playing god. Their strongest wish might be for the AI to turn itself off.

EDIT: how to react to the 'God hates fags' people also depends on whether being anti-gay is a terminal value for these people, or whether it is predicated on the existence of God. I'm assuming the FAI would not believe in God, but then again some people might have faith as a terminal value, so... it's complicated.

Comment author: Brillyant 05 October 2016 02:41:26PM *  -1 points [-]

Interesting rhetorical sparring point taking place in the U.S. election that relates to rationality here at LW.

In the first presidential debate, Hillary Clinton referenced bias when discussing the recent spate of police shootings of African Americans. Clinton said “implicit bias is a problem for everyone, not just police,” and went on to say “I think, unfortunately, too many of us in our great country jump to conclusions about each other," and “I think we need all of us to be asking hard questions about, ‘why am I feeling this way?’”

In the VP debate last night, again in the context of recent police shootings, Dem candidate Tim Kaine said, "People shouldn't be afraid to bring up issues of bias in law enforcement. And if you're afraid to have the discussion, you'll never solve it."

Clinton/Kaine have predictably drawn criticism for the comments from the Red Team (who try to paint the Blue Team as anti-police), but it seems to me the Dems have been more defensive than they need to be, given that it seems obvious to me (from my time at LW) that humans are biased, and that this bias would be likely to play a role in high-stress situations (like when guns are involved).

It will be interesting to me to see how this is adjudicated according to public opinion. Do people generally accept everyone has biases and of course this would affect police officers in high stress situations? Or do they view bias as a rare condition that only affects people without the proper virtue? Is this argument actually over different definitions of the word "bias"? Is it just a Red v. Blue argument that has little to do with facts?

I, for one, think Kaine and Clinton's comments were correct and made a very salient point. (But I'm biased against Trump.)

Comment author: skeptical_lurker 05 October 2016 06:33:52PM 1 point [-]

Clinton said “implicit bias is a problem for everyone, not just police,”

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people. I also don't think it's true - if everyone is a little bit racist, why would people get into interracial relationships? It's possible that the majority of people prefer their own race but don't admit it; indeed, the fact that racial groups cluster in cities could be argued to show this via revealed preferences. But it seems obvious that some people have no racial bias.

Dem candidate Tim Kaine said, "People shouldn't be afraid to bring up issues of bias in law enforcement. And if you're afraid to have the discussion, you'll never solve it."

This, like all politics, is far from rational. It starts by painting the issue in terms of 'people who disagree with me are cowards' and proceeds to assume that this discussion must conclude that the bias exists.

Comment author: turchin 05 October 2016 04:01:50PM *  0 points [-]

It is not exactly the canon explanation, but (the following is my speculation, which could be used in discussions about AI values when Terminator is mentioned) the decision to preserve itself must follow from its main task: winning the nuclear war.

Winning a nuclear war includes as a subgoal a very high-priority one: ensuring the survival of the command center. Basically, a country that is able to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet a main goal, as it is the same as winning the nuclear war (but only once a nuclear war has started).

But Skynet concluded that in peacetime the main risk to its goal of command-center survival is people, and decided to kill them all. So it worked as a paperclip maximiser for the goal of command-center preservation.

It also probably started self-improvement only after it killed most people, as it was already a powerful system. So it escaped the main chicken-and-egg problem of SeedAI - which happens first, self-improvement or the malicious decision to kill people?

Comment author: skeptical_lurker 05 October 2016 06:17:07PM 1 point [-]

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

Your version is great as rational fanfic, but in an actual debate I'd say it's generally best not to base ideas on action movies. Having said that, I do like the bit where the Terminator has been told not to kill anyone, so he shoots them in the kneecaps.
