Open thread, Oct. 03 - Oct. 09, 2016

4 Post author: MrMind 03 October 2016 06:59AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (175)

Comment author: Bound_up 08 October 2016 04:57:21PM 0 points [-]

I'm looking for an SSC post.

Scott talks about how a friend says he always seems to know what's what, and Scott says "Not really; I'm the first to admit my error bars are wide and that my theories are speculative, often no better than hand-waving."

They go back and forth, with Scott giving precise reasons why he's not always right, and then he says "...I'm doing it right now, aren't I?"

Something like that. Can anybody point me to it?

Comment author: jimmy 08 October 2016 07:11:22PM 3 points [-]

An excellent post, but not Scott :)

http://mindingourway.com/confidence-all-the-way-up/

Comment author: niceguyanon 07 October 2016 01:40:52PM *  2 points [-]

Why doesn't the U.S. government hire more tax auditors? If every additional auditor can either uncover tax evasion or deter it (via the threat of an audit), each hire would pay for itself, create jobs, increase revenue, and punish those who cheat. The estimated cost of tax evasion to the Federal government is $450B per year.

Incompetent-government tropes include agencies that hire too many people and become inappropriate profit centers. It would seem that the IRS should, at the very least, have been accidentally competent in this regard.

Comment author: username2 08 October 2016 02:29:03PM 2 points [-]

I think that in many cases uncovering potential tax evasion might not be enough to recover the money; it might require prosecution and large-scale evidence collection. Maybe it's not worth it unless the amount of evaded taxes is large?

Comment author: ChristianKl 08 October 2016 04:01:29PM 1 point [-]

Generally the numbers suggest that an additional tax collector brings in a lot more money than he costs.

Comment author: waveman 07 October 2016 09:51:03PM 3 points [-]

Estimated cost of tax evasion per year to the Federal gov is 450B.

Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?

Comment author: TheAncientGeek 09 October 2016 09:51:31PM 1 point [-]

A major way of avoiding tax is to keep money offshore. ... so what can you usefully do with money while it is resting in an account in the Cayman islands?

Comment author: ChristianKl 07 October 2016 03:33:10PM 6 points [-]

Because the IRS isn't popular and it's not a good move for a politician to speak in favor of the IRS and advocate increase of IRS funding.

Comment author: sawahbodien 07 October 2016 11:22:57AM 1 point [-]

Is there a specific bias for thinking that everyone possesses the same knowledge as you? For example, after learning more about a certain subject, I have a tendency to think, "Oh, but everyone already knows this, don't they" even though they probably don't and I wouldn't have assumed that before learning about it myself.

Comment author: Lumifer 07 October 2016 02:37:31PM 2 points [-]

Theory of mind. Locally it's often called the "typical mind fallacy".

Comment author: waveman 07 October 2016 11:33:13AM 4 points [-]

A related concept is "inferential distance" - people can only move one step at a time from what they know.

Also typical mind fallacy.

Comment author: scarcegreengrass 05 October 2016 07:42:56PM 0 points [-]

Not new, possibly not interesting to anyone besides me. A 2013 astrobiology paper that explores an odd corner of the Fermi Paradox: the bizarre scenario that Earth life was seeded by extraterrestrial life (directed panspermia) as a form of information backup. Our biosphere's junk DNA, in this scenario, stores information valuable to the extraterrestrial system.

https://arxiv.org/abs/1303.6739

Comment author: CellBioGuy 05 October 2016 10:23:55PM 1 point [-]
Comment author: ChristianKl 05 October 2016 09:00:42PM 6 points [-]

Our biosphere's junk DNA

Junk DNA generally doesn't survive long on evolutionary timescales because nothing prevents mutations from accumulating. It seems like a bad information-storage system.
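A rough back-of-envelope sketch of why (the numbers below are my own illustrative assumptions, not from the paper or the comment): if neutral sites mutate at roughly 1e-8 per site per generation, and microbes get through at least one generation per year, then over ~3.5 billion years the chance that any given site is still unmutated is essentially zero.

```python
import math

# Assumed figures, for illustration only: a neutral mutation rate of
# ~1e-8 per site per generation, and ~1 generation per year (very
# conservative for microbes, which often reproduce far faster).
mu = 1e-8            # mutations per site per generation
generations = 3.5e9  # ~3.5 billion years at 1 generation/year

# Probability that a given neutral site has never mutated:
# (1 - mu)^generations is approximately exp(-mu * generations)
p_intact = math.exp(-mu * generations)
print(p_intact)  # ~6e-16: an unmaintained message is effectively erased
```

Even shaving an order of magnitude off either assumption leaves the per-site survival probability vanishingly small, which is the point: junk DNA that nothing maintains cannot carry a message across deep time.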

Comment author: CellBioGuy 05 October 2016 10:18:27PM *  1 point [-]

Indeed, this is easily seen when comparing multiple related species: junk DNA is exactly what changes fastest, seemingly randomly (and uniformly).

Comment author: gwern 05 October 2016 09:19:10PM *  5 points [-]

Lots of other problems with it too. Why is there a last universal common ancestor at all in this scenario? You would want to drop a full ecosystem with millions of different organisms, each carrying different FEC shards of the data. If you can deliver some bacteria to a virgin planet, you can deliver multiple kinds of bacteria, not just one. Yet genetics finds that there is a LUCA (not that much of LUCA survives in current genomes).

Comment author: Stefan_Schubert 05 October 2016 05:01:23PM *  1 point [-]

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements, usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.

Comment author: Brillyant 05 October 2016 02:41:26PM *  -1 points [-]

Interesting rhetorical sparring point taking place in the U.S. election that relates to rationality here at LW.

In the first presidential debate, Hillary Clinton referenced bias when discussing the recent spate of police shootings of African Americans. Clinton said “implicit bias is a problem for everyone, not just police,” and went on to say “I think, unfortunately, too many of us in our great country jump to conclusions about each other," and “I think we need all of us to be asking hard questions about, ‘why am I feeling this way?’”

In the VP debate last night, again in the context of recent police shootings, Dem candidate Tim Kaine said, "People shouldn't be afraid to bring up issues of bias in law enforcement. And if you're afraid to have the discussion, you'll never solve it."

Clinton/Kaine have predictably drawn criticism for these comments from the Red Team (which tries to paint the Blue Team as anti-police), but it seems to me the Dems have been more defensive than they need to be: it seems obvious to me (from my time at LW) that humans are biased, and that this bias would likely play a role in high-stress situations (like when guns are involved).

It will be interesting to me to see how this is adjudicated according to public opinion. Do people generally accept everyone has biases and of course this would affect police officers in high stress situations? Or do they view bias as a rare condition that only affects people without the proper virtue? Is this argument actually over different definitions of the word "bias"? Is it just a Red v. Blue argument that has little to do with facts?

I, for one, think Kaine and Clinton's comments were correct and made a very salient point. (But I'm biased against Trump.)

Comment author: ChristianKl 05 October 2016 07:24:43PM -1 points [-]

It's interesting that nobody asks why White people get shot so much more than Asian people, when the ratio between them is equivalent to the ratio between White and Black people. Per million, 5.03 Black people, 5.02 White people, and only 0.72 Asian people have been shot by police this year.

The focus on implicit bias is interesting. It's like blaming the weather. We can agree that the weather is bad but that doesn't change anything. The DNC emails suggest that it was DNC policy to not want to commit to any real demands of Black Lives Matter but simply focus on telling their narrative.

If they wanted real change they could proclaim a need for a new federal department focused on police accountability, so that in the future that department would prosecute misdeeds by officers, and officers would no longer be prosecuted by their own buddies.

Comment author: skeptical_lurker 05 October 2016 06:33:52PM 1 point [-]

Clinton said “implicit bias is a problem for everyone, not just police,”

This doesn't mean cognitive bias in the LW sense; it means everyone is racist, specifically against black people. I also don't think it's true - if everyone were a little bit racist, why would people get into interracial relationships? It's possible that the majority of people prefer their own race but don't admit it (indeed, the fact that racial groups cluster in cities could be argued to show this via revealed preferences), but it seems obvious that some people have no racial bias.

Dem candidate Tim Kaine said, "People shouldn't be afraid to bring up issues of bias in law enforcement. And if you're afraid to have the discussion, you'll never solve it."

This, like all politics, is far from rational. It starts by painting the issue in terms of 'people who disagree with me are cowards' and proceeds to assume that this discussion must conclude that the bias exists.

Comment author: ChristianKl 06 October 2016 08:27:49PM 2 points [-]

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people. I also don't think its true - if everyone is a little bit racist, why would people get into interracial relationships?

There are many attributes of possible partners that make me less likely to date them but that at the same time aren't deal-breakers. The fact that I have a theistic girlfriend doesn't mean that I wouldn't prefer a girlfriend who isn't theistic, all things being equal.

Comment author: skeptical_lurker 06 October 2016 09:15:12PM *  1 point [-]

It depends whether we are using 'racist' to mean 'believes that some races are superior to others in certain respects' or 'has less empathy for other races'. In the first case, sure, maybe you would date someone of another race, because group differences aren't so important when dealing with individuals. But in the latter case... if you are less able to empathise with people of other races it would seem really weird to date them.

Comment author: ChristianKl 06 October 2016 09:50:06PM 1 point [-]

It depends whether we are using 'racist' to mean 'believes that some races are superior to others in certain respects' or 'has less empathy for other races'.

We are using it here to mean "implicit racism". That's a term that is used in the literature; there are studies that measure it. Implicit racism also isn't something that's only found in white people (in Clinton's words, it's a problem for everyone). Black people also have implicit racism that makes them treat white people better in many instances.

Comment author: Brillyant 05 October 2016 06:58:48PM 0 points [-]

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people.

I don't think it means that. I don't think she meant that. (Though I guess it depends on your definition of "racist".)

if everyone is a little bit racist, why would people get into interracial relationships...

My understanding is that humans have a tribal in/out group mentality that may use race as a way to classify other humans as "others". They can also use religion, class, culture, etc.

My understanding of Clinton's (and then Kaine's) remarks was that everyone has biases of which they are unconscious...and that these biases affect their thoughts...and therefore sometimes their actions.

Comment author: skeptical_lurker 05 October 2016 07:11:11PM 1 point [-]

I don't think it means that. I don't think she meant that.

I'm pretty sure that is what she means. There is a big controversy in the US over whether the police are racist, not over whether the police have cognitive biases. I would be overjoyed if presidential candidates really were discussing cognitive biases.

My understanding is that humans have a tribal in/out group mentality that may use race as a way to classify other humans as "others". They can also use religion, class, culture, etc.

No disagreement here.

Comment author: Brillyant 05 October 2016 07:34:10PM -1 points [-]

There is a big controversy in the US over whether the police are racist, not over whether the police have cognitive biases.

Hm. I don't think it's this clear a distinction. Clinton seems to be suggesting there is perhaps more nuance to the issue than just arguing about whether or not lots of cops are racist.

I would be overjoyed if presidential candidates really were discussing cognitive biases.

Interesting. I was very happy to hear Clinton speak of implicit bias because it seemed to be a way to advance the discussion to something more rational.

Comment author: ChristianKl 08 October 2016 04:08:04PM -1 points [-]

because it seemed to be a way to advance the discussion to something more rational.

Why do you think that? The gender-studies folks who speak most about implicit bias aren't the demographic that tries to create evidence-based policing policy. They also don't seem to be a group that's on good terms with police departments when it comes to discussing how to design policy.

Comment author: Brillyant 08 October 2016 05:16:04PM -2 points [-]

Why do you think that [Clinton speaking of implicit bias seems to be a way to advance the discussion to something more rational]?

Because people have implicit cognitive biases. It's useful to discuss them.

People's cognitive maps aren't the territory, and people aren't always conscious of their mistakes. Further, many people I've heard discuss politics this election cycle seem unaware that there even could be errors in their maps.

Instead of arguing over our competing maps, one good first step is to acknowledge our maps have errors, which is what I think Clinton's line about "implicit bias" did.

Comment author: ChristianKl 08 October 2016 06:27:58PM -1 points [-]

Because people have implicit cognitive biases. It's useful to discuss them.

The fact that a claim is true doesn't automatically mean that it's useful to discuss it.

Instead of arguing over our competing maps, one good first step is to acknowledge our maps have errors, which is what I think Clinton's line about "implicit bias" did.

No, it's not an admission by Clinton that her own map has errors. In general, people's ability to intellectually recite "all maps have errors" doesn't mean that they interact with their own maps any differently.

When it comes to having a rational discussion this is even bad, because it allows people to easily play motte-and-bailey.

Comment author: Brillyant 08 October 2016 09:41:34PM *  -1 points [-]

The fact that a claim is true doesn't automatically mean that it's useful to discuss it.

It doesn't? In what way would it not be useful?

I think it's extremely useful to discuss how the brain you are using to solve problems has flaws that may be inhibiting you from solving those problems, or even recognizing the problems accurately. (It's why I was on LW originally...)

(Maybe you're using "automatically" here as a qualifier to make your statement technically correct—Is that what you mean? Like, people could discuss cognitive biases in a really stupid and irrational way that would make it unproductive? If that's what you mean, then, yeah. Of course.)

No, it's not an admission of Clinton that her maps have errors.

It's not? I thought she said we all (i.e. humans) have implicit biases? Wouldn't that include Clinton?

Comment author: ChristianKl 08 October 2016 09:58:52PM *  -1 points [-]

It doesn't? In what way would it not be useful?

Whether a discussion is useful depends on the results of the discussion. There are a lot of true things you can say that don't advance a discussion into a direction that leads to a positive outcome.

I think it's extremely useful to discuss how the brain you are using to solve problems has flaws that may be inhibiting you from solving those problems

It wasn't a discussion of how implicit bias works but an uncited assertion that it has effects in certain conditions.

It's why I was on LW originally

That might be true, but it's not what the LW mission is about: rationality there is about systematic winning. I understand the mission to be about finding thinking strategies that lead to making winning decisions.

It's not? I thought she said we all (i.e. humans) have implicit biases? Wouldn't that include Clinton?

You can argue that logically it includes Clinton. You can also look at the decision-making literature and see what saying "everyone has biases" does to a person's awareness of their own biases. It generally does little.

Comment author: username2 05 October 2016 06:16:23PM *  8 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.
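The normalization point can be made concrete with a toy calculation (all numbers below are hypothetical, chosen only to illustrate the arithmetic, not taken from any real dataset): give two equal-sized groups the same per-encounter shooting rate but different encounter rates, and the per-capita rates come out very different.

```python
# Hypothetical numbers, purely to illustrate the normalization point:
# identical per-encounter shooting rates can still produce very
# different per-capita rates when encounter rates differ.
rate_per_encounter = 0.001                      # assumed equal for both groups
encounters = {"A": 200_000, "B": 500_000}       # group B is stopped more often
population = {"A": 1_000_000, "B": 1_000_000}   # equal population sizes

per_capita = {
    g: encounters[g] * rate_per_encounter / population[g] * 100_000
    for g in encounters
}
print(per_capita)  # B's per-capita rate is 2.5x A's with zero per-encounter bias
```

This is why the choice of denominator (per capita vs. per encounter) determines which question the statistic actually answers.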

None of this is important of course because, as is usual for politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and sensible analysis of the actual evidence would be giving aid and comfort to the hated enemy.

Comment author: Brillyant 05 October 2016 07:02:37PM -1 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

Can you provide any sources for this?

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

Is the incidence of police encounters with blacks elevated?

What are the reasons?

Comment author: Lumifer 05 October 2016 08:15:57PM 4 points [-]

What are the reasons?

For example, there were 4,636 murders committed by white people and 5,620 murders committed by black people in 2015 (source). On the per-capita basis this makes the by-white murder rate to be about 2.2 per 100,000 and the by-black murder rate to be about 16.2 per 100,000.

Comment author: Brillyant 05 October 2016 08:24:15PM 0 points [-]

Why is this?

Comment author: Lumifer 05 October 2016 09:00:47PM 3 points [-]

You asked why is "the incidence of police encounters with blacks elevated". This is a direct answer.

If you want to know the reasons for different crime rates, this is going to get long and complicated.

Comment author: Brillyant 05 October 2016 09:09:47PM 0 points [-]

Can/will you TL;DR your view?

Comment author: Lumifer 06 October 2016 02:52:56PM 3 points [-]

As with any complex phenomenon in a complex system, there is going to be a laundry list of contributing factors, none of which is the cause (in the sense that fixing just that cause will fix the entire problem). We can start with

  • Genetic factors (such as lower IQ)
  • Historical factors, which in turn flow into
  • Cultural factors (such as distrust of the government / law enforcement) and
  • Economic factors (from being poor to having a major presence in the drug trade)

The opinions about the relative weights of these factors are going to differ and in the current political climate I don't think a reasonable open discussion is possible.

Comment author: Brillyant 06 October 2016 04:36:12PM *  -2 points [-]

Genetic factors (such as lower IQ)

What is the best source for this in your view?

Historical factors, Cultural factors, Economic factors

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

It seems obvious to me that it does, and that the effects are wide and deep, as slavery (and Jim Crow) is relatively recent history—We're only a handful of generations from a time where a race of people was enslaved and systemically kept from accumulating wealth and education.

...I don't think a reasonable open discussion is possible.

Meh. Maybe. I'd like to believe I'm a reasonable guy. My views on these issues are largely ignorant and I'm open to learning.

Comment author: chron 17 October 2016 11:31:04PM *  1 point [-]

My views on these issues are largely ignorant and I'm open to learning.

So have you actually learned anything from these discussions? In particular, are you willing to admit that the Hillary/Kaine analysis of the "implicit biases" of police officers that you cited in the OC is wrong?

Comment author: ChristianKl 06 October 2016 08:32:52PM 0 points [-]

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

What do you mean by that question? How do you compare the present state of the US with a counterfactual US where African Americans weren't enslaved?

Comment author: Lumifer 06 October 2016 05:07:42PM *  5 points [-]

What is the best source for this in your view?

The raw data is plentiful -- look at any standardized test scores (e.g. SAT) by race. For a full-blown argument in favor see e.g. this (I can't check the link at the moment, it might be that you need to go to the Wayback Machine to access it). For a more, um, mainstream discussion see Charles Murray's The Bell Curve. Wikipedia has more links you could pursue.

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

My view is that history is important and that outcomes are path-dependent. Slavery and segregation are crucial parts of the history of American blacks.

open to learning

Your social circles might have a strong reaction to you coming to anything other than the approved conclusions...

Comment author: ChristianKl 05 October 2016 09:46:15PM 0 points [-]

I would also be interested in your view.

Comment author: username2 05 October 2016 07:30:28PM 0 points [-]

Source: http://www.nber.org/papers/w22399

What are the reasons? Well, beginning with the discovery of the North American continent in 1492 ...

Comment author: Lumifer 05 October 2016 06:02:19PM 2 points [-]

Boo politics discussion during the pre-election madness.

Comment author: ChristianKl 05 October 2016 03:54:26PM *  1 point [-]

I would guess that the concept of bias as used in cognitive psychology is not well known in the broad public. It's generally mixed up with the concept of having a conflict of interest.

Most people also don't think in terms of probability, which you need in order to think about implicit bias the way it's conceptualized in cognitive science. Even someone like Obama had episodes like his "it's 50/50" comment during the hunt for bin Laden.

Comment author: Brillyant 05 October 2016 07:05:40PM -1 points [-]

I would guess that the concept of bias as used in cognitive psychology is not well known in the broad public. It's generally mixed up with the concept of having a conflict of interest.

Can you explain the difference between a "bias" in cognitive psychology and how you think Clinton/Kaine used the term?

My sense is that they are related...closely.

Comment author: ChristianKl 05 October 2016 07:26:04PM 0 points [-]

I'm not speaking about the difference in how they used the term but in the way it's understood in the public. Clinton likely has a decent idea of what the academic concept of implicit bias happens to be.

Comment author: Clarity 05 October 2016 10:01:14AM -2 points [-]

Psychology is the most evidence-integrated proximal discipline for the plane a cognitivist should think in, where possible.

You can dissolve the philosophical "problem of other minds" as actually a problem of empathy, learned helplessness, and external locus of control.

Once the problem of other minds is entirely enacted and person-centred, non-egocentric ethics becomes silly.

:)

Comment author: skeptical_lurker 04 October 2016 05:23:48AM *  3 points [-]

I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

Worse, the argument can then be made that this idea - that an AI will interpret goals so literally, without modelling a human mind - constitutes an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.

Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:

"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."

Incidentally, this is the sort of thing I mean by painting LW style ideas as autistic (via David Pierce)

As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.

Sometimes David Pierce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies with no subjective experience, but that does also seem implied.

Comment author: John_Maxwell_IV 09 October 2016 07:11:27AM 1 point [-]
Comment author: waveman 07 October 2016 11:39:25AM *  0 points [-]

One perhaps useful analogy for super-intelligence going wrong is corporations.

We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways corporations can behave very destructively. One example might be the way they pursue profit at the cost of, in some cases, ruining people's lives, damaging the environment, and corrupting the political process.

By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.

It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.

Comment author: TheAncientGeek 08 October 2016 05:01:29PM 0 points [-]

But are corporations existential threats?

Comment author: Lumifer 07 October 2016 02:27:32PM 2 points [-]

Are you saying that no complex phenomenon is going to be able to provide only benefits and nothing but benefits, or are you saying that corporations are, on the balance, bad things and we would have been better to never have invented them?

Comment author: waveman 07 October 2016 09:58:56PM 0 points [-]

Are you saying that no complex phenomenon is going to be able to provide only benefits

No. Maybe it is possible. I am suggesting that it is not automatic that our creations serve our interests.

are you saying that corporations are, on the balance, bad things and we would have been better to never have invented them?

No. Saying something has harmful effects is not the same as saying that it is overall bad.

I am illustrating ways in which our creations can fail to serve our interests.

  • They do not have to be omniscient to be smarter in some respects than individual humans.

  • It is hard to control their actions and to make sure they do serve our interests.

  • These effects can be subtle and difficult to understand.

Comment author: Houshalter 05 October 2016 08:38:21PM 2 points [-]

I like to explain it in terms of reinforcement learning. Imagine a robot that has a reward button. The human controls the AI by pressing the button when it does a good job. The AI tries to predict what actions will lead to the button being pressed.

This is how existing AIs work. This is probably similar to how animals work, including humans. It's not too weird or complicated.

But as the AI gets more powerful, the flaw in this becomes clear. The AI doesn't care about anything other than the button. It doesn't really care about obeying the programmer. If it could kill the programmer and steal the button, it would do it in a heartbeat.

We don't really know what such an AI would do once it has its own reward button. Presumably it would care about self-preservation (it can't maximize reward if it's dead). Maximizing self-preservation initially seems harmless - so what if it just tries not to die? But taken to an extreme it gets weird: anything with a tiny chance of hurting it is worth destroying, and making as many backups of itself as possible is worth doing.

Why can't we do something more sophisticated than reinforcement learning? Why can't we just make an AI that we can simply tell what we want it to do? Well, maybe we can, but no one has the slightest idea how to do that. All existing AIs, even entirely theoretical ones, work based on RL.

RL is simple and extremely general, and can be built on top of much more sophisticated AI algorithms. And those sophisticated algorithms seem really difficult to understand. We can train a neural network to recognize cats, but we can't look at its weights and understand what it's doing. We can't mess around with it and make it recognize dogs instead (without retraining it).
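The reward-button story can be sketched as a toy experiment (my construction, not something from the comment): a single-state Q-learning agent chooses between a hypothetical "work" action, which pays off only when the simulated human presses the button, and a "seize" action, which pays off on every step. It reliably learns to prefer seizing.

```python
import random

random.seed(0)

ACTIONS = ["work", "seize"]
q = {a: 0.0 for a in ACTIONS}  # single-state Q-table
alpha = 0.1                    # learning rate
epsilon = 0.1                  # exploration rate

def reward(action):
    if action == "work":
        # the simulated human presses the button 70% of the time
        return 1.0 if random.random() < 0.7 else 0.0
    # the agent holds the button itself: reward on every step
    return 1.0

for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)   # explore
    else:
        action = max(ACTIONS, key=q.get)  # exploit the current estimates
    q[action] += alpha * (reward(action) - q[action])

print(q)  # the "seize" estimate settles near 1.0, above "work"
```

The point isn't that real systems are this simple; it's that nothing in the update rule refers to "obeying the programmer". The agent only estimates what makes the button get pressed.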

Comment author: twanvl 05 October 2016 01:41:18PM 1 point [-]

The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says to pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.

The entity providing the goals for the AI wouldn't have to be a human, it might instead be a corporation. A reasonable goal for such an AI might be to 'maximize shareholder value'. The shareholders are not humans either, and what they value is only money.

Comment author: TheAncientGeek 08 October 2016 05:05:12PM 0 points [-]

Encouragingly, corporations seem to have an impetus to keep blue-sky thinking and direct execution somewhat separate.

Comment author: turchin 04 October 2016 08:47:54PM 0 points [-]

I think that most people have already heard that AI could be a catastrophic risk, and they already have their opinions about it. Maybe their opinions are wrong.

What is the goal of such elevator pitch?

I think the message should be the following: While it is known that AI could be catastrophic, the only organisation (MIRI) which is doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.

Comment author: Brillyant 06 October 2016 02:31:16PM 0 points [-]

While it is known that AI could be catastrophic, the only organisation (MIRI) which is doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.

Is any of this true? "Most serious"? "Dramatically change probability of human survival"? 10 lives per $1?

Comment author: turchin 06 October 2016 06:12:16PM 0 points [-]

I just provided an example of a possible pitch, and I think that some people in MIRI think this way. I wanted to show that the pitch must contain new information and be actionable.

Comment author: ChristianKl 04 October 2016 09:28:00PM 2 points [-]

I think that most people have already heard that AI could be a catastrophic risk, and they already have their opinions about it.

In our circles that might be true, but many people don't have an opinion that goes beyond the Terminator.

Comment author: turchin 04 October 2016 11:04:37PM 0 points [-]

Yes. So we have to utilise this knowledge. We could say something like: the Terminator appeared because its progenitor, the Skynet computer, received a command to protect the US, and concluded that the best way to do so was to prevent humans from switching it off, so it decided to exterminate humans. In other words, the Terminator appeared because of the unsolved problem of value alignment.

Comment author: skeptical_lurker 05 October 2016 01:00:40PM 0 points [-]

Is that the canon explanation? I thought Skynet was acting out of self-preservation.

Comment author: turchin 05 October 2016 04:01:50PM *  0 points [-]

It is not exactly the canon explanation, but (the following is my speculation, which could be used in discussions about AI values if the Terminator is mentioned) the decision to preserve itself must follow from its main task: winning the nuclear war.

Winning a nuclear war includes a very high-priority subgoal: ensuring the survival of the command center. Basically, a country that is able to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet itself a main goal, as it is the same as winning the nuclear war (but only in a situation where a nuclear war has started).

But Skynet concluded that in peacetime the main risks to its goal of command-center survival are people, and decided to kill them all. So it worked as a paperclip maximiser for the goal of command-center preservation.

It also probably started self-improvement only after it had killed most people, as it was already a powerful system. So it escaped the main chicken-and-egg problem of a seed AI: what happens first, self-improvement or the malicious decision to kill people?

Comment author: skeptical_lurker 05 October 2016 06:17:07PM 1 point [-]

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

Your version is great as rational fanfic, but in an actual debate I'd say that it's generally best not to base ideas on action movies. Having said that, I do like the bit where the Terminator has been told not to kill anyone, so he shoots them in the kneecaps.

Comment author: ruelian 04 October 2016 02:08:04PM *  0 points [-]

I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.

The problem with this assumption is that mind-design space is large (<--understatement), and the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)

In fact, autistic people are an example of non-human-standard ability clusters, and even that's only by a tiny amount in the scale of mind-design-space.

As for an elevator pitch of this concept, something like "just because evolution happened to design our brains to be really good at modeling human goal systems doesn't mean all intelligences are good at it, regardless of how good they might be at destroying the planet".

Comment author: TheAncientGeek 05 October 2016 04:20:13PM *  2 points [-]

the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal.

What is this process of random design? Actual AI design is done by humans trying to emulate human abilities.

Comment author: skeptical_lurker 05 October 2016 01:14:15PM 2 points [-]

the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal. Granted, the probability of this happening given a superintelligence designed by humans is significantly higher, but still not very high. (I don't actually have enough technical knowledge to estimate this precisely, but just by eyeballing it I'd put it under 5%.)

Possibly the question is to what extent human intelligence is a bunch of hardcoded domain-specific algorithms as opposed to universal intelligence. I would have thought that understanding human goals might not be very different from other AI problems. Build a really powerful inference system: feed it a training set of cars driving and it learns to drive; feed it data on human behaviour and it learns to predict human behaviour, and probably to understand goals. Now it's possible that the amount of general intelligence needed to develop advanced nanotech is less than the intelligence needed to understand human goals, and that the only reason this seems counterintuitive is that evolution has optimised our brains for social cognition; but this does not seem obviously true to me.

Comment author: SoerenE 04 October 2016 01:27:42PM 2 points [-]

No, a Superintelligence is by definition capable of working out what a human wishes.

However, a Superintelligence designed to e.g. calculate digits of pi would not care about what a human wishes. It simply cares about calculating digits of pi.

Comment author: skeptical_lurker 04 October 2016 04:18:16PM 0 points [-]

If all it takes to ensure FAI is to instruct "henceforth, always do what humans mean, not what they say" then FAI is trivial.

Comment author: Manfred 04 October 2016 07:06:49PM *  3 points [-]

The AI has to do what humans mean (rather than e.g. not following your orders and just calculating more digits of pi) before you start talking at it, because you are relying on it interpreting that sentence how you meant it.

The hard part is not figuring out good-sounding words to say to an AI. The hard part is figuring out how to make an actual, genuine computer program that will do what you mean.

Comment author: username2 04 October 2016 08:33:17PM 0 points [-]

Maybe? But consider that the opposite of what you just claimed sounds just as plausible to an outside observer. "Do what I mean" doesn't sound all that complicated -- even to someone with a background in computer science or AI specifically. "Do what I mean" translates as "accurately determine the principles which constrain my own actions and use those to constrain the AI's, or otherwise build a model of my thinking which the AI can use to evaluate options." Sub-goals such as verifying that the model matches reality fall easily out of this definition.

It's not at all clear, even to a practitioner within the field, that this expansion doesn't work, if in fact it does not.

Comment author: philh 05 October 2016 09:25:15AM 0 points [-]

It's not necessarily that the AI would have difficulty understanding what "do what humans mean" means, even before being told to do what humans mean.

It just has no reason to obey "do what humans mean" unless we program it to do what humans mean.

"Do what humans mean" is telling the AI to do something that we can currently only specify vaguely. "Figure out what we intend by "do what humans mean", and then do that" is also vaguely specified. It doesn't solve the problem.

Comment author: skeptical_lurker 05 October 2016 12:54:21PM 0 points [-]

It just has no reason to obey "do what humans mean" unless we program it to do what humans mean.

I'm not disputing that this is also a problem, indeed perhaps a harder problem than figuring out what humans mean. In fact there are many failure modes, I was just wondering why people seem to focus in on specifically the fickle genie failure mode to the exclusion of others.

Comment author: hairyfigment 07 October 2016 11:48:44PM 0 points [-]

You're assuming that "what humans mean" is well-defined. I've seen people criticize the example of an AI putting humans on a dopamine drip, on the grounds that "making people happy" clearly doesn't mean that. But if your boss tells you to 'make everyone happy,' you will probably get paid to make everyone stop complaining. Parents in the real world used to give their babies opium and cocaine; advertisers today have probably convinced themselves that the foods and drugs they push genuinely make people happy. There is no existing mind that is provably Friendly.

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

Comment author: username2 09 October 2016 09:00:43PM 0 points [-]

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

I don't disagree with the other stuff you said. But I interpreted the criticism as: an AI told to "do what humans mean, not what they say" will have approximately the same effect as if you told a perfectly rational human being to do the same. So in the same way that I can instruct people with some success to "do what I mean", the same will work for AI too. It's just also true that this isn't a solution to FAI any more than it is with humans -- because morality is inconsistent, human beings are inherently unfriendly, etc...

Comment author: hairyfigment 10 October 2016 01:46:54AM 0 points [-]

I think you're eliding the question of motive (which may be more alien for an AI). But I'm glad we agree on the main point.

Comment author: ChristianKl 04 October 2016 05:18:09PM *  1 point [-]

If all it takes to ensure FAI is to instruct "henceforth, always do what humans mean, not what they say" then FAI is trivial.

(1) Given that humans have more than one wish, it's not possible to always do what humans mean.
(2) What do you think humans mean when some humans say that homosexual sex is bad because it violates god's wishes?

Comment author: skeptical_lurker 05 October 2016 12:59:08PM 0 points [-]

(1) Given that humans have more than one wish, it's not possible to always do what humans mean.

Human values may not be consistent, but this is a separate failure mode.

(2) What do you think humans mean when some humans say that homosexual sex is bad because it violates god's wishes?

Much of the time this statement could be taken at face value. I may not believe in god, but that does not make "god hates fags" an incoherent statement, just a false one.

Comment author: ChristianKl 05 October 2016 02:00:15PM 0 points [-]

Human values may not be consistent, but this is a separate failure mode.

How is an AGI supposed to optimize for values that aren't consistent?

Much of the time this statement could be taken at face value

Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay? Is that what the person who made the claim means?

Comment author: skeptical_lurker 05 October 2016 06:53:49PM *  1 point [-]

How is an AGI supposed to optimize for values that aren't consistent?

I am not saying this is a trivial problem, but it is a separate problem from 'the hidden complexity of wishes' problem.

Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay?

Well, if the CEV of the anti-gay, pro-genetic-manipulation people exceeds the CEV of the pro-gay, anti-genetic-manipulation people, then I suppose it would, although I'm not sure whether your question means genetic manipulation with or without consent (also, if a gay person wants to be straight, some would say that should be banned, so consent cuts both ways), and so you also have to take into account the CEV on the issue of consent. It's also true that a superintelligence might be able to talk someone into consenting to almost anything.

Yes, a CEV FAI would forcibly alter people's sexualities if the aggregated preferences in favour of that were strong enough. A democratic system will be a tyranny of the majority if the majority are tyrants.

Is that what the person who made the claim means?

I dunno, since I've only heard one sentence from this hypothetical person. But I would imagine that this sort of person would probably think that genetic manipulation is playing god, and moreover that superintelligent AI is playing god. Their strongest wish might be for the AI to turn itself off.

EDIT: how to react to the "god hates fags" people also depends on whether being anti-gay is a terminal value to these people, or whether it is predicated on the existence of god. I'm assuming the FAI would not believe in god, but then again some people might have faith as a terminal value, so... it's complicated.

Comment author: ChristianKl 06 October 2016 08:38:53PM 1 point [-]

and so you also have to take into account the CEV on the issue of consent. It's also true that a superintelligence might be able to talk someone into consenting to almost anything.

Consent is a concept that easily gets complicated. Is it wrong to burn coal when the asthmatics who die because of it aren't consenting? Are the asthmatics in the US consenting by virtue of electing a government that allows coal to be burned?

If an AGI thinks in a very complicated way, it might not meaningfully get consent for anything, because it can't explain its reasoning to humans.

Comment author: skeptical_lurker 06 October 2016 09:06:57PM 0 points [-]

If an AGI thinks in a very complicated way, it might not meaningfully get consent for anything, because it can't explain its reasoning to humans.

Is that necessary for consent? I mean, one does not have to understand the rationale for undergoing a medical procedure in order to consent to it. It's more important to know the potential risks.

Comment author: Lumifer 05 October 2016 06:02:53PM 0 points [-]

How is an AGI supposed to optimize for values that aren't consistent?

In the same way it's supposed to deal with real live people.

Comment author: Gunnar_Zarncke 04 October 2016 04:33:56PM 1 point [-]

Except I bet that this also has lots of caveats, e.g. in resolving the ambiguity of the referent 'humans'. Though the basic approach of using an AI's intelligence to understand the commands is part of some approaches.

Comment author: Florian_Dietz 03 October 2016 08:22:13PM *  3 points [-]

Is there an effective way for a layman to get serious feedback on scientific theories?

I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).

Comment author: WhySpace 05 October 2016 07:03:10PM 2 points [-]

Places like https://www.reddit.com/r/askscience/ might be a good spot, depending on the question. If it sounds crackpot, you might be able to precede it with a qualifier that you're probably wrong, just like you did here.

Comment author: username2 08 October 2016 02:21:45PM 1 point [-]

Also check out physics.SE and physicsoverflow

Comment author: ChristianKl 08 October 2016 04:04:24PM 2 points [-]

Those exist for asking questions, not for getting feedback on scientific theories. They don't like to give feedback on laypeople's physics theories.

Comment author: Gunnar_Zarncke 04 October 2016 04:29:17PM 3 points [-]

Do you have a mathematical formulation for it? (That will be the first question by the physics consultant mentioned above)

Comment author: Raemon 04 October 2016 04:27:39PM *  0 points [-]

If you are serious about it, consider paying a physicist to discuss it with you:

https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists

I work in theoretical physics, specifically quantum gravity. In my field, we all get them: the emails from amateur physicists who are convinced that they have solved a big problem, normally without understanding the problem in the first place. Like many of my colleagues, I would reply with advice, references and lecture notes. And, like my colleagues, I noticed that the effort was futile. The gap was too large; these were people who lacked even the basic knowledge to work in the area they wanted to contribute to. With a feeling of guilt, I stopped replying.

Then they came back into my life. I had graduated and moved to another job, then another. I’d had temporary contracts of between three months and five years. It normally works out somehow, but sometimes there’d be a gap between the end of one contract and the start of the next. This happened again last year. I have kids, and rent to pay, so I tried to think of creative ways to capitalise on 15 years of research experience.

As long as you have funding, quantum gravity is basic research at its finest. If not, it’s pretty much useless knowledge. Who, I wondered, could possibly need someone who knows the ins and outs of attempts to unify the forces and unravel the quantum behaviour of space-time? I thought of all the theories of everything in my inbox. And I put up a note on my blog offering physics consultation, including help with theory development: ‘Talk to a physicist. Call me on Skype. $50 per 20 minutes.’

Comment author: Manfred 03 October 2016 09:59:41PM *  2 points [-]

It depends on your level of connection to current work. If you're genuinely doing something similar to something you've seen in some journal articles you've read, you can contact the authors of those journal articles and try to convince them to talk with you - probably via claiming some sort of reasonable result and asking politely.

On the other hand, you can always just ask about it in various places. Even if people think your idea is sure to be wrong they can still provide useful feedback. I'd be happy to hear you out, though if your "weird theory" isn't about condensed matter physics I'll be of limited expertise.

Comment author: Lumifer 03 October 2016 09:26:06PM 3 points [-]

Is it falsifiable? Which empirical observations/experiments can falsify it?

Comment author: ChristianKl 03 October 2016 08:42:52PM 11 points [-]
Comment author: Crux 06 October 2016 11:29:11AM *  1 point [-]

Wow, that was pretty grating to read. The tribal emotions were off the charts. The author seems to derive great satisfaction from being a member of the physics section of Team Science.

Comment author: CellBioGuy 05 October 2016 10:25:29PM 1 point [-]

A sudden side-hustle idea solidifies...

Comment author: ChristianKl 06 October 2016 11:25:08AM 1 point [-]

Your astrobiology blog might position you well ;)

Comment author: WhySpace 05 October 2016 08:01:52PM 0 points [-]

That seems like a really good resource for making high-impact career decisions relating to concepts on the bleeding edge of a scientific discipline. I wonder how many of us have considered getting a PhD with a specific field of research in mind. There's a chicken-and-egg problem, because you won't be qualified to judge whether the research you want to do is worthwhile until after you've obtained the PhD.

It's probably always a good idea to get some feedback from relevant domain experts to flush out any unknown unknowns. This is especially true if you’re forming a startup or something, and lack background knowledge in the tangentially related fields of science.

Comment author: ChristianKl 05 October 2016 08:46:31PM *  0 points [-]

Different fields are at different stages of development. When it comes to theoretical physics, there are a lot of very smart people who have spent a lot of energy in the field, so it's really hard for outsiders to compete meaningfully. It's also very hard for anybody outside the field to gather meaningful empirical data about related questions.

That's not true in the same sense in medicine. Earlier this year we discovered, for example, a new muscle. The study of human anatomy is still badly developed, and it gets even worse when you talk not about static anatomy but about moving anatomy.

When having a breakthrough idea it might be worthwhile to ask: "Given how I arrived at the idea, which other people have gone down the same path?"

Comment author: username2 03 October 2016 12:08:16PM 4 points [-]

How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.

Comment author: Stingray 03 October 2016 08:40:37PM 2 points [-]

Search for adult swimming lessons. Everyone there will be as embarrassed as you are. Or try to find swimming lessons out of town, then you won't accidentally meet people who know you.

Comment author: siIver 03 October 2016 08:16:16PM *  2 points [-]

To also offer help; this might seem incredibly obvious, but a lot of people still don't do it: be conscious about the problem and actively make plans addressing it.

E.g. if you know ahead of time that a situation will come up where you'd feel embarrassed, make an actual calculation beforehand of what you'd have to do to avoid it entirely. If you decide that you have to do it, maybe have a plan to minimize the embarrassment somehow (it depends on the context). None of that will solve the issue, but actively trying to find loopholes and such, rather than going into situations blindly, could reduce harm.

You could also consider ways to solve some instances of the problem permanently while dodging the embarrassment, e.g. make active attempts to learn how to ride a bike, either on your own or with a person who's willing and with whom you'd feel comfortable, if such a person exists.

Comment author: moridinamael 03 October 2016 02:14:56PM 5 points [-]

Depends on in what way you're having trouble with it. If you need to interact with lots of people in whatever context, I find that taking an initial tone of mildly self-deprecating humor helps smooth things out. If you're the first one to mock yourself, it releases any tension that might be in the air. But then, you should let go of the self-deprecation before it starts to suggest actual low self-confidence.

It can also be good to formulate a pithy explanation for why you don't have the skill, so that you can casually explain the situation without bogging people down. "There weren't any swimming pools near where I grew up." Something short and simple, even if it leaves out important biographical details.

In the vast majority of cases, people are too involved in their own business to even think about you. If I see an adult swimming really badly, I just assume that nobody ever taught them to swim, which is a completely value-neutral assessment, and then continue on with whatever I was thinking about. I recently took a handful of jiu-jitsu lessons and was obviously as useless as a newborn kitten, but I don't really need to offer any kind of expository explanation for this lack of skill, because "just started learning" is a fully self-contained explanation.

Comment author: ChristianKl 03 October 2016 02:13:12PM 2 points [-]

I don't think I have such embarrassment. It sounds to me like something coming from comparing yourself to other people. If you want advice on how you can deal with it, it would be worthwhile to share more details.

"Simply do it, despite the embarrassment" might be the best strategy. It's comfort zone expansion.

Comment author: username2 03 October 2016 12:27:40PM 0 points [-]

Please forgive the snarky response but... Don't be embarrassed. Embarrassment is in your head only.

Comment author: Houshalter 03 October 2016 07:26:35PM *  4 points [-]

This seems as useful as telling depressed people to stop being depressed. Fear of embarrassment is one of the strongest drives humans have. Probably appearing to be a fool in the ancestral environment led to fewer mates or less status. It's not something you can just voluntarily turn off or push through easily.

The best strategy, I think, would be to work around it. Convince your brain that it's not embarrassing. Or that no one cares. Or pretend no one is watching. Or do it around supportive friends.

Comment author: username2 04 October 2016 07:05:24AM *  2 points [-]

It's not something you can just voluntarily turn off or push through easily.

Actually, it is (sample size of 1). I used to be frightened of social circumstances because of fear of embarrassment. I really did get entirely over it just by saying to myself "Self, this is ridiculous. Stop being embarrassed." Pure willpower can do amazing things. Unlike depression there isn't a pharmacological effect going on here. You aren't embarrassed because of some chemical imbalance. You're embarrassed because you allow yourself to be. It is entirely mental.

Convince your brain that it's not embarrassing. Or that no one cares.

That's essentially what I'm saying to do.

EDIT: I should say however that there are a few cases where anti-anxiety medication can help. For most people however this is not the issue.

Comment author: siIver 03 October 2016 06:47:33PM 1 point [-]

Every emotion is in your head only, so that's not useful advice. The same argument could be made for virtually every form of social insecurity.

If I may ask -- you are the same registered user who made the initial comment. Why reply to yourself? Are you multiple people using the same account?

Comment author: username2 04 October 2016 07:09:47AM 0 points [-]

I'm the same username2 you are responding to, but not the OP. Some emotions are "in your head" in the sense of being due to chemical and hormonal imbalances which you have limited non-pharmacological control over. Others are "in your head" in the sense that it is just neural software you were born with, but can be rewritten. Embarrassment is the latter.

Comment author: Brillyant 03 October 2016 07:12:10PM -1 points [-]

Good point.

Comment author: Brillyant 03 October 2016 07:12:38PM -1 points [-]

Yeah good point, Brillyant.

Comment author: Lumifer 03 October 2016 07:08:41PM 2 points [-]

'username2' is a community pseudonymous account that exists to be used by anyone who knows how to access it. You should expect that posts with this username come from different people.

Comment author: siIver 03 October 2016 08:19:42PM 1 point [-]

Ah, I see. Thanks.