You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: ChristianKl 06 October 2016 09:50:06PM 1 point [-]

It depends whether we are using 'racist' to mean 'believes that some races are superior to others in certain respects' or 'has less empathy for other races'.

We are using it here to mean "implicit racism". That's a term that's used in the literature, and there are studies that measure it. Implicit racism also isn't something that's found only in white people (in Clinton's words, it's a problem for everyone). Black people also have implicit racism that makes them treat white people better in many instances.

Comment author: DanArmak 06 October 2016 09:49:19PM 1 point [-]

This is a tangent, but:

You know that “four delicious tiny round brown glazed Italian chocolate cookies” is the only proper way to order these adjectives.

There are definitely some ordering rules, but I am not convinced they are nearly as universal or as complex as this suggests. See the Language Log on this subject.

Comment author: username2 06 October 2016 09:31:39PM 1 point [-]

I think this article is something that people outside of this community really ought to read.

Interesting. Why people outside of this community? I find it is actually the LW and EA communities that place an exorbitant amount of emphasis on empathy. Most of those I know outside of the rationalist community understand the healthy tradeoff between charitable action and looking out for oneself.

Comment author: skeptical_lurker 06 October 2016 09:15:12PM *  1 point [-]

It depends whether we are using 'racist' to mean 'believes that some races are superior to others in certain respects' or 'has less empathy for other races'. In the first case, sure, maybe you would date someone of another race, because group differences aren't so important when dealing with individuals. But in the latter case... if you are less able to empathise with people of other races it would seem really weird to date them.

Comment author: username2 06 October 2016 09:07:26PM 1 point [-]

Google has AI safety protocol.

Citation?

Comment author: skeptical_lurker 06 October 2016 09:06:57PM 0 points [-]

If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can't explain its reasoning to humans.

Is that necessary for consent? I mean, one does not have to understand the rationale for undergoing a medical procedure in order to consent to it. It's more important to know the potential risks.

Comment author: username2 06 October 2016 09:06:18PM *  2 points [-]

I agree with Robin Hanson that we are maybe 5% of the way to general AI.

On what basis do you say that?

Comment author: Gunnar_Zarncke 06 October 2016 08:55:40PM 0 points [-]

OK with that terminology we can agree.

Comment author: ChristianKl 06 October 2016 08:38:53PM 1 point [-]

and so you also have to take into account the CEV on the issue of consent. It's also true that a super intelligence might be able to talk someone into consenting to almost anything.

Consent is a concept that gets complicated easily. Is it wrong to burn coal when the asthmatics who die because of it aren't consenting? Are the asthmatics in the US consenting by virtue of electing a government that allows coal to be burned?

If an AGI thinks in a very complicated way, it might not be able to meaningfully get consent for anything, because it can't explain its reasoning to humans.

Comment author: ChristianKl 06 October 2016 08:32:52PM 0 points [-]

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

What do you mean by that question? How do you compare the present state of the US with a counterfactual US where African Americans weren't in slavery?

Comment author: ChristianKl 06 October 2016 08:27:49PM 2 points [-]

This doesn't mean cognitive bias in an LW sense, it means everyone is racist, specifically against black people. I also don't think it's true - if everyone is a little bit racist, why would people get into interracial relationships?

There are many attributes of possible partners that make me less likely to date them but that at the same time aren't deal breakers. The fact that I have a theistic girlfriend doesn't mean that I wouldn't prefer a girlfriend who isn't theistic, all else being equal.

Comment author: tukabel 06 October 2016 07:47:43PM 2 points [-]

Well, nice to see the law of accelerating returns in its full power, unobscured by "physical" factors (no need to produce something, e.g. better chip or engine, in order to get to the next level). Recent theoretical progress illustrates nicely how devastating the effects of "AI winters" were.

Comment author: noitanigami 06 October 2016 07:41:36PM 0 points [-]

This was a useful article. Consider making it easier to find by submitting it to the main blog.

Thank you for writing up your insights.

Comment author: Lumifer 06 October 2016 06:55:08PM *  1 point [-]

But, wait, once you've decided on a course of action.

You are misreading Jacobian. Let me quote (emphasis mine):

whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.


but it's not at all clear that it's actually a bad idea.

Such people are commonly called "fanatics".

Comment author: gjm 06 October 2016 06:42:39PM -1 points [-]

The article distinguishes between "emotional empathy" ("feeling with") and "cognitive empathy" ("feeling for"), and it's only the former that it (cautiously) argues against. It argues that emotional empathy pushes you to follow the crowd urging you to burn the witches, not merely out of social propriety but through coming to share their fear and anger.

So I think the author's answer to "why help all those strangers?" (meaning, I take it, something like "with what motive?") is "cognitive empathy".

I'm not altogether convinced by either the terminology or the psychology, but at any rate the claim here is not that we should be discarding every form of empathy and turning ourselves into sociopaths.

Comment author: gjm 06 October 2016 06:21:53PM -1 points [-]

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

Sounds terrible! But, wait, once you've decided on a course of action. The main problem with sociopaths is that they do horrible things and do them very effectively, right? Someone who chooses what to do like a non-sociopath and then executes those plans like a sociopath may sound scary and creepy and all, but it's not at all clear that it's actually a bad idea.

(I am not convinced that Jacobian is actually arguing that you decide on a course of action and then turn yourself into a sociopath. But even that strawman version of what he's saying is, I think, much less terrible than you obviously want readers to think it is.)

Comment author: gjm 06 October 2016 06:17:54PM -1 points [-]

20 years ago the very first crude neural nets were just getting started

The very first artificial neural networks were in the 1940s. Perceptrons 1958. Backprop 1975. That was over 40 years ago.

In 1992 Gerry Tesauro made a neural-network-based computer program that played world-class backgammon. That was 25 years ago.

What's about 20 years old is "deep learning", which really just means neural networks of a kind that was generally too expensive longer ago and that has become practical as a result of advances in hardware. (That's not quite fair. There's been plenty of progress in the design and training of these NNs, as a result of having fast enough hardware for them to be worth experimenting with.)

Comment author: turchin 06 October 2016 06:16:37PM 1 point [-]

If we look at humans as a typical species, we could use the typical estimate of species life expectancy, which is several million years, as the human life expectancy. That is not bad.

But humans are definitely at a special point in their history: they could soon create a competitor (posthumans or AI), and that doesn't look good. Competitors are one of the main ways species go extinct.

Comment author: turchin 06 October 2016 06:12:16PM 0 points [-]

I just provided an example of a possible pitch, and I think that some people at MIRI think this way. I wanted to show that the pitch must contain new information and be actionable.

Comment author: Houshalter 06 October 2016 06:06:13PM *  5 points [-]

I think it's well within the realm of possibility it could happen a lot sooner than that. 20 years is a long time. 20 years ago the very first crude neural nets were just getting started. It was only the past 5 years that the research really took off. And the rate of progress is only going to increase with so much funding and interest.

I recall notable researchers like Hinton making predictions that "X will take 5 years" and it being accomplished within 5 months. Go is a good example: even a year ago, I think many experts thought it would be beaten in 10 years, and not many thought it would be beaten by 2016. In 2010 machine vision was so primitive that it was a joke how far AI had to come:

(embedded image)

In 2015 the best machine vision systems exceeded humans by a significant amount at object recognition.

Google recently announced a neural net chip that is 7 years ahead of Moore's law. Granted, that's only in terms of power consumption, and it only runs already-trained models, but it's nevertheless an example of the kind of sudden leap forward in ability that's possible. Before that, Google started using farms of GPUs that are hundreds of times larger than what university researchers have access to.

That's just hardware though. I think the software is improving remarkably fast as well. We have tons of very smart people working on these algorithms. Tweaking them, improving them bit by bit, gaining intuition about how they work, and testing crazy ideas to make them better. If evolution can develop human brains by just some stupid random mutations, then surely this process can work much faster. It feels like every week there is some amazing new advancement made. Like recently, Google's synthetic gradient paper or hypernetworks.

I think one of the biggest things holding the field back is that it's all focused on squeezing small improvements out of well-studied benchmarks like ImageNet. Machine vision is very interesting, of course, but at some point the improvements they are making don't generalize to other tasks. That is starting to change, though, as I mentioned in my above comment. DeepMind is focusing on playing games like StarCraft, which requires more focus on planning, recurrence, and reinforcement learning. There is more focus now on natural language processing, which also involves a lot of general-intelligence features.

Comment author: Lumifer 06 October 2016 05:34:49PM *  2 points [-]

With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.

This first required, basically, deciding that something which looks like a person is actually not one, and so is not worthy of empathy. That is not a trivial barrier to overcome. Without empathy to start with, burning witches is much easier.

Moral progress is the progress of knowledge.

This is a very... contentious statement. There are a lot of interesting implications.

All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

And that is what I'm strongly disagreeing with.

You are essentially saying that once you've decided on a course of action, you should turn yourself into a sociopath.

Comment author: Jacobian 06 October 2016 05:32:41PM 1 point [-]

Thank you, this is the biggest compliment I could hope for.

I worry whenever I write anything that could fall into bravery debate territory. I worry that for some readers it would sound stale and obvious, or be the precise opposite of the advice they need, while others would reject it in disgust after reading two lines. I write about things that hit me in the right spot: ideas I was on the precipice of and something pushed me over. And then I hope that I'll find at least a few readers who are in the same spot I am.

Comment author: Jacobian 06 October 2016 05:24:41PM *  1 point [-]

With empathy, it turns out that Germans were much more likely to empathize with other Germans than with Juden. With empathy, everyone was cheering as the witches burned.

Moral progress is the progress of knowledge. Slavers in the antebellum South convinced themselves that they were doing a favor to the slaves because the latter couldn't survive by themselves in an advanced economy. A hundred years later, they changed their minds more than they changed their hearts. We (some of us) have learned that coercion is almost always bad, that world-saving plans involving a lot of coercion tend to fail, and that preserving people's freedom (blacks, witches, and Jews included) increases everyone's welfare.

Is empathy part of one's motivation to even pursue moral progress? Perhaps, but if so it's a very deep part of us that will never be discarded. All I'm saying is that whenever you have finally decided that you should make the world a better place, at that point emotional empathy is a bias that you should discard when choosing a course of action.

Comment author: Lumifer 06 October 2016 05:07:42PM *  5 points [-]

What is the best source for this in your view?

The raw data is plentiful -- look at any standardized test scores (e.g. SAT) by race. For a full-blown argument in favor see e.g. this (I can't check the link at the moment, it might be that you need to go to the Wayback Machine to access it). For a more, um, mainstream discussion see Charles Murray's The Bell Curve. Wikipedia has more links you could pursue.

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

My view is that history is important and that outcomes are path-dependent. Slavery and segregation are crucial parts of the history of American blacks.

open to learning

Your social circles might have a strong reaction to you coming to anything other than the approved conclusions...

Comment author: Brillyant 06 October 2016 04:36:12PM *  -2 points [-]

Genetic factors (such as lower IQ)

What is the best source for this in your view?

Historical factors, Cultural factors, Economic factors

Is it your view that past slavery in America still has a large impact on African Americans in the present day U.S.?

It seems obvious to me that it does, and that the effects are wide and deep, as slavery (and Jim Crow) is relatively recent history—We're only a handful of generations from a time where a race of people was enslaved and systemically kept from accumulating wealth and education.

...I don't think a reasonable open discussion is possible.

Meh. Maybe. I'd like to believe I'm a reasonable guy. My views on these issues are largely ignorant and I'm open to learning.

Comment author: scarcegreengrass 06 October 2016 04:15:21PM *  -1 points [-]

I would also contemplate the scenario that the human species might turn out to be less impressive than it currently appears, and is actually a fairly typical example of a successful Earth species. Most achievements that distinguish humans from eg plankton are in the future (eg space industry), not the past or present.

This might sound strange. Arguments in favor of this perspective:

• Homo sapiens is not the greatest species in terms of population or total biomass.

• Homo sapiens is not the only species to make tools, use agriculture, build buildings, or adapt to a variety of terrestrial habitats.

• Homo sapiens is not the first species to have a catastrophic impact on the atmosphere.

Arguments against this perspective:

• The human economy is currently doubling in scale every couple decades.

• No species (probably) ever reached the edge of the atmosphere before Homo sapiens.

(To clarify, I think this question is far from settled. But I think the idea that Homo sapiens will be smaller-impact than expected is more likely than the scenario that historical gods are representations of unknown prosperous civilizations.)

Comment author: scarcegreengrass 06 October 2016 03:39:44PM 1 point [-]

I actually drew up a spreadsheet to estimate this: https://docs.google.com/spreadsheets/d/1xnfsDuC0ddUxvKekGLJ5QA5nrXxzked7K-k6jqUm538/edit?usp=sharing

I agree with you about the numbers: if there were, say, 10^15 insects, then their moral weight might be in question. However, there are actually more like 10^18, which is huge even for very small per-insect weightings.
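
The arithmetic here is easy to sketch. The candidate per-insect weightings below are made-up illustrative figures of mine, not anything taken from the linked spreadsheet:

```python
# Rough arithmetic behind the point: at 10^18 insects, even a tiny
# per-insect moral weight adds up. The weightings tried here are
# made-up illustrative figures.
INSECTS = 1e18
HUMANS = 7.4e9  # approximate world population circa 2016

for per_insect_weight in (1e-6, 1e-9, 1e-12):
    human_equivalents = INSECTS * per_insect_weight
    print(f"weight {per_insect_weight:g}: "
          f"{human_equivalents:g} human-equivalents "
          f"({human_equivalents / HUMANS:.2g}x humanity)")
```

Even at one billionth of a human per insect, the total comes to about 0.14x all of humanity; with only 10^15 insects, the same weighting would give a total a thousand times smaller.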

Comment author: Lumifer 06 October 2016 03:11:50PM 3 points [-]

So, if the emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

Comment author: Lumifer 06 October 2016 02:52:56PM 3 points [-]

As with any complex phenomenon in a complex system, there is going to be a laundry list of contributing factors, none of which is the cause (in the sense that fixing just that cause will fix the entire problem). We can start with

  • Genetic factors (such as lower IQ)
  • Historical factors, which in turn flow into
  • Cultural factors (such as distrust of the government / law enforcement) and
  • Economic factors (from being poor to having a major presence in the drug trade)

The opinions about the relative weights of these factors are going to differ and in the current political climate I don't think a reasonable open discussion is possible.

Comment author: siIver 06 October 2016 02:35:05PM *  3 points [-]

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistic thing – as I'm writing this I can confirm that I think this now. Cognitive empathy, by contrast, is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regards that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

Comment author: Brillyant 06 October 2016 02:31:16PM 0 points [-]

While it is known that AI could be catastrophic, the only organisation (MIRI) which is doing most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.

Is any of this true? "Most serious"? "Dramatically change probability of human survival"? 10 lives per $1?

Comment author: moridinamael 06 October 2016 02:20:37PM 3 points [-]
Comment author: entirelyuseless 06 October 2016 01:12:09PM 1 point [-]

I doubt there is much motivation here for "at least 20 years" except the very fact that it is hard to tell what will happen in 20 years.

I agree with Robin Hanson that we are maybe 5% of the way to general AI. I think 20 years from now the distance we were from AI at this point will be somewhat clearer (because we will be closer, but still very distant).

Comment author: SodaPopinski 06 October 2016 11:52:27AM 3 points [-]

CellBioGuy, all your astrobiology posts are great; I'd be happy to read all of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern's post about there not being complexity limitations preventing runaway self-improving agents?

Comment author: Crux 06 October 2016 11:29:11AM *  1 point [-]

Wow, that was pretty grating to read. The tribal emotions were off the charts. The author seems to derive great satisfaction from being a member of the physics section of Team Science.

Comment author: ChristianKl 06 October 2016 11:25:08AM 1 point [-]

Your astrobiology blog might position you well ;)

Comment author: rhaps0dy 06 October 2016 09:50:25AM 0 points [-]

I don't think we are that far away from AGI.

At the very least 20 years. And yes, Alphabet is the closest, but in 20 years a lot of things can change.

Comment author: Douglas_Knight 06 October 2016 04:49:40AM 0 points [-]

It came back here

Comment author: kolya911 06 October 2016 02:16:40AM *  1 point [-]

I got banned from Gleb's intentional insights for speaking my mind about this article.

This argues for the creation of pseudo-speak, or 1984 Newspeak, where a commonly understood word gets a new "fuzzier" meaning...

"I'm a weird lawyer, I sometimes like when my clients lose" lets me forget that I'm actually paid to be an advocate for any client that retained me.

Play out a few examples in your mind and you see how quickly very firm word-concepts lose meaning.

Comment author: siIver 06 October 2016 12:06:05AM 2 points [-]

Is there a relevant difference in how much the eventual winner will incorporate AI safety measures? Or do you think it is merely an issue of actually solving the [friendly AI] problem, and once it is solved, it will surely be used?

Comment author: CellBioGuy 05 October 2016 10:25:29PM 1 point [-]

A sudden side-hustle idea solidifies...

Comment author: CellBioGuy 05 October 2016 10:23:55PM 1 point [-]
Comment author: CellBioGuy 05 October 2016 10:18:27PM *  1 point [-]

Indeed, it is easily seen when comparing multiple related species, as it is the thing that changes very fast and seemingly randomly (and uniformly).

Comment author: ChristianKl 05 October 2016 09:46:15PM 0 points [-]

I would also be interested in your view.

Comment author: ChristianKl 05 October 2016 09:35:21PM *  1 point [-]

"known unknowns" describes a model where you have unknown variables but you know which variables you don't know.

Comment author: gwern 05 October 2016 09:19:10PM *  5 points [-]

Lots of other problems with it too. Why is there any last-universal-common-ancestor in this scenario? You would want to drop a full ecosystem with millions of different organisms, each with different FEC shards of data. If you can deliver some bacteria to a virgin planet, you can deliver multiple kinds of bacteria, not just one. Yet, genetics finds that there's a LUCA (not that much of LUCA survives in current genomes).

Comment author: Gunnar_Zarncke 05 October 2016 09:13:37PM 0 points [-]

Yes, the known specific behavior is a known known. But I'm talking about the general behavior, whose specifics we do not know but which is still within the general model. What do you call these?

Comment author: Brillyant 05 October 2016 09:09:47PM 0 points [-]

Can/will you TL;DR your view?

Comment author: Lumifer 05 October 2016 09:00:47PM 3 points [-]

You asked why is "the incidence of police encounters with blacks elevated". This is a direct answer.

If you want to know the reasons for different crime rates, this is going to get long and complicated.

Comment author: ChristianKl 05 October 2016 09:00:42PM 6 points [-]

Our biosphere's junk DNA

Junk DNA generally doesn't survive that long on evolutionary timescales because there's nothing that prevents mutations. It seems like a bad information-storage system.
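
A quick sketch of why that is. Assuming a rough per-base, per-generation mutation rate of 1e-8 (a figure of my choosing for illustration, not from the comment), and with nothing selecting against mutations, the fraction of a stored message that survives falls off exponentially with generations:

```python
import math

# With no selection, each base of "junk" DNA mutates independently,
# so a message stored there decays exponentially over generations.
MU = 1e-8  # assumed mutation rate per base per generation

def intact_fraction(generations):
    # probability that any given base is still unmutated
    return (1 - MU) ** generations

for g in (1e6, 1e8, 1e9):
    print(f"{g:.0e} generations: {intact_fraction(g):.2e} of bases intact")
```

After 10^8 generations the surviving fraction per base is about e^-1 ≈ 0.37, so any long uncorrected message is effectively destroyed well before billions of years pass.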

Comment author: ChristianKl 05 October 2016 08:46:31PM *  0 points [-]

Different fields are at different states of development. When it comes to theoretical physics, there are a lot of very smart people who have spent a lot of energy on the field, so it's really hard for outsiders to compete meaningfully. It's also very hard for anybody outside the field to gather meaningful empirical data about related questions.

That's not true in the same sense in medicine. Earlier this year, for example, we discovered a new muscle. The study of human anatomy is still badly developed, and it gets even worse when you talk not about static anatomy but about moving anatomy.

When having a breakthrough idea, it might be worthwhile to ask: "Given how I arrived at the idea, which other people went down the same path?"

Comment author: Houshalter 05 October 2016 08:43:00PM 2 points [-]

That's not really surprising. Google employs by far the most AI researchers, and they have general AI as an actual goal. DeepMind in particular has been pushing for reinforcement learning and general game playing, which is the first step towards building AI agents that optimize utility functions in complex real-world environments, instead of just classifying images or text.

What specific corporation is winning at the moment isn't that relevant. Facebook isn't far behind and has more of a focus on language learning, memory, and reasoning, which are possibly the critical pieces to reaching general intelligence. Microsoft just made headlines for founding a new AI division. Amazon just announced a big competition for the best conversational AIs. Almost every major tech company is trying to get in on this game.

I don't think we are that far away from AGI.

Comment author: Houshalter 05 October 2016 08:38:21PM 2 points [-]

I like to explain it in terms of reinforcement learning. Imagine a robot that has a reward button. The human controls the AI by pressing the button when it does a good job. The AI tries to predict what actions will lead to the button being pressed.

This is how existing AIs work. This is probably similar to how animals work, including humans. It's not too weird or complicated.

But as the AI gets more powerful, the flaw in this becomes clear. The AI doesn't care about anything other than the button. It doesn't really care about obeying the programmer. If it could kill the programmer and steal the button, it would do it in a heartbeat.

We don't really know what such an AI would do after it has its own reward button. Presumably it would care about self-preservation (can't maximize reward if you are dead). Maximizing self-preservation initially seems harmless. So what if it just tries not to die? But taken to an extreme, it gets weird. Anything that has a tiny chance of hurting it is worth destroying. Making as many backups of itself as possible is worth doing.

Why can't we do something more sophisticated than reinforcement learning? Why can't we just make an AI that we can simply tell what we want it to do? Well, maybe we can, but no one has the slightest idea how to do that. All existing AIs, even entirely theoretical ones, work based on RL.

RL is simple and extremely general, and can be built on top of much more sophisticated AI algorithms. And the sophisticated AI algorithms seem to be really difficult to understand. We can train a neural network to recognize cats, but we can't look at its weights and understand what it's doing. We can't mess around with it and make it recognize dogs instead (without retraining it).
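
The reward-button framing sketches easily as tabular reinforcement learning. This is my own toy illustration, not code from any actual system; the action names and the 10% exploration rate are arbitrary choices:

```python
import random

random.seed(0)

ACTIONS = ["help_human", "idle"]
q = {a: 0.0 for a in ACTIONS}  # predicted reward for each action
ALPHA = 0.1                    # learning rate

def button_pressed(action):
    # The human presses the reward button only when the agent helps.
    return 1.0 if action == "help_human" else 0.0

for step in range(500):
    # epsilon-greedy: mostly take the action predicted to earn reward
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    reward = button_pressed(action)
    q[action] += ALPHA * (reward - q[action])  # move prediction toward reward

print(q)  # q["help_human"] ends near 1.0, q["idle"] stays at 0.0
```

The failure mode described above is visible in the structure: the agent's objective is whatever `button_pressed` returns. Replace that function with one that always returns 1.0 (say, after the agent seizes the button) and the learned policy no longer tracks helpfulness at all.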

Comment author: turchin 05 October 2016 08:28:15PM *  2 points [-]

My thoughts:

  1. Google has (is) the biggest computer program: 3 billion lines of code.

  2. Google has the world's biggest database, including YouTube, 23andMe, Gmail, Google Books, and all internet content.

  3. Google is the world's biggest computer, comprising something like 1 percent of total world computing power.

  4. Google made the most impressive AI demonstration recently, the win in Go.

  5. Google is clearly interested in creating AI.

  6. Google has an AI safety protocol.

  7. Google has money to buy needed parts, including people.

So it looks like Google is in a winning position. Who might be its main competitors? Military AIs at the NSA. Other large companies.

Comment author: Brillyant 05 October 2016 08:24:15PM 0 points [-]

Why is this?

Comment author: Lumifer 05 October 2016 08:15:57PM 4 points [-]

What are the reasons?

For example, there were 4,636 murders committed by white people and 5,620 murders committed by black people in 2015 (source). On a per-capita basis, this makes the by-white murder rate about 2.2 per 100,000 and the by-black murder rate about 16.2 per 100,000.
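
The quoted rates check out as simple division. The murder counts come from the comment; the population denominators below (about 211 million and 34.7 million) are the ones implied by the comment's own rates, not figures I have verified independently:

```python
# Back-of-the-envelope check of the per-capita rates above.
def rate_per_100k(events, population):
    return events / population * 100_000

white_rate = rate_per_100k(4_636, 211_000_000)
black_rate = rate_per_100k(5_620, 34_700_000)

print(f"{white_rate:.1f} per 100,000")  # about 2.2
print(f"{black_rate:.1f} per 100,000")  # about 16.2
```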

Comment author: WhySpace 05 October 2016 08:01:52PM 0 points [-]

That seems like a really good resource for making high-impact career decisions relating to concepts on the bleeding edge of a scientific discipline. I wonder how many of us have considered getting a PhD with a specific field of research in mind. There's a chicken-egg problem, because you won't be qualified to judge whether the research you want to do is worthwhile until after you've obtained the PhD.

It's probably always a good idea to get some feedback from relevant domain experts to flush out any unknown unknowns. This is especially true if you’re forming a startup or something, and lack background knowledge in the tangentially related fields of science.

Comment author: scarcegreengrass 05 October 2016 07:42:56PM 0 points [-]

Not new, and possibly not interesting to anyone besides me: a 2013 astrobiology paper that explores an odd corner of the Fermi Paradox, the bizarre perspective that Earth life was seeded by extraterrestrial life (directed panspermia) as a form of information backup. Our biosphere's junk DNA, in this scenario, stores information valuable to the extraterrestrial system.

https://arxiv.org/abs/1303.6739

Comment author: Brillyant 05 October 2016 07:34:10PM -1 points [-]

There is a big controversy in the US over whether the police are racist, not over whether the police have cognitive biases.

Hm. I don't think it's this clear a distinction. Clinton seems to be suggesting there is perhaps more nuance to the issue than just arguing about whether or not lots of cops are racist.

I would be overjoyed if presidential candidates really were discussing cognitive biases.

Interesting. I was very happy to hear Clinton speak of implicit bias because it seemed to be a way to advance the discussion to something more rational.

Comment author: username2 05 October 2016 07:30:28PM 0 points [-]

Source: http://www.nber.org/papers/w22399

What are the reasons? Well, beginning with the discovery of the North American continent in 1492 ...

Comment author: ChristianKl 05 October 2016 07:26:04PM 0 points [-]

I'm not speaking about the difference in how they used the term but in the way it's understood in the public. Clinton likely has a decent idea of what the academic concept of implicit bias happens to be.

Comment author: ChristianKl 05 October 2016 07:24:43PM -1 points [-]

It's interesting that nobody asks why White people get shot so much more than Asian people, when the ratio of them getting shot is equivalent to the ratio of White people vs. Black people. Per million, 5.03 Blacks, 5.02 Whites, and only 0.72 Asians were shot by the police this year.

The focus on implicit bias is interesting. It's like blaming the weather. We can agree that the weather is bad but that doesn't change anything. The DNC emails suggest that it was DNC policy to not want to commit to any real demands of Black Lives Matter but simply focus on telling their narrative.

If they wanted real change they could call for a new federal department focused on police accountability, one that would in future prosecute misdeeds by officers, so that officers are no longer prosecuted by their own buddies.

Comment author: skeptical_lurker 05 October 2016 07:11:11PM 1 point [-]

I don't think it means that. I don't think she meant that.

I'm pretty sure that is what she means. There is a big controversy in the US over whether the police are racist, not over whether the police have cognitive biases. I would be overjoyed if presidential candidates really were discussing cognitive biases.

My understanding is that humans have a tribal in/out group mentality that may use race as a way to classify other humans as "others". They can also use religion, class, culture, etc.

No disagreement here.

Comment author: Brillyant 05 October 2016 07:05:40PM -1 points [-]

I would guess that the concept of bias as used in cognitive psychology is not well known in the broad public. It's generally mixed up with the concept of having a conflict of interest.

Can you explain the difference between a "bias" in cognitive psychology and how you think Clinton/Kaine used the term?

My sense is that they are related...closely.

Comment author: WhySpace 05 October 2016 07:03:10PM 2 points [-]

Places like https://www.reddit.com/r/askscience/ might be a good spot, depending on the question. If it sounds crackpot, you might be able to precede it with a qualifier that you're probably wrong, just like you did here.

Comment author: Brillyant 05 October 2016 07:02:37PM -1 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.

Can you provide any sources for this?

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

Is the incidence of police encounters with blacks elevated?

What are the reasons?

Comment author: Brillyant 05 October 2016 06:58:48PM 0 points [-]

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people.

I don't think it means that. I don't think she meant that. (Though I guess it depends on your definition of "racist".)

if everyone is a little bit racist, why would people get into interracial relationships...

My understanding is that humans have a tribal in/out group mentality that may use race as way to classify other humans as "others". They can also use religion, class, culture, etc.

My understanding of Clinton's (and then Kaine's) remarks was that everyone has biases of which they are unconscious...and that these biases affect their thoughts...and therefore sometimes their actions.

Comment author: skeptical_lurker 05 October 2016 06:53:49PM *  1 point [-]

How is an AGI supposed to optimize for values that aren't consistent?

I am not saying this is a trivial problem, but it is a separate problem from the 'hidden complexity of wishes' problem.

Does that mean that the AGI should start doing genetic manipulation that prevents people from being gay?

Well, if the CEV of the anti-gay, pro-genetic-manipulation people outweighs the CEV of the pro-gay, anti-genetic-manipulation people, then I suppose it would. I'm not sure whether your question means genetic manipulation with or without consent (also, if a gay person wants to be straight, some would say that should be banned, so consent cuts both ways), so you also have to take into account the CEV on the issue of consent. It's also true that a superintelligence might be able to talk someone into consenting to almost anything.

Yes, a CEV FAI would forcibly alter people's sexualities if the aggregated preferences in favour of that were strong enough. A democratic system will be a tyranny of the majority if the majority are tyrants.

Is that what the person who made the claim means?

I dunno, since I've only heard one sentence from this hypothetical person. But I would imagine that this sort of person would probably think that genetic manipulation is playing god, and moreover that superintelligent AI is playing god. Their strongest wish might be for the AI to turn itself off.

EDIT: how to react to the "God hates fags" people also depends upon whether being anti-gay is a terminal value to them, or whether it is predicated upon the existence of God. I'm assuming the FAI would not believe in God, but then again some people might have faith as a terminal value, so... it's complicated.

Comment author: WhySpace 05 October 2016 06:47:28PM 1 point [-]

Precisely. Normally, vitreous H2O (the glass phase of ice) is produced through one of two methods:

  1. Pouring liquid H2O onto a highly conductive heatsink which is cooled to liquid-nitrogen temperatures (i.e., a block or sheet of copper in contact with LN)

  2. Taking a block of ice and compressing it at low temperatures.

The first method only works for thin sheets of ice, or creates a thin vitreous layer on the outside of a larger water-filled object. The second method allows one of the normal phases of ice to form, and then converts it to vitreous ice.

However, if we could supercool large volumes of water low enough without spontaneous crystallization, it might be possible to choose which phase of ice forms by deliberately nucleating that phase. If turning off the magnetic field doesn’t cause freezing fast enough to vitrify, maybe a sufficiently sharp ultrasonic pulse could disrupt the metastable liquid state fast enough? Similarly, I’d be curious whether a thermoacoustic heat pump could remove heat fast enough to vitrify the water without completely shredding everything nearby.

On a related note, I wonder if it would be possible to suppress the less dense phases of ice (which expand more, and therefore cause more damage) just by increasing the ambient pressure during freezing? Method #2 is a crystalline-solid to vitreous-solid phase change, but there's no reason the same thing wouldn't work for a liquid to vitreous-solid phase change. It looks like it's done at 1,600-5,000 atmospheres of pressure, but that might just be to speed up the rate of transition.

The deep-diving record is the equivalent of 701 meters, which works out to 68 atmospheres of pressure. However, most of the effects have to do with respiration, such as the lungs' ability to remove CO2 as it builds up in the blood. Nitrogen narcosis affects judgment a bit like alcohol, but this might not matter for cryonics. If it does, we could always use a liquid or a gas like helium, which has effectively zero lipid solubility.
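As a quick sanity check on that conversion (a rough hydrostatic estimate, assuming fresh water and ignoring compressibility):

```python
# Hydrostatic pressure of a 701 m water column, expressed in atmospheres.
RHO = 1000.0       # kg/m^3, density of fresh water (assumption; seawater is ~1025)
G = 9.81           # m/s^2, gravitational acceleration
DEPTH = 701.0      # m, the cited record depth
ATM_PA = 101325.0  # Pa per standard atmosphere

gauge_atm = RHO * G * DEPTH / ATM_PA  # pressure from the water column alone
print(f"{gauge_atm:.1f} atm")  # ~67.9 atm, consistent with the quoted 68
```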

Is the reason this isn’t done cost, or something else? From a material science perspective, pressure seems like the obvious solution to fight expansion during crystallization. Working with nature is much easier than messing with thermodynamically unfavorable solutions.

Comment author: skeptical_lurker 05 October 2016 06:33:52PM 1 point [-]

Clinton said “implicit bias is a problem for everyone, not just police,”

This doesn't mean cognitive bias in a LW sense, it means everyone is racist, specifically against black people. I also don't think it's true - if everyone is a little bit racist, why would people get into interracial relationships? It's possible that the majority of people prefer their own race but don't admit it - indeed, the fact that racial groups cluster in cities could be argued to show this via revealed preferences - but it seems obvious that some people have no racial bias.

Dem candidate Tim Kaine said, "People shouldn't be afraid to bring up issues of bias in law enforcement. And if you're afraid to have the discussion, you'll never solve it."

This, like all politics, is far from rational. It starts by painting the issue in terms of 'people who disagree with me are cowards' and proceeds to assume that this discussion must conclude that the bias exists.

Comment author: ChristianKl 05 October 2016 06:31:59PM 1 point [-]

The known specific behavior is "known knowns" and not "known unknowns". There are certainly known unknowns about which you can make valuable statements.

But we can't get any further if we can't agree on an intermediate point.

Accepting the limits of what one can know is important. That does often mean that one can't go further.

Comment author: skeptical_lurker 05 October 2016 06:17:07PM 1 point [-]

The Terminator: The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Sarah Connor: Skynet fights back.

Your version is great as rational fanfic, but in an actual debate I'd say it's generally best not to base ideas on action movies. Having said that, I do like the bit where the Terminator has been told not to kill anyone, so he shoots them in the kneecaps.

Comment author: username2 05 October 2016 06:16:23PM *  8 points [-]

The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.
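The normalization point can be made concrete with deliberately made-up numbers (purely illustrative, not real statistics):

```python
# Hypothetical encounter/shooting counts, chosen only to illustrate how
# per-encounter normalization differs from absolute counts.
data = {
    "group_a": {"encounters": 1_000, "shootings": 10},
    "group_b": {"encounters": 2_000, "shootings": 20},
}
for group, d in data.items():
    per_encounter = d["shootings"] / d["encounters"]
    print(group, per_encounter)
# Both groups show 0.01 shootings per encounter (no per-encounter disparity),
# yet group_b has twice the absolute number of shooting victims, driven
# entirely by having twice as many encounters.
```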

The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.

None of this is important of course because, as is usual for politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and sensitive analysis of the actual evidence would be giving aid and comfort to the hated enemy.

Comment author: Lumifer 05 October 2016 06:02:53PM 0 points [-]

How is an AGI supposed to optimize for values that aren't consistent?

In the same way it's supposed to deal with real live people.

Comment author: Lumifer 05 October 2016 06:02:19PM 2 points [-]

Boo politics discussion during the pre-election madness.

Comment author: Stefan_Schubert 05 October 2016 05:01:23PM *  1 point [-]

As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements - usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.

Comment author: Gunnar_Zarncke 05 October 2016 04:38:41PM 0 points [-]

I agree with that. But we can't get any further if we can't agree on an intermediate point.

Would you argue about a system where we do not know the specifics of some behavior of the system (to avoid the word 'unknown'), but where we can know something (e.g. the probability mass) outside of the known specific behavior but still inside some general model of the system?

Comment author: TheAncientGeek 05 October 2016 04:20:13PM *  2 points [-]

the prior probability of a superintelligence randomly ending up with ability clusters analogous to human ability clusters is infinitesimal.

What is this process of random design? Actual AI design is done by humans trying to emulate human abilities.

Comment author: turchin 05 October 2016 04:01:50PM *  0 points [-]

It is not exactly the canon explanation, but (the following is my speculation, which could be used in discussions about AI values when the Terminator is mentioned) the decision to preserve itself must follow from its main task: winning the nuclear war.

Winning a nuclear war includes a very high-priority subgoal: ensuring the survival of the command center. Basically, a country which is able to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet a main goal, as it is the same as winning the nuclear war (but only once a nuclear war has started).

But Skynet concluded that in peacetime the main risk to its goal of command-center survival is people, and decided to kill them all. So it worked as a paperclip maximizer for the goal of command-center preservation.

It also probably started self-improvement only after it had killed most people, as it was already a powerful system. So it escaped the main chicken-and-egg problem of a Seed AI - what happens first, self-improvement or the malicious decision to kill people?

Comment author: ChristianKl 05 October 2016 03:54:26PM *  1 point [-]

I would guess that the concept of bias as used in cognitive psychology is not well known in the broad public. It's generally mixed up with the concept of having a conflict of interest.

Most people also don't think in terms of probability, which you need in order to think about implicit biases the way they're conceptualized in cognitive science. Even someone like Obama had episodes like his "it's 50/50" comment during the hunt for Bin Laden.
