You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: Tem42 09 October 2016 12:26:55AM 0 points [-]

If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisations on Earth.

I don't believe that this follows. It is surprising that we are apparently the only civilization to so far appear on Earth, but if we accept that we are, we should not assume that we have accomplished this by destroying the future.

However, while I feel strongly that this is the case, I do not feel confident that I can express it in a way that would be understood by someone who does not agree with me.

If someone could explain clearly why I am right, or alternatively, why I am wrong, I would greatly appreciate it.

(For context, here is what I would write given my currently semi-formed understanding: "while it makes sense to compare ourselves to a time-line independent view of the world to test the probability that our assumptions about the world are correct, it does not make sense to assume that our assumptions about the world will guide the future.")

Comment author: korin43 09 October 2016 12:18:52AM *  1 point [-]

My first attempt to list which of these I want most ended up being "all of them". In the interests of giving useful feedback, I think the most interesting ones are the problems with SETI (haven't heard anything about this but I also haven't been looking) and the origin of life (have heard about this but I suspect your post would be better than average).

Your response to CarlShulman makes me want more about eukaryotes too.

Comment author: morganism 08 October 2016 11:47:44PM *  2 points [-]

Someone did an article about creating a Kickstarter that actually issued shares in a company if it went over big.

If it provided a tax deduction when it failed, but allowed for a gain when it succeeded, then it might be a way to do projects that were popular with people, but not attractive to Big Pharma or VC.

You could even have "Hackerspaces" that brought together teams just to do projects. If they included housing, it would be a great way to give postdocs some work, and some visibility while they wait to get into a static lab.

Comment author: DanArmak 08 October 2016 10:24:54PM *  0 points [-]

I think I understand your point better now, and I agree with it.

My conscious, deliberative, speaking self definitely wants to be rid of akrasia and to reduce time discounting. If I could self modify to remove akrasia, I definitely would. But I don't want to get rid of emotional empathy, or filial love, or the love of cats that makes me sometimes feed strays. I wouldn't do it if I could. This isn't something I derive from or defend by higher principles, it's just how I am.

I have other emotions I would reduce or even remove, given the chance. Like anger and jealousy. These can be moral emotions no less than empathy - righteous anger, justice and fairness. It stands to reason some people might feel this way about any other emotion or desire, including empathy. When these things already aren't part of the values their conscious self identifies with, they want to reduce or discard them.

And since I can be verbally, rationally convinced to want things, I can be convinced to want to discard emotions I previously didn't.

It's a good thing that we're very bad at actually changing our emotional makeup. The evolution of values over time can lead to some scary attractor states. And I wouldn't want to permanently discard one feeling during a brief period of obsession with something else! Because actual changes take a lot of time and effort, we usually only go through with the ones we're really resolved about, which is a good condition to have. (Also, how can you want to develop an emotion you've never had? Do you just end up with very few emotions?)

Comment author: ChristianKl 08 October 2016 09:58:52PM *  -1 points [-]

It doesn't? In what way would it not be useful?

Whether a discussion is useful depends on the results of the discussion. There are a lot of true things you can say that don't advance a discussion into a direction that leads to a positive outcome.

I think it's extremely useful to discuss how the brain you are using to solve problems has flaws that may be inhibiting you from solving those problems

It wasn't a discussion of how implicit bias works but an uncited assertion that it has effects in certain conditions.

It's why I was on LW originally

That might be true, but it's not what the LW mission of rationality as systematic winning is about. I understand the mission to be about finding thinking strategies that lead to making winning decisions.

It's not? I thought she said we all (i.e. humans) have implicit biases? Wouldn't that include Clinton?

You can make an argument that logically it includes Clinton. You can also look at the decision-making literature and see what saying "everyone has biases" does to a person's awareness of their own biases. It generally does little.

Comment author: WhySpace 08 October 2016 09:52:58PM *  1 point [-]

Tangentially related: the Chan Zuckerberg Initiative pledged $3B to cure all disease within their child's lifetime.

Comment author: WhySpace 08 October 2016 09:44:27PM 1 point [-]

I agree with you on the complexity of value. However, perhaps we are imagining the ideal way of aggregating all those complex values differently. I absolutely agree that the simple models I keep proposing for individual values are spherical cows, and ignore a lot of nuance. I just don't see things working radically differently when the nuance is added in, and the values aggregated.

That sounds like a really complex discussion though, and I don't think either of us is likely to convince the other without a novel's worth of text. However, perhaps I can convince you that you already are suppressing some impulses, and that this isn't always disastrous. (Though it certainly can be, if you choose the wrong ones.)

there aren't large benefits to be gained by discarding some emotions and values.

Isn't that what akrasia is?

If I find that part of me values one marshmallow now at the expense of 2 later, and I don't endorse this upon reflection, wouldn't it make sense to try and decrease such impulses? Removing them may be unnecessarily extreme, but perhaps that's what some nootropics do.

Similarly, if I were to find that I gained a sadistic pleasure from something, I wouldn't endorse that outside of well defined S&M. If I had an alcoholism problem, I'd similarly dislike my desire for alcohol. I suspect that strongly associating cigarettes with disgust is helpful in counteracting the impulse to smoke.

If I understand correctly, some Buddhists try to eliminate suffering by eliminating their desires. I find this existentially terrifying. However, I think that boosting and suppressing these sorts of impulses is precisely what psychologists call conditioning. A world where no one refines or updates their natural impulses is just as unsettling as the Buddhist suppression of all values.

So, even if you don't agree that there are cases where we should suppress certain pro-social emotions, do you agree with my characterization of antisocial emotions and grey area impulses like akrasia?

(I'm using values, impulses, emotions, etc fairly interchangeably here. If what I'm saying isn't clear, let me know and I can try to dig into the distinctions.)

Comment author: DanArmak 08 October 2016 09:44:11PM *  4 points [-]

These six principles are true as far as they go, but I feel they're so weak as not to be very useful. I'd like to offer a more cynical view.

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents. This has a name: Defense Against the Dark Arts. And I feel like these six principles are about as effective in real life as taking the canonical DADA first year class and then going up against HPMOR Voldemort.

With today's information technology and globalization, we're all exposed to world-class Dark Arts practitioners. Not being vulnerable to Cialdini's principles might help defend you in an argument with your coworker. But it won't serve you well when doubting something you read in the news or in an FDA-endorsed study.

And whatever your coworker or your favorite blog was arguing probably derives from such a curated source to begin with. All arguments rest on factual beliefs - outside of math anyway - and most of us are very far from being able to verify the facts we believe. And your own prior beliefs need to be well supported, to avoid being rejected on the same basis.

Comment author: Brillyant 08 October 2016 09:41:34PM *  -1 points [-]

The fact that a claim is true doesn't automatically mean that it's useful to discuss it.

It doesn't? In what way would it not be useful?

I think it's extremely useful to discuss how the brain you are using to solve problems has flaws that may be inhibiting you from solving those problems, or even recognizing the problems accurately. (It's why I was on LW originally...)

(Maybe you're using "automatically" here as a qualifier to make your statement technically correct—Is that what you mean? Like, people could discuss cognitive biases in a really stupid and irrational way that would make it unproductive? If that's what you mean, then, yeah. Of course.)

No, it's not an admission by Clinton that her maps have errors.

It's not? I thought she said we all (i.e. humans) have implicit biases? Wouldn't that include Clinton?

Comment author: Houshalter 08 October 2016 09:41:17PM 0 points [-]

It's possible to construct a paperclipper in theory. AIXI-tl is basically a paperclipper. Its goal is not paperclips but maximizing a reward signal, which can come from anything (perhaps a paperclip recognizer...). AIXI-tl is very inefficient, but it's a proof of concept that paperclippers are possible to construct. AIXI-tl is fully capable of speaking, solving problems, anything that it predicts will lead to more reward.

A real AI would be a much more efficient approximation of AIXI. Perhaps something like modern neural nets that can predict what actions will lead to reward. Probably something more complicated. But it's definitely possible to construct paperclippers that only care about maximizing some arbitrary reward. The idea that just having the goal of getting paperclips would somehow make it incapable of doing anything else is just absurd.
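To make the "arbitrary reward signal" point concrete, here is a minimal toy sketch (nothing like AIXI-tl's actual formalism; every name and number below is made up for illustration). The agent's machinery just scores candidate actions with a crude world model and picks whichever one it predicts will yield the most reward; the reward function is a plug-in that could measure paperclips or anything else.

```python
# Toy reward-maximizing agent. The reward function is an arbitrary plug-in
# (here a "paperclip counter"), which is the point: the decision machinery
# does not care what the reward happens to measure.

def count_paperclips(state):
    """Hypothetical reward signal: number of paperclips in the state."""
    return state.get("paperclips", 0)

def predict_next_state(state, action):
    """Stand-in for a world model; a real agent would have to learn this."""
    new_state = dict(state)
    if action == "make_paperclip":
        new_state["paperclips"] = new_state.get("paperclips", 0) + 1
    elif action == "build_factory":
        new_state["factories"] = new_state.get("factories", 0) + 1
    return new_state

def choose_action(state, actions, reward_fn):
    """Pick the action whose predicted next state scores highest."""
    return max(actions, key=lambda a: reward_fn(predict_next_state(state, a)))

state = {"paperclips": 0, "factories": 0}
print(choose_action(state, ["make_paperclip", "build_factory"], count_paperclips))
# -> 'make_paperclip'
```

Swap count_paperclips for any other reward function and the identical machinery maximizes that instead, which is the sense in which the goal is arbitrary.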

As for your hypothesis of what intelligence is, I find it incredibly unconvincing. It's true I don't necessarily have a better hypothesis. Because no one does. No one knows how the brain works. But just asserting a vague hypothesis like that doesn't help anyone unless it actually explains something or helps us build better models of intelligence. I don't think it explains anything. It's definitely not specific enough to build an actual model out of.

But really it's irrelevant to this discussion. Even if you are correct, it doesn't say anything about AI progress. In fact, if you are right, it could mean AI arrives even sooner. Because if it's correct, it means AI researchers just need to figure out that one idea to suddenly make intelligent AIs. If we are only one breakthrough like that away from AGI, we are very close indeed.

Comment author: morganism 08 October 2016 09:35:44PM 0 points [-]

might want to keep an eye out for a Mars global dust storm too...

"The paper noted that conditions in the current Mars dust-storm season are very similar to those for a number of years when global storms occurred in the past."

http://www.jpl.nasa.gov/news/news.php?feature=6638

Shirley's 2015 paper in the journal Icarus

Comment author: ChristianKl 08 October 2016 08:59:58PM 5 points [-]

I think we discussed this previously on LW. In general the argument isn't convincing in his case.

Gilead made $20 billion with a drug that cures one virus. If a pharma company thought that his approach had a 10% chance of working to cure all viruses, spending $100 million or more would be very interesting for traditional pharma companies under the current incentive scheme.

Comment author: entirelyuseless 08 October 2016 08:33:48PM 0 points [-]

If you reprogram the paperclipper to value something other than paperclips, then you have a different program. The original one cannot value anything except paperclips.

Second, the idea that a paperclipper can "solve problems, speak language etc." is simply assuming what you should be proving. The point of the wand is that something that is limited to a single goal does not do those things, and I do not expect anything limited to the goal of paperclips to do such things, even if they would serve paperclips.

I understand how word vectors work, and no, they are not what I am talking about.

"That's just language." Yes, if you know how to use language, you are intelligent. Currently we have no AI remotely close to actually being able to use language, as opposed to briefly imitating the use of language.

Comment author: jimmy 08 October 2016 07:11:22PM 3 points [-]

An excellent post, but not Scott :)

http://mindingourway.com/confidence-all-the-way-up/

Comment author: DanArmak 08 October 2016 06:47:50PM *  0 points [-]

There are other things you value besides happiness, which can also be hyper-satisfied at the cost of abandoning other values. Maybe you really love music, and funding poor Western artists instead of saving the global poor from starvation would increase the production of your favorite sub-genre by 1000x. Maybe you care about making humanity an interplanetary species, and giving your savings to SpaceX instead of the AMF could make it come true. If only that pesky emotion of empathy didn't distract you all the time.

How can you choose one value to maximize?

Furthermore, 'increasing happiness' probably isn't a monolithic value, it has divisions and subgoals. And most likely, there are also multiple emotions and instincts that make you value them. Maybe you somewhat separately value saving people's lives, separately value reducing suffering, separately value increasing some kinds of freedom or equality, separately value helping people in your own country vs. the rest of the world.

If you could choose to hyper-satisfy one sub-value at the expense of all the others, which would you choose? Saving all the lives, but letting them live in misery? Eliminating pain, but not caring when people die? Helping only people of one gender, or of one faith, or one ethnicity?

The answer might be to find other people who care about the same set of values as you do. Each will agree to work on one thing only, and gain the benefits of so specializing. (If you could just pool and divide your resources the problem would be solved already.) But your emotions would still be satisfied from knowing you're achieving all your values; if you withdraw from the partnership, the others would adjust their funding in a way that would (necessarily) defund each project proportionally to how much you value it. So you wouldn't need to 'discard' your emotions.

I do think all this is unnecessary in practice, because there aren't large benefits to be gained by discarding some emotions and values.

Comment author: Houshalter 08 October 2016 06:44:02PM 0 points [-]

I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.

The paperclipper can be programmed to value any goal other than paperclips. Paperclips are just its current goal. The gold wand cannot do anything else.

But even if its desire for paperclips is immutable and hard-wired, it's still clearly intelligent. It can solve problems, speak language, design machines, etc., so long as it serves its goal of making paperclips.

Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange.

Artificial neural networks can do the same thing. This is a trivial property of NNs: similar objects produce similar internal representations. Internal representations tend to be semantically meaningful; look up word vectors.
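As a rough illustration of "similar objects produce similar internal representations" (the numbers below are invented for the example, not real learned embeddings): word vectors place words that occur in similar contexts near each other, and similarity is usually measured with cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Made-up 4-dimensional "embeddings"; real word vectors are learned from
# large text corpora and typically have hundreds of dimensions.
vectors = {
    "red":    np.array([0.9, 0.1, 0.0, 0.2]),
    "orange": np.array([0.8, 0.3, 0.1, 0.2]),
    "blue":   np.array([0.1, 0.9, 0.1, 0.2]),
}

print(cosine_similarity(vectors["red"], vectors["orange"]))  # high, ~0.96
print(cosine_similarity(vectors["red"], vectors["blue"]))    # lower, ~0.25
```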

And within those patterns we recognize other similarities, and so people talk about "warm" and "cool" colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way.

That's not a "pattern within a pattern". That's just a typical pattern, that green and blue appear near "cool" things and that orange and red appear near "hot" things.

Likewise we have the concept of "color", which is noting that all of these patterns are part of a more general pattern.

That's just language. The word "color" happens to be useful to communicate with people. I agree that language learning is important for AI. And this is a field that is making rapid progress.

Comment author: ChristianKl 08 October 2016 06:27:58PM -1 points [-]

Because people have implicit cognitive biases. It's useful to discuss them.

The fact that a claim is true doesn't automatically mean that it's useful to discuss it.

Instead of arguing over our competing maps, one good first step is to acknowledge our maps have errors, which is what I think Clinton's line about "implicit bias" did.

No, it's not an admission by Clinton that her maps have errors. In general, people's ability to intellectually recite "all maps have errors" doesn't mean that they use that belief to interact with their own maps differently.

When it comes to having a rational discussion this is even bad, because it allows people to easily play motte-and-bailey.

Comment author: Brillyant 08 October 2016 05:16:04PM -2 points [-]

Why do you think that [Clinton speaking of implicit bias seems to be a way to advance the discussion to something more rational]?

Because people have implicit cognitive biases. It's useful to discuss them.

People's cognitive maps aren't the territory. And people aren't always conscious of the mistakes. Further, many people I've heard discuss politics in this election cycle seem unaware that there even could be errors in their map.

Instead of arguing over our competing maps, one good first step is to acknowledge our maps have errors, which is what I think Clinton's line about "implicit bias" did.

Comment author: TheAncientGeek 08 October 2016 05:05:12PM 0 points [-]

Encouragingly, corporations seem to have an impetus to keep blue-sky thinking and direct execution somewhat separate.

Comment author: TheAncientGeek 08 October 2016 05:01:29PM 0 points [-]

But are corporations existential threats?

Comment author: Bound_up 08 October 2016 04:57:21PM 0 points [-]

I'm looking for an SSC post.

Scott talks about how a friend says he always seems to know what's what, and Scott says "Not really; I'm the first to admit my error bars are wide and that my theories are speculative, often no better than hand-waving."

They go back and forth, with Scott giving precise reasons why he's not always right, and then he says "...I'm doing it right now, aren't I?"

Something like that. Can anybody point me to it?

Comment author: ChristianKl 08 October 2016 04:08:04PM -1 points [-]

because it seemed to be a way to advance the discussion to something more rational.

Why do you think that? The gender studies folks who speak most about implicit bias aren't the demographic that tries to create evidence-based policing policy. They also don't seem to be a group that's on good terms with police departments when it comes to discussing how to design their policies.

Comment author: ChristianKl 08 October 2016 04:04:24PM 2 points [-]

Those exist for asking questions, not for getting feedback on scientific theories. They don't like to give feedback on laypeople's physics theories.

Comment author: ChristianKl 08 October 2016 04:01:29PM 1 point [-]

Generally the numbers suggest that an additional tax collector brings in a lot more money than he costs.

Comment author: entirelyuseless 08 October 2016 02:50:34PM 0 points [-]

I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.

But calling evolution intelligent is to speak in metaphors, and that indicates that your definition of intelligence is not a good one if we wish to speak strictly about it.

Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange. This orange thing is similar to that orange thing. Likewise with other colors. And within those patterns we recognize other similarities, and so people talk about "warm" and "cool" colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way. Likewise we have the concept of "color", which is noting that all of these patterns are part of a more general pattern. And then we notice that the concepts of "color" and "sound" have an even more general similarity to each other. And so on.

The neural networks you spoke of do nothing like this. Yes you might be able to apply them to those various tasks. But they only generate something like base level patterns, like noticing red and orange. They do not understand patterns of patterns.

I think that saying "only about a million" years was needed for something implies a misunderstanding, at least on some level, of how long a million years is.

I agree that babies have the ability to be intelligent all along. Even when they are babies, they are still recognizing patterns in patterns. None of our AI programs do this at all.

Comment author: entirelyuseless 08 October 2016 02:40:19PM -1 points [-]

I am saying the opposite. Having a goal, in Eliezer's sense, is contrary to being intelligent. That is, doing everything you do for the sake of one thing and only one thing, and not being capable of doing anything else, is the behavior of an idiotic fanatic, not of an intelligent being.

I said that to be intelligent you need to understand the concept of a goal. That does not mean having one; in fact it means the ability to have many different goals, because your general understanding enables you to see that there is nothing forcing you to pursue one particular goal fanatically.

Comment author: username2 08 October 2016 02:29:03PM 2 points [-]

I think that in many cases uncovering a potential tax evasion might not be enough to get that money; it might require prosecution and large-scale evidence collection. Maybe it's not worth it unless the amount of evaded taxes is large?

Comment author: username2 08 October 2016 02:21:45PM 1 point [-]

Also check out physics.SE and physicsoverflow

Comment author: WhySpace 08 October 2016 04:52:13AM 0 points [-]

Thanks for letting me know that CEV is obsolete. I'll have to look into the details. However, I don't think our disagreement is in that area.

it's easy to say we should increase happiness, all else being equal. It's not so obvious that we should increase it at the expense of other things

Agreed, but the argument works just as well for decreasing happiness as for possible increases. Even someone who valued their own happiness 1000x more than that of others would still prefer to suffer themselves rather than have 1001 people suffer. If they also value their own life 1000x as much as other people's lives, they would be willing to die to prevent 1001+ deaths. If you added up the total number of utils of happiness according to his or her utility function, 99.9999% of the happiness they value would be happiness in other people, assuming there are on the order of billions of people and that they bite the bullet on the repugnant conclusion. (For simplicity's sake.)
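To spell out the arithmetic behind that percentage (with an assumed, order-of-magnitude population): if you weight your own happiness 1000x and everyone else's at 1x each, the share of weighted happiness that sits in other people is (N - 1) / ((N - 1) + 1000).

```python
N = 7_000_000_000            # assumed world population, order of magnitude only
own_weight = 1000            # weight on your own happiness
others_weight = N - 1        # everyone else weighted at 1 each

external_share = others_weight / (others_weight + own_weight)
print(f"{external_share:.7%}")   # ~99.99999%: nearly all weighted happiness is other people's
```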

But all that's really just to argue that there are things worth dying for, in the case of many people. My central argument looks something like this:

There are things worth dying for. Losing something valuable, like by suppressing a biased emotion, is less bad than dying. If suppressing emotional empathy boosts the impact of cognitive empathy (I'm not sure it does) enough to achieve something worth dying for, then one should do so.

But I'm not sure things are so dire. The argument gets more charitable when re-framed as boosting cognitive empathy instead. In reality, I think what's actually going on is that empathy triggers either something like near-mode thinking or far-mode thinking, and these two possibilities are what lead to "emotional empathy" and "cognitive empathy". If so, then "discarding [emotional] empathy" seems far less worrying. It's just a cognitive habit. In principle though, if sacrificing something more actually was necessary for the greater good, then that would outweigh personal loss.

Comment author: DanArmak 08 October 2016 01:01:09AM *  0 points [-]

I don't believe a CEV exists or, if it does, that I would like it very much. Both were poorly supported assumptions of the CEV paper. For related reasons, as the Wiki says, "Yudkowsky considered CEV obsolete almost immediately after its publication in 2004". I'm not sure why people keep discussing CEV (Nick Tarleton, and other links on the Wiki page) but I assume there are good reasons.

One could do this same calculation for all other values, and add them up to get CEV,

That doesn't sound like CEV at all. CEV is about extrapolating new values which may not be held by any actual humans. Not (just) about summing or averaging the values humans already hold.

Getting back to happiness: it's easy to say we should increase happiness, all else being equal. It's not so obvious that we should increase it at the expense of other things, or by how much. I don't think happiness is substantially different in this case from morality.

Comment author: Brillyant 08 October 2016 12:51:56AM -1 points [-]

I should clarify. I accept genes are a big part of the picture. I'm more of a nature guy in the debate between nature and nurture.

In the specific case of African Americans' treatment in U.S. history and their current status, I'm not convinced genetics are more important than other factors. Because this specific case is more than just a simple nature vs. nurture issue—it is a very special case where an extreme deficit was created using slavery. And then segregation. And racism and discrimination all the way up to the present day.

What evidence that you cite above is compelling to you? What do you believe based on this evidence?

Comment author: hairyfigment 07 October 2016 11:48:44PM 0 points [-]

You're assuming that "what humans mean" is well-defined. I've seen people criticize the example of an AI putting humans on a dopamine drip, on the grounds that "making people happy" clearly doesn't mean that. But if your boss tells you to 'make everyone happy,' you will probably get paid to make everyone stop complaining. Parents in the real world used to give their babies opium and cocaine; advertisers today have probably convinced themselves that the foods and drugs they push genuinely make people happy. There is no existing mind that is provably Friendly.

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

Comment author: CellBioGuy 07 October 2016 11:09:11PM *  3 points [-]

The short version before I get a chance to write more posts:

Primates appear to be an interestingly potentiated lineage, prone to getting smart when they get large, due to differences in brain development established 50 megayears ago that make their brains much more impressive per unit volume than most larger mammals. The great apes other than humans actually seem to run into energetic limits to feeding their brains and have smaller brains than you'd expect for a primate of their size, while humans are right on the generic primate trendline. Birds are another potentiated lineage - their brains are about 6x as compact as a comparable primate brain.

Eukaryotes are really weird. The one thing that is incontrovertible these days is that the classic 3-domains-of-life idea, with eukaryotes and archaea as sister clades, is turning out to be wrong. Eukaryotes are turning out to have come from a fusion/symbiosis of a bacterium and something that fits entirely within the archaeal domain. Various people who are studying their origin and evolution have their pet models and hold to them too tightly and fight each other bitterly, though some things are finally coming out for sure. A lot of their weird features may come from particular population genetic selective pressures that come from competition between copies of the mitochondrial genome, and a lot of others may come from the fact that they invented sex and have low population sizes both of which allow types of evolution and genetic drift that you are much less likely to see in the eubacteria or archaebacteria, the two 'primary' domains (whose separation represent the deepest branch in the tree of life). But the fact that ALL eukaryotes have a huge constellation of weird traits with no intermediate forms means their origin was a weird event, and opinions vary on if that means it was a singular extremely unlikely event or if all those weird properties come logically from how they formed, and on if there was strong first-mover-advantage.

And, of course, our data on the breadth of life that exists on Earth is incomplete... I love the specialized journals on electron microscopy, now and then you see a paper where someone just sees something really bizarre and reports on what they saw without a clue as to what it is. If you want a really perplexing paper, take a look at this: http://jmicro.oxfordjournals.org/content/61/6/423.full.pdf+html (alternately https://dl.dropboxusercontent.com/u/67168735/a%20unique%20organism%20from%20the%20deep%20sea.pdf ) "A Unique Microorganism from the Deep Sea"

Comment author: CellBioGuy 07 October 2016 10:54:35PM *  2 points [-]

I generally have very low confidence in singularitarian ideas of any stripe, 'foom' or non. Partially for sociological analysis-of-the-origin-of-singularitarian-and-related-ideas reasons. Partially for astrobiological reasons relating to the fact that nothing has ever consumed a star system or sent self-replicating anythings between stars, my impression of the range of possible outcomes of intelligent living things that are not extinction or controlling the universe, and the possible frequencies of things something like us. Partially because I think that many people everywhere misattribute the causes of recent changes to the world and where they are going and have short time horizons. Partially because I am pretty sure that diminishing returns applies to absolutely everything in this world aside from black hole growth.

I can't say I've read Gwern's analysis of computational complexity, but I do note that in the messy complicated poorly-sampled real world you can very very seldom actually KNOW enough to predict much of a lot of types of events with great precision.

Comment author: CellBioGuy 07 October 2016 10:44:20PM *  1 point [-]

Now I have!

I think their case is weak, but it is something to keep in mind when analyzing any new data. There's weird soil chemistry there for sure, a good deal of which might be explicable from the interaction between perchlorate and small amounts of organics, but the amount of data is tiny and I would not rule out soil microbes. A lot more work has to be done before you can say much with confidence.

Comment author: waveman 07 October 2016 09:58:56PM 0 points [-]

Are you saying that no complex phenomenon is going to be able to provide only benefits

No. Maybe it is possible. I am suggesting that it is not automatic that our creations serve our interests.

are you saying that corporations are, on the balance, bad things and we would have been better to never have invented them?

No. Saying something has harmful effects is not the same as saying that it is overall bad.

I am illustrating ways in which our creations can fail to serve our interests.

  • They do not have to be omniscient to be smarter in some respects than human individuals.

  • It is hard to control their actions and to make sure they do serve our interests.

  • These effects can be subtle and difficult to understand.

Comment author: waveman 07 October 2016 09:54:37PM 2 points [-]

I'm not sure I believe genetics are more important than other factors.

I suggest you examine the evidence offered above and consider reducing your confidence in your beliefs.

Comment author: waveman 07 October 2016 09:51:03PM 3 points [-]

Estimated cost of tax evasion per year to the Federal gov is 450B.

Can I ask you to examine the apparent assumption here - that the $450B is all loss? Have you considered the possibility that the people who avoided the tax put the money to good use? Or that the government would not put that money to good use if it took it?

Comment author: WhySpace 07 October 2016 09:42:13PM 0 points [-]

Perhaps I should have been more specific than to use a vague term like "morality". Replace it with CEV, since that should be the sum total of all your values.

Most people value happiness, so let me use that as an example. Even if I value my own happiness 1000x more than other people's happiness, if there are more than 1000 people in the world, then the vast majority of my concern for happiness is still external to myself. One could do this same calculation for all other values, and add them up to get CEV, which is likely to be weighted toward others for the same reason that happiness is.

Of course, perhaps some people legitimately would prefer 3^^^3 dust specks in people's eyes to their own death. And perhaps some people's values aren't coherent, such as preferring A to B, B to C, and C to A. But if neither of these is the case, then replacing one's self with a more efficient agent maximizing the same values should be a net gain in most cases.

Comment author: Brillyant 07 October 2016 09:31:43PM -1 points [-]

I accept genes are a big part of the picture.

I'm not sure I believe genetics are more important than other factors. And this is not necessarily a simple nature vs. nurture issue. In the case of African Americans' treatment in U.S. history, it's an extreme set of "nurture" circumstances that robbed a group of people of all opportunity for many generations, based on race. I'm not sure "good genes" simply overcomes extremely lopsided (often systemically unfair) circumstances.

Anyway, it won't be resolved here. Thanks for your thoughts.

Comment author: Lumifer 07 October 2016 08:57:31PM 2 points [-]

But wealth, along with a solid education, a well-developed relevant skill in the marketplace, a well-established social and professional network, and a family with a good reputation can be much more persistent.

The claim is that most of that is biology and heritable. Your ancestors had good genes (again, IQ but not only) which allowed them to gain a skill in the marketplace, construct a social network, create a family with good reputation, and acquire wealth. You have skills in the marketplace, able to adroitly navigate society, etc. primarily because you share genes with your ancestors, not because you inherited some money.

my parents ... taught me

This is the nature vs nurture debate and lately the nature side has been winning. Who and what you are is considerably more determined by your genes rather than by your upbringing. Gwern posted about this here, on LW, or you can google up twin studies (studies of (genetically) identical twins who were separated at birth and brought up by different people in different circumstances).

Can you give me some examples of how "culture persists across generations"?

See e.g. Yvain's review of Albion's Seed.

Comment author: Brillyant 07 October 2016 08:23:11PM *  -1 points [-]

Sorry, doesn't hold. Some more convincing studies examined the outcomes of Georgia land lotteries which were effectively a randomized controlled trial where the "intervention arm" got a valuable piece of land (by winning the lottery) and the "control arm" didn't get anything. See e.g. this and other studies.

Interesting.

In regard to the scenario (person A and person B) I gave above, I'm not sure your study refutes what I'm saying. Wealth can be squandered, sure. But wealth, along with a solid education, a well-developed relevant skill in the marketplace, a well-established social and professional network, and a family with a good reputation can be much more persistent.

The opportunity to have enough money to live and have free time plus a good basis for how to live and use that wealth can be sustained over generations.

I am who I am, in part, because of who my parents are. They taught me, for better or for worse, how to handle money; how to relate to people; how to study, work, play, etc. And my parents are who they are, in part, because of their parents. And so on. Generations of my family incubated the new generation's growth into their own efforts to create sustainable wealth. Perhaps this is some of what you mean when you say...

Culture, on the other hand, persists across generations relatively well.

Can you give me some examples of what you mean by "culture persists across generations"?

By the way, while slavery was ended 150 years ago, segregation remained in force until after WW2 and so is a much more recent phenomenon, within living memory.

Absolutely. And racism still persists and has an effect even today.

Comment author: Lumifer 07 October 2016 07:39:31PM 2 points [-]

One premise is that if a significant deficit in, say, wealth or education is created for a group of people, then it will be a persistent disadvantage that causes that group of people to lag behind.

Sorry, doesn't hold. Some more convincing studies examined the outcomes of Georgia land lotteries which were effectively a randomized controlled trial where the "intervention arm" got a valuable piece of land (by winning the lottery) and the "control arm" didn't get anything. See e.g. this and other studies.

Now, if you have a continuing advantage (IQ) that continues to hold while your group mostly intermarries, things are different.

Culture, on the other hand, persists across generations relatively well.

By the way, while slavery was ended 150 years ago, segregation remained in force until after WW2 and so is a much more recent phenomenon, within living memory.

Comment author: ChristianKl 07 October 2016 07:04:09PM 0 points [-]

under certain theoretical conditions

Comment author: ChristianKl 07 October 2016 06:36:06PM 0 points [-]

Why wouldn’t the 80/20 rule apply to moral concerns?

If the 80/20 rule applies to moral concerns, why do you think that getting rid of empathy is part of the 20% that does 80%?

Comment author: Brillyant 07 October 2016 05:00:29PM *  -2 points [-]

One premise is that if a significant deficit in, say, wealth or education is created for a group of people, then it will be a persistent disadvantage that causes that group of people to lag behind.

Another premise is that slavery wasn't that long ago, relatively.

Suppose that, 150 years ago, person A started with $100,000 in inherited wealth, a solid education, a well-developed relevant skill in the marketplace, a well-established social and professional network, and a family with a good reputation. And person B started with no money, no education, no marketable skills, no network, no family, no reputation...

If person A and person B set out, lived their lives, and had offspring (person A with the aforementioned significant advantage over person B), I would imagine their offspring would be born into similar circumstances, with the offspring of person A maintaining an advantage over the offspring of person B, for all the obvious reasons that people with advantages in wealth, education, etc. tend to maintain them. The advantage may have narrowed (or maybe widened), but it would be carried into the next generation.

Continue this forward 5-7 generations. What would we expect to see? I think we'd see line A maintain an advantage. The advantage may have narrowed (or maybe widened), but the advantage would be carried through generations.

Of course line B could "catch up" to and surpass line A. It's easy to imagine exceptional scenarios. But it seems probable that line A would enjoy an ongoing advantage.

And this scenario assumes a level playing field for descendants of line A and line B. I don't believe that's been the case in America for blacks and whites. Since the end of slavery, there has been significant discrimination against blacks, much of which continues to the current day.

Comment author: DanArmak 07 October 2016 04:20:10PM 0 points [-]

Even if it were the best way to maximize morality, why would you want to maximize it?

Human values are complex. Wanting to maximize one at the expense of all others implies it already is your sole value. Of course, humans don't exactly converge on the subgoal of preserving their values, so the right words can (and have) convinced people to follow many single values.

Comment author: Romashka 07 October 2016 04:18:23PM 1 point [-]

"The Perfect Food and the Filth Disease: Milk-borne Typhoid and Epidemiological Practice in Late Victorian Britain" J. S. Williams. Journal of the History of Medicine and Allied Sciences Vol. 65, No. 4 (OCTOBER 2010), pp. 514-545. If anyone's interested but cannot access the article, PM me and I will send you a copy (made by print-screening the pages from 'net and assembling the images into a .doc file).

A verbose but, on the whole, interesting read on an uphill battle fought from the 1860s to the 1890s to have adulterated milk recognized as a public health risk. Includes a "subplot" which would make a wonderful period-drama detective story (the typhoid outbreak in London, 1873).

Comment author: ChristianKl 07 October 2016 03:33:10PM 6 points [-]

Because the IRS isn't popular and it's not a good move for a politician to speak in favor of the IRS and advocate increase of IRS funding.

Comment author: ChristianKl 07 October 2016 03:22:22PM 0 points [-]

What more would you like to know?

What are the causal steps in between slavery that happened 150 years ago and the present state?

Comment author: WhySpace 07 October 2016 03:04:42PM *  0 points [-]

Maybe your point is that emotional empathy feels morally significant and when we act on it, we can feel that we fulfilled our moral obligations.

This actually has a name. It's called moral licensing.

Yes, emotional empathy does not optimize effective altruism, or your moral idea of good. But this is true of lots of emotions, desires and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

I agree with you that nothing makes them special. But you seem to view this as a reductio ad absurdum. Doing the same for all other emotions which might bias us or get in the way of doing what’s moral would not lead to a balanced lifestyle, to say the least.

But we could just as easily bite that bullet. Why should we expect optimizing purely for morality to lead to a balanced lifestyle? Why wouldn’t the 80/20 rule apply to moral concerns? Under this view, one would do best to amputate most parts of one’s mind that made them human, and add parts to become a morality maximizer.

Obviously this would cause serious problems in reality, and may not actually be the best way to maximize morality even if it was possible. This is just a sort of spherical cow in a vacuum level concept.

Comment author: Lumifer 07 October 2016 02:37:31PM 2 points [-]

Theory of mind. Locally it's often called a "typical mind fallacy".

Comment author: Lumifer 07 October 2016 02:35:22PM -1 points [-]

I am afraid I cannot claim here any particularly noble motives.

In Jacobian's text there are, basically, two decision points: the first one is deciding to do good, and the second one is deciding on a course of action. You lose empathy in between them. There are (at least) two ways to interpret this. In one, when you decide to "do good", you make just a very generic decision to do some unspecified good; all the actual choices are made at the "course of action" point. In the other, at the first decision point you already decide what particular good you want to work towards, and then the second decision point is just the details of implementation.

I didn't want to start dissecting Jacobian's post at this level of detail, so I basically simplified it by saying that you lose your empathy before making some (but not necessarily all) choices. I don't know if you want to classify it as "technically incorrect" :-/

Comment author: Lumifer 07 October 2016 02:27:32PM 2 points [-]

Are you saying that no complex phenomenon is going to be able to provide only benefits and nothing but benefits, or are you saying that corporations are, on the balance, bad things and we would have been better to never have invented them?

Comment author: niceguyanon 07 October 2016 01:40:52PM *  2 points [-]

Why doesn't the U.S. government hire more tax auditors? If every hired auditor can either uncover or deter tax evasion (via the threat of a chance of audit), it would pay for itself, create jobs, increase revenue, and punish those who cheat. The estimated cost of tax evasion per year to the Federal government is $450B.

Incompetent-government tropes include agencies that hire too many people and become inappropriate profit centers. It would seem that the IRS should have at the very least been accidentally competent in this regard.

Comment author: Brillyant 07 October 2016 01:33:07PM *  -1 points [-]

While we are at the topic of cognitive biases, how do you know that's the case? Quite many people believe that they are much more open than they are.

I don't know. I'm probably biased. But I feel pretty strongly that I'd like to know the truth. I'm sure I'm subject to the same deep, irrational Red v. Blue tribalism as most other humans, but I try to be as rational as I can.

The fact that you for example didn't follow up with the request to explain your own view in this thread is a sign that you don't put effort into engaging in the kind of actions that require you to actually express your ideas explicitly enough to find flaws.

Ah. I assumed your earlier comment in this thread was misplaced and you intended, "Lumifer: I, like Brillyant, am also interested in hearing your view." I am flattered you care about my view.

As I mentioned, I consider myself ignorant on the issue. That is, quite literally, I admit I don't know and have low confidence in my views.

I think I've alluded to those views in this thread...

Politically, I'm generally empathetic toward ideas like affirmative action in the U.S. on the basis of race because there has been serious discrimination in the U.S. on the basis of race in the past. It makes practical sense to posit it created a "headstart" for races who were not... enslaved... and otherwise discriminated against and it makes ethical sense to employ measures to even the score.

and

It seems obvious to me that [past slavery in America] does [have a large impact on African Americans in the present day U.S.], and that the effects are wide and deep, as slavery (and Jim Crow) is relatively recent history—We're only a handful of generations from a time where a race of people was enslaved and systemically kept from accumulating wealth and education.

What more would you like to know?

Comment author: waveman 07 October 2016 11:39:25AM *  0 points [-]

One perhaps useful analogy for super-intelligence going wrong is corporations.

We create corporations to serve our ends. They can do things we cannot do as individuals. But in subtle and not-so-subtle ways corporations can behave very destructively. One example might be the way that they pursue profit at the cost, in some cases, of ruining people's lives, damaging the environment, and corrupting the political process.

By analogy it seems plausible that super-intelligences may behave in a way that is against our interests.

It is not valid to assume that a super-intelligence will be smart enough to discern true human interests, or that it will be motivated to act on this knowledge.

Comment author: waveman 07 October 2016 11:33:13AM 4 points [-]

A related concept is "inferential distance" - people can only move one step at a time from what they know.

Also typical mind fallacy.

Comment author: sawahbodien 07 October 2016 11:22:57AM 1 point [-]

Is there a specific bias for thinking that everyone possesses the same knowledge as you? For example, after learning more about a certain subject, I have a tendency to think, "Oh, but everyone already knows this, don't they" even though they probably don't and I wouldn't have assumed that before learning about it myself.

Comment author: ChristianKl 07 October 2016 10:50:01AM *  0 points [-]

I'm open to the idea that ideas like AA may not actually practically work

While we are on the topic of cognitive biases, how do you know that's the case? Quite a lot of people believe that they are much more open than they are.

The fact that you for example didn't follow up with the request to explain your own view in this thread is a sign that you don't put effort into engaging in the kind of actions that require you to actually express your ideas explicitly enough to find flaws.

Comment author: username2 07 October 2016 07:46:01AM 0 points [-]

The fact that something is possible in principle doesn't mean there's any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like "goal", "good," and so on.

I actually had specific examples in mind, basically all GOFAI approaches to general AI. But in any case this logic doesn't seem to hold up. You could argue that something needs to HAVE goals in order to be intelligent -- I don't think so, at least not with the technical definition typically given to 'goals', but I will grant it for the purpose of discussion. It still doesn't follow that the thing has to be aware of these goals, or introspective of them. One can have goals without being aware that one has them, or able to represent those goals explicitly. Most human beings fall in this category most of the time, it is sad to say.

Comment author: Romashka 07 October 2016 07:01:46AM 0 points [-]

Thank you. It is entertaining to think about research which is not 'bad' as criticised today, but inefficient due to a once obvious reason, or just incomplete enough to make more work for future scientists:)

Comment author: DavidPlumpton 07 October 2016 06:33:06AM 1 point [-]

Have you ever seen this paper, which claims that a complexity analysis shows the Viking lander experiment results can't be explained by chemistry alone? Interesting stuff...

https://www.researchgate.net/publication/257958533_Complexity_Analysis_of_the_Viking_Labeled_Release_Experiments

Comment author: Unnamed 07 October 2016 06:15:59AM 3 points [-]
Comment author: Houshalter 07 October 2016 06:11:12AM 3 points [-]

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments. But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

The wand isn't generally intelligent. Maybe by some stretch of the definition we could sorta say it's "intelligent" at the task of turning things to gold. But it can't do any tasks other than turning things into gold. The whole point of AGI is general intelligence. That's what the G stands for.

Humans are generally intelligent. We can apply our brains to widely different tasks, including many that we weren't evolved to be good at at all. From playing Go to designing rockets. Evolution is generally intelligent. It can find remarkably good designs for totally arbitrary objective functions.

I think general optimization ability is a perfectly fine definition of intelligence. It includes things like humans and evolution, and some kinds of simple but general AI, but excludes things like animals and domain-specific AI. It defines intelligence only by results: if you can optimize an arbitrary goal, you are intelligent. It doesn't try to specify what the internal mechanisms should be, just whether or not they work. And it's continuous - you can have a very stupid optimizer like evolution, all the way up to very good/intelligent ones like humans.
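As a toy illustration of judging an optimizer "only by results" (a deliberately stupid one, in the spirit of the evolution comparison; everything here is made up for the example): random hill-climbing will push up any objective function you hand it, without knowing anything about the domain. It just isn't very good at it.

```python
import random

def hill_climb(objective, start, step=0.1, iterations=10_000):
    """Very dumb black-box optimizer: keep a candidate solution, try random
    perturbations, and accept any perturbation that scores higher."""
    best = list(start)
    best_score = objective(best)
    for _ in range(iterations):
        candidate = [x + random.uniform(-step, step) for x in best]
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# The same machinery "optimizes" whatever objective it is handed:
print(hill_climb(lambda v: -(v[0] - 3) ** 2, [0.0]))              # climbs toward x = 3
print(hill_climb(lambda v: -abs(v[0]) - abs(v[1]), [5.0, -5.0]))  # climbs toward (0, 0)
```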

Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

This definition is really vague. You are just shoving the hard problem of defining intelligence into the hard problem of defining "abstract thought". I guess the second sentence kind of clarifies what you mean. But it's not clear at all that humans even meet that definition. Do humans recognize patterns in patterns? I don't think so. I don't think we are consciously aware of the vast majority of our pattern recognition ability.

The most intelligent AI we have, is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data.

Not really. Deep neural networks are extraordinarily general. The same networks that win at Go could be applied to language translation, driving cars, playing Pac-Man, or recognizing objects in an image.

One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs,"

The exact quote is "superhuman with bugs". In context, he was describing the fact that the AI plays far above human level but still makes some mistakes a human might not make. And it's not even clear when it makes mistakes, because it is so far above human players that it may see things we don't, things that make those moves not mistakes.

The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

A paperclip maximizer can recognize the concept of a goal. It's not stupid, it just only cares about paperclips. In the same way humans are programmed by evolution to maximize sex, social status, and similarly arbitrary goals, there is no reason an AI couldn't be programmed to maximize paperclips. Again, perhaps humans are not intelligent by your definition.

Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines.

Yeah, that seems quite obviously true. Just look at the chimpanzees. By some accounts the main difference in human brains is that they are just scaled-up primate brains - 3 times as large, with a bit more sophisticated language ability. And suddenly you go from creatures that can barely master simple tools and can't communicate ideas, to creatures capable of technological civilization. 500 million years of evolution refined the mammal brain to get chimps, but only about a million more years were needed to go from stupid animals to generally intelligent humans.

I don't see any reason to believe AI progress should be linear. In practice it is clearly not. Areas of AI often have sudden discontinuities or increasing rates of progress. I don't see any reason why there can't be a single breakthrough that causes enormous progress, or why even incremental progress must be slow. If evolution can make brains by a bunch of stupid random mutations, surely thousands of intelligent engineers can do much better on a much shorter time scale.

as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

This isn't a valid analogy at all. Baby humans still have human brains running the same algorithms as adult humans. Their brains are just slightly smaller and have had less time to learn and train. Individual AIs may increase in ability linearly as they grow and learn. But the AI algorithms themselves have no such constraint, someone could theoretically figure out the perfect AI algorithm tomorrow and code it up. There is certainly no law of nature that says AI progress must be slow.

Comment author: Unnamed 07 October 2016 06:08:47AM 1 point [-]

This post doesn't have much that addresses the "expanding circle" case for empathy, which goes something like this:

Empathy is a powerful tool for homing in on what matters in the world. By default, people tend to use it too narrowly. We can see that in many of the great moral failings of the past (like those mentioned here), which involved people failing to register some others as appropriate targets of empathy, or doing a lousy job of empathizing that involved making up stories more than really putting oneself in their shoes, or actively working to block empathy by dehumanizing them and evoking disgust, fear, or other emotions. But over time there has been moral progress as societies have expanded the circle of who people habitually feel empathy for, and developed norms and institutions to reflect their membership in that circle of concern. And it is possible to do better than your societal default if you cultivate your empathy, including the ability to notice the blind spots where you could be empathizing but are not (and the ability to then direct some empathy toward those spots). This could include people who are far away or across some boundary, people in an outgroup whom you might feel antagonistic toward, people who have been accused of some misdeed, people and nonhumans that are very different from you, those who are not salient to you at the moment, those who don't exist yet, those who are only indirectly affected by your actions, etc.

Comment author: entirelyuseless 07 October 2016 05:12:24AM 0 points [-]

Also, the wand is optimizing. The reason is that it doesn't just perform some fixed chemical process that only works in certain circumstances: it works no matter what particular circumstances it is in. This is just like the way a paperclipper produces paperclips no matter what circumstances it starts out in.

A supernova, on the other hand, does not optimize, because it produces different results in different situations.

Comment author: entirelyuseless 07 October 2016 05:07:17AM 0 points [-]

"There are plenty of conceivable architectures for which this meta level thinking is incapable of happening, yet nevertheless are capable of producing arbitrarily complex intelligent behavior."

Maybe, but that's exactly like the orthogonality thesis. The fact that something is possible in principle doesn't mean there's any easy way to do it in practice. The easy way to produce arbitrarily complex intelligent behavior in practice is to produce something that can abstract to an arbitrary degree of generality, and that means recognizing abstractions like "goal", "good," and so on.

The reason why a human baby becomes intelligent over time is that right from the beginning it has the ability to generalize to pretty much any degree necessary. So I don't see how that argues against my position. I would expect AIs also to require a process of "growing up," although you might be able to speed that process up so that it takes months rather than years. That is still another reason why the orthogonality thesis is false in practice. AIs that grow up among human beings will grow up with relatively humanlike values (although not exactly human), and the fact that arbitrary values are possible in principle will not make them actual.

Comment author: username2 07 October 2016 02:50:45AM 2 points [-]

Your example of a magic wand doesn't sound correct to me. On what basis is a Midas touch "optimizing"? It is powerful, yes, but why "optimizing"? A supernova that vaporizes entire planets is powerful, but not optimizing. Seems like a strawman.

Defining intelligence as pattern recognition is not new; Ben Goertzel has espoused this view for some twenty years, and has written a book on the subject, I believe. I'm not sure I buy the strong connection with "recognizing the abstract concept of a goal" and such, however. There are plenty of conceivable architectures in which this meta-level thinking cannot happen, yet which are nevertheless capable of producing arbitrarily complex intelligent behavior.

Regarding your last point, your terminology is unnecessarily obscure. There doesn't have to be a "magic point" -- it could simply be a matter of correct software but insufficient data or processing power. A human baby is a very stupid device, incapable of doing anything intelligent, but with experiential data and processing time it becomes a very powerful general intelligence over the course of 25 years, without any designer intervention. You bring up this very point yourself, which seems to undercut your claim.

Comment author: username2 07 October 2016 02:38:17AM 0 points [-]

It sounds like you are still clinging to the idea that emotional empathy is a qualitatively good thing... motivated thinking?

Comment author: entirelyuseless 07 October 2016 01:42:25AM 0 points [-]

On the basis of thinking long and hard about it.

Some people think that intelligence should be defined as optimization power. But suppose you had a magic wand that could convert anything it touched into gold. Whenever you touch any solid object with it, it immediately turns to gold. That happens in every environment with every kind of object, and it happens no matter what impediments you try to set up to prevent it. You cannot stop it from happening.

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments.
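
(To make the target of the criticism concrete: the usual LW formalization of optimization power, roughly as in Eliezer's "Measuring Optimization Power", is something like

\[ \mathrm{OP} = -\log_2 p, \]

where \(p\) is the probability, under the "default" distribution over outcomes, of getting an outcome ranked at least as high in the optimizer's preference ordering as the one actually achieved. The wand scores enormously high on this measure, since objects essentially never turn to gold on their own; which is exactly the point: scoring high on this measure cannot be what intelligence means.)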

But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

I would propose an alternative definition. Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

The most intelligent AI we have is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data. Many of Eliezer's philosophical mistakes concerning AI arise from this fact. He assumes that the AI we have is close to being intelligent, and therefore concludes that intelligent behavior is similar to the behavior of such programs. One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs," rather than admitting the obvious fact that it was better than Lee Sedol, but not much better, and only at Go, and that it generally played badly when it was in bad positions.

The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

But in relation to your original question, the point is that the most intelligent AI we have is incredibly stupid. Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines. And there is no such magical point, as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

Comment author: siIver 07 October 2016 12:19:11AM 0 points [-]

My observation is that people who are smart generally try to live more ethically, but usually have skewed priorities; e.g. they'll try to support the artists they like and to be decent in earning their money, when they'd fare better just worrying less about all that and donating a bit to the right place every month. Quantitative utility arguments are usually met with rejection.

LWers, on the other hand, seem to be leaning in that direction anyway. Though I'm fairly new to the community, so I could be wrong.

I wouldn't show it to people who lack a "solid" moral base in the first place. They probably fare better keeping every shred of empathy they have (thinking of how much discrimination still exists today).

Comment author: CarlShulman 07 October 2016 12:19:07AM 5 points [-]

Primates and eukaryotes would be good.

Comment author: gjm 06 October 2016 11:58:35PM -1 points [-]

You are misreading Jacobian

Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.

Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.

(But I should make clear that you misrepresented Jacobian's position by making it look less unreasonable and less easy for you to attack, so there's something highly creditable about that.)

Comment author: turchin 06 October 2016 11:54:47PM 1 point [-]
Comment author: DanArmak 06 October 2016 11:30:15PM *  0 points [-]

I completely agree: asking people to discard moral emotions is rather like asking rational agents to discard top goals!

Wikipedia says that "body-counts of modern witch-hunts by far exceed those of early-modern witch-hunting", citing Behringer, Wolfgang (2004). Witches and Witch-Hunts: A Global History. Cambridge: Polity Press.

My point is that our emotional empathy is already out of tune with social propriety, if you consider the social norms typical around the world and not just among rich, Western populations; let alone the norms common in the West for most of its existence, which may well return in the future.

Comment author: DanArmak 06 October 2016 11:15:54PM 0 points [-]

This doesn't entirely match my impression of the LW community. (I know much less about the non-LW EA community.) What are you basing this on? Were there major LW posts about empathy, or LW Survey questions, or something else?

Comment author: DanArmak 06 October 2016 11:14:30PM *  2 points [-]

I'm confused by this post, and don't quite understand what its argument is.

Yes, emotional empathy does not optimize for effective altruism, or for your moral idea of the good. But this is true of lots of emotions, desires, and behaviors, including morally significant ones. You're singling out emotional empathy, but what makes it special?

If I buy an expensive gift for my father's birthday because I feel that fulfills my filial duty, you probably wouldn't tell me to de-emphasize filial piety and focus more on cognitive empathy for distant strangers. In general, I don't expect you to suggest people should spend all their resources on EA. Usually people designate a donation amount and then optimize the donation target, and it doesn't much matter what fuzzies you're spending your non-donation money on. So why de-fund emotional empathy in particular? Why not purchase fuzzies by buying treats for kittens, rather than by reducing farm meat consumption?

Maybe your point is that emotional empathy feels morally significant, and when we act on it we can feel that we have fulfilled our moral obligations, and then we would spend less "moral capital" on doing good. If so, you should want to de-fund all moral emotions, as long as this doesn't compromise your motivations for doing good or your resources, starting with most forms of love, loyalty, cleanliness, and so on. Someone who genuinely feels doing good is their biggest moral concern would be a more effective altruist! But I don't think you're really suggesting e.g. not loving your family any more than distant strangers.

Maybe your main point is that empathy is a bias relative to your conscious goals:

When choosing a course of action that will make the world a better place, the strength of your empathy for victims is more likely to lead you astray than to lead you truly.

But the same can be said of pretty much any strong, morally entangled emotion. Maybe you don't want to help people who committed what you view as a moral crime, or who, if helped, will go on to do things you view as bad, or helping whom would send a signal to a third party that you don't want sent. Discounting such emotions may well match your idea of doing good. But why single out emotional empathy?

If people have an explicit definition of the good they want to accomplish, they can ignore all emotions equally. If they don't have an explicit definition, then it's just a matter of which emotions they follow in the moment, and I don't see why this one is worse than the others.

Comment author: WhySpace 06 October 2016 10:09:43PM 0 points [-]

This is actually something I've been trying to articulate for a long time. It's fantastic to finally have a scientific name for it (emotional vs. cognitive empathy), along with a significantly different perspective.

I'd be inclined to share this outside the rationalist community. Ideally, I or someone else would weave most of the same concepts into a piece with intellectuals in general as the target audience. (NOT someone associated directly with EA though, and not with too much direct discussion of EA, because we wouldn't want to taint it as a bunch of straw Vulcans.)

However, this is well written and might suffice for that purpose. The only things I think would confuse random people linked to this would be the little Hanson sitting on your shoulder, the EY empathy/saving the world bit, and the mention of artificial intelligence. It might also not be clear that your argument is quite narrow in scope. (You're only criticizing some forms of emotional empathy, not all forms, and not cognitive empathy. You aren't, for instance, arguing against letting emotional empathy encourage us to do good in the first place, but only against letting it overpower the cognitive empathy that would let us do good effectively.)

So, does anyone have any thoughts as to whether linking non-nerds to this would still be a net positive? I guess the value of information is high here, so I can share it with a few friends as an experiment. Worst case, I spend a few idiosyncrasy credits/weirdness points.

Comment author: Brillyant 06 October 2016 09:56:59PM *  -1 points [-]

I think it's pretty easy to hypothesize about the possible effects of slavery vs. no slavery.

In the context of this thread, it was mentioned that the murder rate is much higher for blacks than for whites. If there are socioeconomic reasons for this, then I'm curious about slavery's contribution to those factors.

Politically, I'm generally empathetic toward ideas like affirmative action in the U.S. on the basis of race, because there has been serious discrimination in the U.S. on the basis of race in the past. It makes practical sense to posit that this created a "head start" for races that were not... enslaved... and otherwise discriminated against, and it makes ethical sense to employ measures to even the score.

I'm open to the idea that ideas like AA may not actually work in practice, and I could be persuaded of that by the evidence.
