Comment author: Metus 08 July 2014 11:15:06AM 3 points

I know politics is the mindkiller and arguments are soldiers, yet still the question looms large: what makes some people more susceptible to arguing about politics and ideology? I know of people I can talk to while having differing points of view and just go "well, seems like we disagree" and carry on a conversation. Conversations with other people invariably disintegrate into political argument with neither side yielding.

Why?

Comment author: BaconServ 09 July 2014 06:43:11PM 0 points

Do you find yourself refusing to yield in the latter case but not the former? Or is this purely an external observation of mutually unrelenting parties?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

Comment author: DanielDeRossi 09 July 2014 01:05:33PM 3 points

Are there any resources (on LessWrong or elsewhere) I can use for improving my social effectiveness and social intelligence? It's something I'd really like to improve on so I can understand social situations better and perhaps improve the quality of my social interactions.

Comment author: BaconServ 09 July 2014 06:32:40PM 2 points

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Want more friends?

Comment author: BaconServ 12 January 2014 09:46:28PM 2 points

I'm assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue about which you hold no real preference. Maybe it would be better to do this with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make: ten months' worth of payment for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility accompanies that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass an in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic for how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment harbors even less utility than $50/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how deeply entrenched confirmation bias is in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you that there are better investments to make (and therefore to stop making this particular one) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: a reason to not invest at all.

In response to MIRI strategy
Comment author: [deleted] 30 October 2013 12:05:00AM 9 points

I do not understand why MIRI hasn't produced a non-technical piece (pamphlet/blog post/video) to persuade people that UFAI is a serious concern.

It would be far more useful if MIRI provided technical argumentation for its Scary Idea. There are a lot of AGI researchers, myself included, who remain entirely unconvinced. AGI researchers - the people who would actually create a UFAI - are paying attention and are not sufficiently convinced to change their behavior. Shouldn't that be of more concern than a non-technical audience?

A decade of effort on EY's part has taken the idea of friendliness mainstream. It is now accepted as fact by most AGI researchers that intelligence and morality are orthogonal concepts, despite contrary intuition, and that even with the best of intentions a powerful, self-modifying AGI could be a dangerous thing. Where beliefs differ is in the probability assigned to that "could."

Has the community responded? Yes. Quite a few mainstream AGI researchers have proposed architectures for friendly AGI, or realistic boxing/oracle setups, or a friendliness analysis of their own AGI design. To my knowledge MIRI has yet to engage with any of these proposals. Why?

I want a believable answer to that before a non-technical pamphlet or video, please.

In response to comment by [deleted] on MIRI strategy
Comment author: BaconServ 30 October 2013 12:15:45AM 0 points

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

In response to comment by BaconServ on MIRI strategy
Comment author: passive_fist 29 October 2013 06:46:46PM -1 points

Based on my (subjective and anecdotal, I'll admit) personal experiences, I think it would be bad. Look at climate change.

In response to comment by passive_fist on MIRI strategy
Comment author: BaconServ 29 October 2013 11:41:02PM 0 points

Is there something wrong with climate change in the world today? Yes, it's hotly debated by millions of people, a super-majority of them being entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?

It's easy to look back after the fact and say, "The market handled it!" But the truth is that the publicity and the corresponding opinions of thousands of entrepreneurs are part of that market.

Looking at the two markets:

  1. MIRI's warning of uFAI is popularized.
  2. MIRI's warning of uFAI continues in obscurity.

The latter just seems a ton less likely to mitigate uFAI risks than the former.

In response to comment by lukeprog on MIRI strategy
Comment author: ColonelMustard 29 October 2013 12:44:57PM 1 point

Thanks, Luke. This is an informative reply, and it's great to hear you have a standard talk! Is it publicly available, and where can I see it if so? Maybe MIRI should ask FOAFs to publicise it?

It's also great to hear that MIRI has tried one pamphlet. I would agree that "This one pamphlet we tried didn't work" points us in the direction that "No pamphlet MIRI can produce will accomplish much", but that proposition is far from certain. I'd still be interested in the general case of "Can MIRI reduce the chance of UFAI x-risk through pamphlets?"

Pamphlets...don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.

You may be right. But it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this), and anything that can replicate that process in a manner that scales well would have a huge impact. What's the Value of Information on trying to do that? You mention the Sequences and HPMOR (which I've sent to a number of people with the instruction "set aside what you're doing and read this"). I definitely agree that they filter nicely for "able to think". But they also require a huge time commitment on the part of the reader, whereas a pamphlet or blog post would not.

Comment author: BaconServ 29 October 2013 11:21:02PM 0 points

It could be useful to attach a note to any given pamphlet: "If you didn't like or agree with the contents of this pamphlet, please tell us why at ..."

Personally, I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and see if a second draft has the same flaws.

Comment author: Lumifer 29 October 2013 12:51:59AM 0 points

why not put out pamphlets in which Jesus, Muhammad and Krishna discuss AGI?

Jesus: We excel at absorbing external influences and have no problems with setting up new cults (just look at the Virgin Mary) -- so we'll just make a Holy Quadrinity! Once you go beyond monotheism there's no good reason to stop at three...

Muhammad: Ah, another prophet of Allah! I said I was the last but maybe I was mistaken about that. But one prophet more, one prophet less -- all is in the hands of Allah.

Krishna: Meh, Kali is more impressive anyways. Now where are my girls?

In response to comment by Lumifer on MIRI strategy
Comment author: BaconServ 29 October 2013 01:08:11AM 1 point

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 29 October 2013 12:09:00AM -3 points

Really, it's not. Tons of people discuss politics without getting their briefs in a knot about it. It's only people that consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out-of-hand as unintelligent isn't that common elseways.

There's a reason it's general advice not to talk about religion, sex, and politics, and it's not because the average person does well in discussing politics.

Dismissing your opponent out of hand as unintelligent isn't the only failure mode of political mindkill, and I don't even think it's the most important one.

You're not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.

How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?

Take two important environmental challenges and look at Obama's first term. One is limiting CO2 emissions. The second is limiting mercury pollution.

The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions.

CO2 emissions are a very politically charged issue with a lot of mindkill on both sides, while mercury pollution isn't. The people who pushed for mercury pollution regulation won, and not because they wrote a lot of letters.

Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it.

If you want to do something, you can earn to give and give money to MIRI.

People researching AI who've argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky's argument has gained widespread attention, but if it pressures them to properly address Yudkowsky's arguments, then it has legitimately helped.

You don't get points for pressuring people to address arguments. That doesn't prevent a UFAI from killing you.

UFAI is an important problem but we probably don't have to solve it in the next 5 years. We do have some time to do things right.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 29 October 2013 12:53:46AM -3 points

How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?

NSA spying isn't a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it's just against NSA spying doesn't seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA—or any government organization, for that matter—develops is made friendly to the best of its builders' abilities. The NSA doesn't need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.

If you want to do something you can, earn to give and give money to MIRI.

That is not a valid path if MIRI is willfully ignoring valid solutions.

You don't get points for pressuring people to address arguments. That doesn't prevent a UFAI from killing you.

It does if the people addressing those arguments come to learn or accept the danger of unfriendliness in the process of being pressured to do so.

We probably don't have to solve it in the next 5 years.

Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI's warning isn't properly heeded?

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 28 October 2013 09:23:36PM -1 points

Is there reason to believe someone in the field of genetic engineering would make such a mistake?

Because those people do engineer plants to produce pesticides. The Bt potato was the first; it was approved by the FDA in 1995.

The commercial incentives that exist encourage the development of such products. A customer in a store doesn't see whether a potato is engineered to have more vitamins. He doesn't see whether it's engineered to produce pesticides.

He buys a potato. It's cheaper to grow potatoes that produce their own pesticides than it is to grow potatoes that don't.

In the case of potatoes it might be harmless. We don't eat the greens of the potato anyway, so why worry if the greens carry additional poison? But you can slip up. Biology is complicated. You could have changed something that also causes the poison to be produced in the edible parts.

It seems like the FUD should just be motivating them to understand the risks even more

It's not a question of motivation. Politics is the mindkiller. If a topic gets political, people on all sides of the debate get stupid.

This just doesn't seem very realistic when you consider all the variables.

According to Eliezer, it takes strong math skills to see how an AGI can take over its own utility function and is therefore dangerous. Eliezer made the point that it's very difficult to explain to people who are invested in their AGI design that it's dangerous, because that part needs complicated math.

It's easy to say in the abstract that some AGI might become UFAI, but it's hard to do that assessment for any individual proposal.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 09:47:51PM -7 points

Politics is the mindkiller.

Really, it's not. Tons of people discuss politics without getting their briefs in a knot about it. It's only people that consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out-of-hand as unintelligent isn't that common elseways. People, by and large, are willing to seriously debate political issues. "Politics is the mind-killer" is a result of some pretty severe selection bias.

Even ignoring that, you've only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can't get the concept to penetrate the academic circles where AI is likely to be developed, we're not yet mitigating the threat. A thousand angry letters demanding that this research "stop at once" or "address the issue of friendliness" aren't easy to ignore—no matter how bad you think the arguments for uFAI are.

You're not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who've argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky's argument has gained widespread attention, but if it pressures them to properly address Yudkowsky's arguments, then it has legitimately helped.

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 28 October 2013 08:09:43PM 2 points

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers?

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but to me it doesn't sound like a road we should go down.

I wouldn't want to buy such food in the supermarket, but I have no problem with buying genetically modified food that adds extra vitamins.

Then there are various issues with introducing new species. Issues about monocultures. Bioweapons.

after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

The whole work is dangerous. Safety is really hard.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 08:53:46PM -2 points

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people's misconceptions on the issue.

Your reasoning for why the "bad" publicity would have severe (or any notable) repercussions isn't apparent.

If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, hey those people are wrong I'm smart enough to program an AGI that does what I want.

This just doesn't seem very realistic when you consider all the variables.
