Comment author: Metus 08 July 2014 11:15:06AM *  3 points [-]

I know politics is the mindkiller and arguments are soldiers, yet still the question looms large: What makes some people more susceptible to arguing about politics and ideology? I know people I can talk to while holding differing points of view; we just go "well, seems like we disagree" and carry on the conversation. Conversations with other people invariably disintegrate into political argument, with neither side yielding.

Why?

Comment author: BaconServ 09 July 2014 06:43:11PM 0 points [-]

Do you find yourself refusing to yield in the latter case but not the former case? Or is this observation of mutually unrelenting parties purely an external observation?

If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.

Comment author: DanielDeRossi 09 July 2014 01:05:33PM 3 points [-]

Are there any resources (on LessWrong or elsewhere) I can use for improving my social effectiveness and social intelligence? It's something I'd really like to improve on, so I can understand social situations better and perhaps improve the quality of my social interactions.

Comment author: BaconServ 09 July 2014 06:32:40PM 2 points [-]

Where to start depends highly on where you are now. Would you consider yourself socially average? Which culture are you from and what context/situation are you most immediately seeking to optimize? Is this for your occupation? Want more friends?

Comment author: BaconServ 12 January 2014 09:46:28PM 2 points [-]

Assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what normally would be a trivial issue that holds no real preference for you. Maybe it would be better to do it with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make. Ten months' worth of payment for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility will accompany that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic for how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment harbors even less utility than $50/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how severely confirmation bias exists in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you that there are better investments to make (and therefore to stop making this particular investment) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: A reason to not invest at all.

In response to MIRI strategy
Comment author: [deleted] 30 October 2013 12:05:00AM *  9 points [-]

I do not understand why MIRI hasn’t produced a non-technical (pamphlet/blog post/video) to persuade people that UFAI is a serious concern.

It would be far more useful if MIRI provided technical argumentation for its Scary Idea. There are a lot of AGI researchers, myself included, who remain entirely unconvinced. AGI researchers - the people who would actually create a UFAI - are paying attention and are not sufficiently convinced to change their behavior. Shouldn't that be of more concern than a non-technical audience?

A decade of effort on EY's part has taken the idea of friendliness mainstream. It is now accepted as fact by most AGI researchers that intelligence and morality are orthogonal concepts, despite contrary intuition, and that even with the best of intentions a powerful, self-modifying AGI could be a dangerous thing. The difference in belief is in the probability assigned to that could.

Has the community responded? Yes. Quite a few mainstream AGI researchers have proposed architectures for friendly AGI, or realistic boxing/oracle setups, or a friendliness analysis of their own AGI design. To my knowledge MIRI has yet to engage with any of these proposals. Why?

I want a believable answer to that before a non-technical pamphlet or video, please.

In response to comment by [deleted] on MIRI strategy
Comment author: BaconServ 30 October 2013 12:15:45AM 0 points [-]

In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?

In response to comment by BaconServ on MIRI strategy
Comment author: passive_fist 29 October 2013 06:46:46PM -1 points [-]

Based on my (subjective and anecdotal, I'll admit) personal experiences, I think it would be bad. Look at climate change.

In response to comment by passive_fist on MIRI strategy
Comment author: BaconServ 29 October 2013 11:41:02PM 0 points [-]

Is there something wrong with climate change in the world today? Yes, it's hotly debated by millions of people, a super-majority of them being entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in "green" and alternative energy if not for the publicity surrounding climate change?

It's easy to look back after the fact and say, "The market handled it!" But the truth is that the publicity, and the corresponding opinions of thousands of entrepreneurs, are part of that market.

Looking at the two markets:

  1. MIRI's warning of uFAI is popularized.
  2. MIRI's warning of uFAI continues in obscurity.

The latter just seems a ton less likely to mitigate uFAI risks than the former.

In response to comment by lukeprog on MIRI strategy
Comment author: ColonelMustard 29 October 2013 12:44:57PM *  1 point [-]

Thanks, Luke. This is an informative reply, and it's great to hear you have a standard talk! Is it publicly available, and where can I see it if so? Maybe MIRI should ask FOAFs to publicise it?

It's also great to hear that MIRI has tried one pamphlet. I would agree that "This one pamphlet we tried didn't work" points us in the direction that "No pamphlet MIRI can produce will accomplish much", but that proposition is far from certain. I'd still be interested in the general case of "Can MIRI reduce the chance of UFAI x-risk through pamphlets?"

Pamphlets...don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.

You may be right. But, it is possible to convince intelligent non-rationalists to take UFAI x-risk seriously in less than an hour (I've tested this), and anything that can do that process in a manner that scales well would have a huge impact. What's the Value of Information on trying to do that? You mention the Sequences and HPMOR (which I've sent to a number of people with the instruction "set aside what you're doing and read this"). I definitely agree that they filter nicely for "able to think". But they also require a huge time commitment on the part of the reader, whereas a pamphlet or blog post would not.

Comment author: BaconServ 29 October 2013 11:21:02PM *  0 points [-]

It could be useful to attach a note to any given pamphlet: "If you didn't like/agree with the contents of this pamphlet, please tell us why at"

Personally, I'd find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and see whether a second draft has the same flaws.

Comment author: Lumifer 29 October 2013 12:51:59AM *  0 points [-]

why not put out pamphlets in which Jesus, Muhammad and Krishna discuss AGI?

Jesus: We excel at absorbing external influences and have no problems with setting up new cults (just look at Virgin Mary) -- so we'll just make a Holy Quadrinity! Once you go beyond monotheism there's no good reason to stop at three...

Muhammad: Ah, another prophet of Allah! I said I was the last but maybe I was mistaken about that. But one prophet more, one prophet less -- all is in the hands of Allah.

Krishna: Meh, Kali is more impressive anyways. Now where are my girls?

In response to comment by Lumifer on MIRI strategy
Comment author: BaconServ 29 October 2013 01:08:11AM 1 point [-]

That would probably upset many existing Christians. Clearly Jesus' second coming is in AI form.

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 28 October 2013 08:09:43PM 2 points [-]

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers?

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but to me it doesn't sound like a road we should go down.

I wouldn't want to buy such food in the supermarket, but I have no problem with buying genetically manipulated food that adds extra vitamins.

Then there are various issues with introducing new species, issues with monocultures, and bioweapons.

after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

The whole work is dangerous. Safety is really hard.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 08:53:46PM -2 points [-]

Letting plants grow their own pesticides to kill off the things that eat them sounds to me like a bad strategy if you want healthy food.

Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn't someone in the field be more aware of that and other potential dangers, despite the GE FUD they've no doubt encountered outside of academia? It seems like the FUD should just motivate them to understand the risks even more—if for no other reason than simply to correct people's misconceptions on the issue.

Your reasoning for why the "bad" publicity would have severe (or any notable) repercussions isn't apparent.

If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."

This just doesn't seem very realistic when you consider all the variables.

In response to MIRI strategy
Comment author: passive_fist 28 October 2013 08:08:17PM 5 points [-]

Overexposure of an idea can be harmful as well. Look at how Kurzweil promoted his idea of the singularity. While many of the underlying ideas (such as intelligence explosion) are solid, to a large extent people don't take Kurzweil seriously anymore.

It would be useful to debate why Kurzweil isn't taken seriously anymore. Is it because of the fraction of his predictions that were wrong? Or is it simply because of the way he's presented them? Answering these questions would help us avoid ending up as Kurzweil has.

In response to comment by passive_fist on MIRI strategy
Comment author: BaconServ 28 October 2013 08:39:21PM *  1 point [-]

While not doubting the accuracy of the assertion: why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this bad for him, his goals, or the effect he has on society?

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 28 October 2013 07:27:05PM 4 points [-]

Politically, people who fear AI might go after companies like Google.

but if the public at large started really worrying about uFAI, that's kind of the goal here.

I don't think that the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain it.

If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, "Hey, those people are wrong; I'm smart enough to program an AGI that does what I want."

I mean, take a topic like genetic engineering. There are valid dangers involved in genetic engineering. On the other hand, the people who think that all genetically manipulated food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 07:37:04PM 0 points [-]

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we'll only need to do once; after it's known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers here.

Google probably already has an AI (and AI-risk) team internally that they've simply had no reason to publicize. If uFAI becomes widely worried about, you can bet they'd make it known that they were taking their own precautions.
