In response to MIRI strategy
Comment author: passive_fist 28 October 2013 08:08:17PM 5 points

Overexposure of an idea can be harmful as well. Look at how Kurzweil promoted his idea of the singularity. While many of his ideas (such as intelligence explosion) are solid, to a large extent people no longer take Kurzweil seriously.

It would be useful to debate why Kurzweil isn't taken seriously anymore. Is it because of the fraction of his predictions that were wrong? Or is it simply because of the way he's presented them? Answering these questions would help us avoid ending up as Kurzweil has.

In response to comment by passive_fist on MIRI strategy
Comment author: BaconServ 28 October 2013 08:39:21PM *  1 point

While I don't doubt the accuracy of the assertion, why precisely do you believe Kurzweil isn't taken seriously anymore, and in what specific ways is this a bad thing for him, for his goals, or for the effect he has on society?

In response to comment by BaconServ on MIRI strategy
Comment author: ChristianKl 28 October 2013 07:27:05PM 4 points

Politically, people who fear AI might go after companies like Google.

but if the public at large started really worrying about uFAI, that's kind of the goal here.

I don't think that the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain it.

If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say: hey, those people are wrong; I'm smart enough to program an AGI that does what I want.

Take a topic like genetic engineering. There are valid dangers involved in it. On the other hand, the people who think that all genetically modified food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 07:37:04PM 0 points

Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to address the issue seriously. Ideally, this is something we'll only need to do once; after the risk is known and taken seriously, the people who work on AI will be under intense pressure to ensure they're avoiding the dangers.

Google probably already has an internal AI (and AI-risk) team that they've simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they'd make it known that they were taking their own precautions.

In response to MIRI strategy
Comment author: lukeprog 28 October 2013 06:24:28PM *  36 points
  • Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
  • Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the people were being filtered for "interest in future technology" rather than "able to think," and thus when Eliezer would make basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he would get were basically random (except for the few good people). So Eliezer wrote The Sequences and HPMoR and now the filter is "able to think" or at least "interest in improving one's thinking," and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
  • Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good effect this talk will have. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
  • I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
In response to comment by lukeprog on MIRI strategy
Comment author: BaconServ 28 October 2013 07:22:59PM *  -2 points

Ask all of MIRI's donors, all LW readers, HPMOR subscribers, friends and family, etc., to forward that one document to their friends.

There has got to be enough writing by now that an effective chain mail can be written.

ETA: The chain mail suggestion isn't knocked down in lukeprog's comment. If it's not relevant or worth acknowledging, please explain why.

ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.

In response to MIRI strategy
Comment author: ChristianKl 28 October 2013 04:33:42PM *  4 points

One possible response is “it’s not possible to persuade people without math backgrounds, training in rationality, engineering degrees, etc”. To which I reply: what’s the data supporting that hypothesis? How much effort has MIRI expended in trying to explain to intelligent non-LW readers what they’re doing and why they’re doing it? And what were the results?

Convincing people in Greenpeace that UFAI presents a risk they should care about has its own dangers. There's a risk that you associate caring about UFAI with Luddites.

If you get the broad public to care about the topic without really understanding it, it gets political. It makes sense to push the idea in a way where a smart MIT kid doesn't hear about the dangers of UFAI for the first time from a Luddite, but from someone he can intellectually respect.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 07:12:11PM 0 points

Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.

Comment author: Viliam_Bur 27 October 2013 10:14:01PM 3 points

That's odd and catches me completely off guard.

How specifically can you be surprised to hear "be specific" on LessWrong? (Because that's more or less what Nancy said.) If nothing else, this suggests that your model of LessWrong is seriously wrong.

Giving specific examples of "LessWrong is unable to discuss X, Y, Z" is so much preferable to saying "you know... LessWrong is a hivemind... there are things you can't think about..." without giving any specific examples.

Comment author: BaconServ 27 October 2013 10:27:50PM -4 points

How specifically? Easy. Because LessWrong is highly dismissive, and because I've been heavily signalling that I don't have any actual arguments or criticisms. I do, obviously, but I've been signalling that that's just a bluff on my part, up to and including this sentence. Nobody's supposed to read this and think, "You know, he might actually have something that he's not sharing." Frankly, I'm surprised that, with all the attention this article got, I haven't been downvoted a hell of a lot more. I'm not sure where I messed up such that LessWrong isn't hammering me and is actually bothering to ask for specifics, but you're right; it doesn't fit the pattern I've seen prior to this thread.

I'm not yet sure where the limits of LessWrong's patience lie, but I've come too far to stop trying to figure that out now.

Comment author: shinoteki 27 October 2013 09:43:45PM *  3 points

LessWrong is sci-fi. Check what's popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...

It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Who is to say there even are concepts that the human mind simply can't grasp? I can't visualize in n-dimensional space, but I can certainly understand the concept.

The human mind is finite, and there are infinitely many possible concepts. If you're interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate.

Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be?

Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology "Scientific"? and How probable is Molecular Nanotech?

Comment author: BaconServ 27 October 2013 10:14:57PM -2 points

People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we'd need a comprehensive run-down of how many, and which, fictional concepts have become feasible to date. I'd love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I'm not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don't have the technology already. The answer is that actually translating our ideas into physical reality is non-trivial, and by direct consequence, potentially non-viable.

The human mind is finite, and there are infinitely many possible concepts.

I need backing on both of these points. As far as I know, there isn't enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don't actually know how abstract mental concepts map onto physical neurons. Setting aside the fact that (contrary to memetic citation) the adult brain does grow new neural cells and repair itself, even if the number of neurons is finite, the number of potential connections between them is astronomical; a rough sketch of the scale is below. We simply don't know the maximum conceptual complexity of the human brain.
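(A back-of-the-envelope sketch of that scale. The neuron and synapse counts are commonly cited round-number estimates assumed for illustration, not figures from this thread.)

```python
import math

# Rough, commonly cited estimates (assumptions for illustration only).
neurons = 8.6e10            # ~86 billion neurons in an adult human brain
synapses_per_neuron = 1e4   # on the order of 10^3-10^4 synapses each

# Actual synapses: already enormous.
total_synapses = neurons * synapses_per_neuron
print(f"synapses: ~{total_synapses:.1e}")  # ~8.6e14

# Possible wiring patterns: treat each potential pairwise connection as
# a simple on/off switch, giving 2^(n*(n-1)/2) distinct graphs.
possible_pairs = neurons * (neurons - 1) / 2
log10_patterns = possible_pairs * math.log10(2)
print(f"possible wiring patterns: ~10^{log10_patterns:.1e}")  # ~10^(1.1e21)
```

Even under these crude assumptions, the count of possible wiring patterns is a number with on the order of 10^21 digits, which is the sense in which "astronomical" undersells it.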

As far as there being infinitely many concepts: "flying car" isn't terribly more complicated than "car" and "flying." Even if something in the far future is given a name other than "car," we can still grasp the concept of "transportation device," paired with any number of accessory concepts like "cup holder," "flies," "transforms," "teleports," and so on. Maybe it's closer to a "suit" than anything we would currently call a "car"; some sort of "jetpack" or other. I'd need an expansion on "concept" before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I'm aware of indicates that things like conceptual language are uncomputable, or give rise to paradoxes, or have some other such problem that would make "infinite" simply inapplicable or nonsensical.
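(An aside to make the countable/uncountable question concrete, under the assumption, not made above, that a "concept" can be identified with a finite description over some finite alphabet Σ.)

```latex
% Finite descriptions over a finite alphabet form a countable set:
\[
  |\Sigma^{*}| \;=\; \Bigl|\,\bigcup_{n=0}^{\infty} \Sigma^{n}\Bigr| \;=\; \aleph_0 .
\]
% By Cantor's theorem, the set of all extensions (arbitrary sets of
% descriptions) is strictly larger, hence uncountable:
\[
  |\mathcal{P}(\Sigma^{*})| \;=\; 2^{\aleph_0} \;>\; \aleph_0 .
\]
```

On that reading, everything finitely describable lives in a merely countably infinite space, and an uncountable concept-space would have to include concepts no language can express, which may be where "inapplicable/nonsensical" gets its bite.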

Comment author: TheOtherDave 27 October 2013 08:26:44PM 0 points

(nods) IOW, it merely demonstrates our inadequate levels of self-awareness and meta-cognition.

Comment author: BaconServ 27 October 2013 09:00:48PM -4 points

This doesn't actually counter my argument, for two main reasons:

  1. That wasn't my argument.
  2. That doesn't counter anything.

Please don't bother replying to me unless you're going to actually explain something. Anything else is disuseful and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you're not willing to explain that, I'm not interested.

What, do you just ignore it?

Comment author: TheOtherDave 27 October 2013 06:11:31PM 3 points

Sarcasm.
We get the "oh this is just like theism!" position articulated here every ten months or so.
Those of us who have been here a while are kind of bored with it.
(Yes, yes, yes, no doubt that simply demonstrates our inadequate levels of self-awareness and metacognition.)

Comment author: BaconServ 27 October 2013 08:13:30PM -4 points

What, and you just ignore it?

No, I suppose you'll need a fuller description to see why the similarity is relevant.

  1. LessWrong is sci-fi. Check what's popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
  2. These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can't grasp? I can't visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don't need an entire ecosystem on board? If complex information processing nanites aren't feasible, is reanimation? These concepts aren't new, they've been around for ages. It's Magic 2.0.
  3. If it's not about evidence, what is it about? I'm not denying any of these possibilities, but aside from being fun ideas, we are nowhere near close to proving them legitimate. It's not something people are believing in because "it only makes sense." It's fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn't? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don't have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means that we're closer to finally realizing the technology! Grow up already. This stuff isn't reasonable, it's just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
  4. If it's not rationa—No, you've stopped following along by now. It's not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can't make an argument within the context that it's irrational because you've heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because "it's obvious" and you don't like the implications?

Seriously. Grow up. If there's a reason for me to think LessWrong isn't filled with children who like to believe in Magic 2.0, I'm certainly not seeing it.

Comment author: Creutzer 27 October 2013 11:37:57AM *  3 points

A habit I find my mind practicing incredibly often is simulation of the worst case scenario. [...]

I'm not saying this is generally inadvisable, but it seems dangerous for some kinds of people because of a serious possible failure mode: by focusing on half-plausible worst-case scenarios, you will cause yourself to assign additional probability to them. Furthermore, they will sometimes come true, which will give you a feeling that you were right to imagine them, an impression of confirmation, which could lead to a problematic spiral. If you have any inclination towards social anxiety, practice with extreme caution!

Comment author: BaconServ 27 October 2013 07:13:52PM 0 points

That's true. The process does rely on finding a solution to the worst-case scenario. If you're going to be crippled by fear or anxiety, it's probably a very bad practice to emulate.

Comment author: BaconServ 27 October 2013 07:52:29AM 0 points

Christ, is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn I hope nobody replies to my comments...
