In response to MIRI strategy
Comment author: lukeprog 28 October 2013 06:24:28PM *  36 points [-]
  • Pamphlets work for wells in Africa. They don't work for MIRI's mission. The inferential distance is too great, the ideas are too Far, the impact is too far away.
  • Eliezer spent SIAI's early years appealing directly to people about AI. Some good people found him, but the people were being filtered for "interest in future technology" rather than "able to think," and thus when Eliezer would make basic arguments about e.g. the orthogonality thesis or basic AI drives, the responses he would get were basically random (except for the few good people). So Eliezer wrote The Sequences and HPMoR and now the filter is "able to think" or at least "interest in improving one's thinking," and these people, in our experience, are much more likely to do useful things when we present the case for EA, for x-risk reduction, for FAI research, etc.
  • Still, we keep trying direct mission appeals, to some extent. I've given my standard talk, currently titled "Effective Altruism and Machine Intelligence," at Quixey, Facebook, and Heroku. This talk explains effective altruism, astronomical stakes, the x-risk landscape, and the challenge of FAI, all in 25 minutes. I don't know yet how much good effect this talk will have. There's Facing the Intelligence Explosion and the forthcoming Smarter Than Us. I've spent a fair amount of time promoting Our Final Invention.
  • I don't think we can get much of anywhere with a 1-page pamphlet, though. We tried a 4-page pamphlet once; it accomplished nothing.
In response to comment by lukeprog on MIRI strategy
Comment author: BaconServ 28 October 2013 07:22:59PM *  -2 points [-]

Ask all of MIRI’s donors, all LW readers, HPMOR subscribers, friends and family etc, to forward that one document to their friends.

There has got to be enough writing by now that an effective chain mail can be written.

ETA: The chain mail suggestion isn't knocked down in luke's comment. If it's not relevant or worthy of acknowledging, please explain why.

ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.

In response to MIRI strategy
Comment author: ChristianKl 28 October 2013 04:33:42PM *  4 points [-]

One possible response is “it’s not possible to persuade people without math backgrounds, training in rationality, engineering degrees, etc”. To which I reply: what’s the data supporting that hypothesis? How much effort has MIRI expended in trying to explain to intelligent non-LW readers what they’re doing and why they’re doing it? And what were the results?

Convincing people in Greenpeace that UFAI presents a risk they should care about has its own dangers. There's a risk that you associate caring about UFAI with luddites.

If you get the broad public to care about the topic without really understanding it, it becomes political. It makes sense to push the idea in a way where a smart MIT kid doesn't hear about the dangers of UFAI for the first time from a luddite, but from someone he can intellectually respect.

In response to comment by ChristianKl on MIRI strategy
Comment author: BaconServ 28 October 2013 07:12:11PM 0 points [-]

Is "bad publicity" worse than "good publicity" here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that's kind of the goal here.

Comment author: shinoteki 27 October 2013 09:43:45PM *  3 points [-]

LessWrong is sci-fi. Check what's popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...

It is true that people have written unrealistic books about these things. People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Who is to say there even are concepts that the human mind simply can't grasp? I can't visualize in n-dimensional space, but I can certainly understand the concept.

The human mind is finite, and there are infinitely many possible concepts. If you're interested in the limits of human intelligence and the possibilities of artificial intelligence, you might want to read The Hanson-Yudkowsky Debate.

Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be?

Drexler wrote a PhD thesis which probably answers this. For discussion on LessWrong, see Is Molecular Nanotechnology "Scientific"? and How probable is Molecular Nanotech?.

Comment author: BaconServ 27 October 2013 10:14:57PM -2 points [-]

People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.

Naturally, some of the ideas fiction holds are feasible. For your analogy to apply, however, we'd need a comprehensive run-down of how many, and which, fictional concepts have become feasible to date. I'd love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I'm not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don't have the technology already. The answer is that actually translating our ideas into physical reality is non-trivial, and by direct consequence, potentially non-viable.

The human mind is finite, and there are infinitely many possible concepts.

I need backing on both of these points. As far as I know, there isn't enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don't actually know how abstract mental concepts map onto physical neurons. Even setting aside that (contrary to memetic citation) the adult brain does grow new neural cells and repair itself, and even if the number of neurons is finite, the number of potential connections between them is astronomical. We simply don't know the maximum conceptual complexity of the human brain.
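The "astronomical" claim above can be made concrete with a rough back-of-the-envelope calculation. The neuron and synapse counts below are order-of-magnitude textbook figures, not numbers taken from this thread:

```python
import math

# Rough order-of-magnitude figures (assumptions, not measurements):
NEURONS = 8.6e10      # ~86 billion neurons in an adult human brain
SYNAPSES_PER = 7e3    # ~several thousand synapses per neuron

total_synapses = NEURONS * SYNAPSES_PER

# Even counting only which synapses are "on" or "off", the number of
# distinct wiring states is 2**total_synapses. Computing that number
# directly is infeasible, but its decimal digit count is:
digits = total_synapses * math.log10(2)

print(f"~{total_synapses:.1e} synapses")
print(f"2^synapses has ~{digits:.1e} decimal digits")
```

So even under this crude binary model, merely writing down the number of possible wiring states would take on the order of 10^14 digits.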

As far as there being infinitely many concepts, "flying car" isn't terribly more complicated than "car" and "flying." Even if something in the far future is given a name other than "car," we can still grasp the concept of "transportation device," paired with any number of accessory concepts like "cup holder," "flies," "transforms," "teleports," and so on. Maybe it's closer to a "suit" than anything we would currently call a "car"; some sort of "jetpack" or other. I'd need an expansion on "concept" before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I'm aware of indicates that things like conceptual language are uncomputable, or give rise to paradoxes, or some other such problem that would make "infinite" simply be inapplicable or nonsensical.
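One way to sharpen the countability question raised above, under the assumption (not stated in the thread) that a concept must be expressible as a finite string over some finite alphabet $\Sigma$:

```latex
\Sigma^* \;=\; \bigcup_{n=0}^{\infty} \Sigma^n,
\qquad
\lvert \Sigma^n \rvert \;=\; \lvert \Sigma \rvert^{\,n} \;<\; \infty
```

That is, the set of all finite strings is a countable union of finite sets, hence countably infinite. So under that (debatable) definition, language-expressible concept-space is at most countably infinite, not uncountable.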

Comment author: Creutzer 27 October 2013 11:37:57AM *  3 points [-]

A habit I find my mind practicing incredibly often is simulation of the worst case scenario. [...]

I'm not saying this is generally inadvisable, but it seems dangerous for some kinds of people because of a serious possible failure mode: by focusing on half-plausible worst-case scenarios, you will cause yourself to assign additional probability to them. Furthermore, they will sometimes come true, which will give you a feeling that you were right to imagine them, an impression of confirmation, which could lead to a problematic spiral. If you have any inclination towards social anxiety, practice with extreme caution!

Comment author: BaconServ 27 October 2013 07:13:52PM 0 points [-]

That's true. The process does rely on finding a solution to the worst-case scenario. If you're going to be crippled by fear or anxiety, it's probably a very bad practice to emulate.

Comment author: BaconServ 27 October 2013 07:52:29AM 0 points [-]

Christ is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.

I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.

Damn I hope nobody replies to my comments...

Comment author: Dagon 27 October 2013 07:32:14AM 2 points [-]

"I didn't need to read this" is probably close to what prompted my comment. Along with "and I suspect most readers also won't get much out of it."

I should have just said "this should have gone in Discussion first, then (if it was popular) been rewritten as a top-level post with a clearer summary". Since it's gotten a reasonable number of comments and upvotes, I think I was incorrect in my assessment that most readers would be like me.

Comment author: BaconServ 27 October 2013 07:45:19AM 0 points [-]

Thank you. I no longer suspect you of being mind-killed by "politics is the mind-killer." Retracted.

Maybe I'm being too hasty trying to pinpoint people being mind-killed here, but it's hard to ignore that it's happening. I think I probably need to take my own advice right about now if I'm trying to justify my jumping to conclusions with statements like, "It's hard to ignore that it's happening."

I was planning to make a top-level comment here to the effect of, "INB4obvious mind-kill," but I think I just realized why the thoughts that thought that up were flawed from a basic level. Still, I think someone should point out that the comments here are barely touching the content of this article, which is odd for LessWrong.

Comment author: Dagon 27 October 2013 07:24:47AM 0 points [-]

IMO, that's not helpful advice. It provides very few tools for diagnosing when you're overreacting, and no techniques for actually implementing this refusal.

More importantly, it ignores the fact that you need mutual knowledge, not just calm: you AND ALL READERS must interpret this as only a value-free fact estimate, and not the overwhelmingly more common cluster of topics that includes how to act on it.

Comment author: BaconServ 27 October 2013 07:35:19AM *  1 point [-]

We can only go a step at a time. The other recent post about politics in Discussion was rife with obvious mind-kill. I'm seeing this thread filling up with it too. I'd advocate downvoting of obvious mind-kill, but it's probably not very obvious at all and would just result in mind-killed people voting politically without giving the slightest measure of useful feedback. I'm really at a loss for how to get over the mind-kill of politics and the highly paired autocontrarian mind-kill of "politics is the mind-killer" other than just telling people to shut the fuck up, stop reading comments, stop voting, go lie down, and shut the fuck up.

Comment author: Dagon 27 October 2013 07:12:31AM 0 points [-]

Fair enough - it's not all that long if it was necessary for a novel or interesting point. It's too long for something relatively simple that I already have in my toolbox, and there was no way to figure out whether that's all it was without reading the whole thing.

Comment author: BaconServ 27 October 2013 07:18:39AM *  0 points [-]

So because you already have the tool, nobody else needs to be told about it? I feel like I'm strawmanning here, but I'm not sure what your point is if not, "I didn't need to read this."

Comment author: Douglas_Knight 27 October 2013 12:49:50AM -1 points [-]

The fact that the author puts a piece in main, or that the community votes it highly, or that the administrators do not remove it from main, is only very weak evidence that I want to read it.

Comment author: BaconServ 27 October 2013 07:15:21AM *  0 points [-]

Do you have an actual complaint here, or are you disagreeing for the sake of disagreeing?

Because it sounds a damn lot like you're upset about something but know better than to say what you actually think, so you're opting to make sophomoric objections instead.

Comment author: Jack 27 October 2013 05:22:14AM *  7 points [-]

I'm just trying to encourage you to make your contributions moderately interesting. I don't really care how special you think you are.

Beliefs about strong AI are pretty qualitatively similar to religious ideas of god, up to and including "Works in mysterious ways that we can't hope to fathom."

Wow, what an interesting perspective. Never heard that before.

Comment author: BaconServ 27 October 2013 06:30:34AM -1 points [-]

I don't really care how special you think you are.

See, that's the kind of stance I can appreciate. Straight to the point without any wasted energy. That's not the majority response LessWrong gives, though. If people really wanted me to post about this as the upvotes on the posts urging me to post about this would suggest, why is each and every one of my posts getting downvoted? How am I supposed to actually do what people are suggesting when they are actively preventing me from doing so?

...Or is the average voter simply not cognizant enough to realize this...?

Worst effect of having sub-zero karma? Having to wait ten minutes between comments.

Wow, what an interesting perspective. Never heard that before.

Not sure if sarcasm or...
