But why do you want it in the first place?
Emotionally -- for the feeling that something new and great is happening here, and I can see it growing.
Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.
Okay, what exactly are the "great things" I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?
When Eliezer was writing the Sequences, merely the fact that "there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom" seemed like a huge improvement of the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact other than providing people a nice place to chat? I hope so.
Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside of existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way. I mean those concepts that you can use in thinking about probability or how the human brain works.
Internet vs real life -- things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy -- if it isn't documented, it didn't happen -- is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.
By the way, if you are unhappy about the "decline" of LW because it will make a worse impression on new people you would like to introduce to LW culture -- point them towards the book instead.
Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.
Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice for myself, for when I have more free time.)
People working on friendly AI probably assume that the odds of inventing a friendly AI are higher than the odds of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end without significant technological progress (due to reasons like nuclear war, climate change, and societal collapse), so we should give it at least a try?
It's extremely hard to ban the research worldwide, and then it's extremely hard to enforce such a ban.
Firstly, you'll have to convince all the world's governments (btw, there are >200) to pass such laws.
Then, you'll likely have all powerful nations doing the research secretly, because it promises powerful weaponry or other ways to acquire power; or simply out of fear that some other government will do it first.
And even if you somehow managed to pass the law worldwide, and stopped governments from doing research secretly, how would you stop individual researchers?
Humanity hasn't prevented the use of nuclear bombs, and has barely prevented a full-blown nuclear war; and that is despite nuclear bombs requiring national-level industry to produce and being available to only a few countries. How can we hope to ban something that can be researched and launched from your basement?