This is tangential, but "More Safe" is a pretty silly name, since in English one would normally say "safer". That, however, would be a terrible name.
I suggest the more memorable, more sensational, more badass-sounding name "Less Risky".
longer survival of an individual
well...
or a small group.
uh-oh
There are several small organizations that were created by one person and are limited to a small website that nobody reads... So the efforts of different researchers are scattered and lost in the noise.
In my view, collective intelligence as an instrument should consist of
...holding off on proposing solutions!
Now I am going to start a sequence of posts about the main problems of global risk prevention, and I invite everyone to post about global risks under the 'moresafe' tag.
I think starting off with a discussion post, as you did here, was a good idea, to see what people already think and know, and so on.
Lifeboat Foundation ... The popular blog "extinction protocol"
I suggest contacting these groups and trying to get them, as groups or as members, to join in, provided they will accept terms that you also find acceptable. If they're not willing to accept a strict moderation policy, for example, you're probably better off without them.
Generally, a dedicated, cross-linking x-risk forum, a literature list, and perhaps also a wiki seem to me like useful things that don't already exist.
There are several small organizations that were created by one person and are limited to a small website that nobody reads.
It seems like a good idea to write a discussion post gathering the names and website links of such groups (or people), as well as of more well-known groups, aiming at a comprehensive list. In addition to being useful in itself, such a list could provide a starting point for people to contact about your ideas for a forum and a wiki.
- An open, complete library of all literature on existential risks.
Similarly, it seems like a comprehensive bibliography of X-risk literature would be a useful resource. Seth Baum's GCR bibliography might be a good starting point.
Even discussing such risks could draw terrorists' attention to areas they might not otherwise have looked at, even if no specific details are given.
Terrorists do not, in general, seek to destroy humanity, so this isn't a substantial worry. Outside of Tom Clancy books, you don't see terrorists trying to start nuclear wars or spread a virus that kills off the vast majority of the human population.
Yet, at least. Hypothetical example: I wonder if something like the Voluntary Human Extinction Movement will eventually switch out the "Voluntary" in favor of "Mandatory". But that's speculative, and you are right empirically.
I am writing to propose an online discussion of global risk within the discussion section of Less Wrong. We might call this discussion "More Safe". In the future it could become a site where anyone interested could discuss global risks, and possibly aggregate all existing information about them. The idea comes from discussions I had at the Singularity Summit with Anna Solomon, Seth Baum, and others.
I propose labeling such discussions "More Safe". Less wrong means more safe. Fewer mistakes means fewer catastrophes.
At Seth's suggestion, we should be careful to follow safety guidelines for such discussions. For example, no technical details should be posted online about topics that could be used by potential terrorists, especially in biology. The point of the discussion is to try to reduce risk, not to have open discussion of risky ideas.
Here are some further thoughts on my idea:
Intelligence appeared as an instrument of adaptation, one which led to longer survival of an individual or a small group. Unfortunately, it did not evolve as an instrument for the survival of technological civilizations. So we have to update our intelligence somehow. One way to do this is to reduce personal cognitive biases.
Another way is to make our intelligence collective. Collective intelligence is more effective at finding errors and equilibria; democracy and free markets are examples. Several people and organizations have dedicated themselves to preventing existential risks.
But there is no place that is accepted as the main discussion venue for existential risks.
The Lifeboat Foundation has a mailing list and a blog, but its themes are not strictly about existential risks (there is a lot about interstellar travel), and no open forum exists.
The Future of Humanity Institute has an excellent collection of articles and a book, but it is not a place where people can meet online.
Less Wrong is not specifically dedicated to existential risks, and many risks are outside its main theme (nuclear, climate, and so on).
The Immortality Institute has a subforum about global risks, but it is not a professional discussion venue.
J. Hughes has a mailing list on existential risks [x-risk].
The popular blog "extinction protocol" is full of fearmongering and focuses mostly on natural disasters like earthquakes and floods.
There are several small organizations that were created by one person and are limited to a small website that nobody reads.
So the efforts of different researchers are scattered and lost in the noise.
In my view, collective intelligence as an instrument should consist of the following parts:
1. An open-structured forum in which everyone can post, but with a strict moderation policy that prevents general discussion of Obama, poverty, and other themes that are interesting but not related to existential risks. It should have several levels of access and several levels of proof: strict science, hypotheses, and unprovable claims. I think such a forum should be all-inclusive, but Nibiru-style material should be moved to a separate section for debunking.
A good example of such an organization is the site flutrackers.com, which is about flu.
2. A wiki-based knowledge base.
3. A small and effective board of experts who really take responsibility for working on content (but without paid staff or fundraising problems). There should also be no head; otherwise it will not be an effective collective intelligence. All work on the site should be done by volunteers.
4. An open, complete library of all literature on existential risks.
5. The place should be friendly to other risk sites and should cross-link to interesting posts there.
In the future, the site would become a database of knowledge on global risks.
Now I am going to start a sequence of posts about the main problems of global risk prevention, and I invite everyone to post about global risks under the 'moresafe' tag.