All of TimoRiikonen's Comments + Replies

I hope the front page would utilize some graphics. See this image for AGI risks: http://lesswrong.com/lw/mid/agi_safety_solutions_map/

If there were similar graphics that you could click on, that would be great.

Actually, the number in Wikipedia https://en.wikipedia.org/wiki/Asteroid_mining is even larger than that: $20,000,000,000,000.

This amount is so large that I would expect metal prices to decrease substantially, but even if they did, the potential value is huge once someone finds a commercially viable way to extract the ore and, especially, to bring it back.

CCC
Hmmm, looking at the citation brings me to this page... It seems, as you point out, that the value entirely fails to take into account the severe drop in price caused by the vast amount of metal suddenly on the market...

Documentation root

This picture is absolutely great!

One more BIG addition and it will be what I have been dreaming of: a documentation root. To do: create a link from each box to a document on lesserwrong that describes it in more detail. I suggest adding a new icon for this purpose, such as an arrow.

Also, if you have referred to a third-party document, adding links to it would be great if it is public.

turchin
I can add links to the pdf of the map, and I did add some (they are in blue). There are also a lot of links in the Sotala article. But it would be great to add more links; if you could help me with it, I would appreciate it.

Multilevel Safe AGI

I didn't quite understand this one, so I am not sure whether the following reading is correct.

Suggested updates if I did understand it:

1) There is one Nanny (it may have multiple copies of itself, but basically only one evolution path)
2) AGIs must give the Nanny access to verify that they are, and remain, peaceful
3) Separate which rules apply to the Nanny and which apply to the other AGIs...
turchin
Multilevel Safe AGI is a way of integrating different solutions so that the combination becomes more stable. Basically, what you are suggesting falls under the topic "Use AI to create Safe AI". I may call it "Nanny Judge AI".