Will_Newsome comments on Holden Karnofsky's Singularity Institute Objection 1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(Though you'd likely want to make a weaker form of Thiel's argument if possible, since it hasn't been convincingly demonstrated that a scenario with a non-Friendly superintelligence is necessarily, or even likely, a scenario in which humans no longer exist. As a special case, some people are especially worried about "hell worlds"—if pushing for Friendliness increases the probability of hell worlds, as has sometimes been argued, then it's not clear that you should discount such possible futures. More generally, I have a heuristic that this "discount and then renormalize" approach to strategizing is not a good one; in my personal experience it's proven a bad idea to assume, even provisionally, that there are scenarios I can't affect.)
"Many acquire the serenity to accept what they cannot change, only to find the 'cannot change' is temporary and the serenity is permanent." — Steven Kaas
Would you mind sharing a concrete example?
The Forbidden Toy experiment is the classic. Searching Google Scholar for "forbidden toy" turns up more on the subject, with elaboration, alternative-hypothesis testing, and whatnot.
Thanks.
I couldn't immediately remember the experience that led me to believe it so strongly, but luckily the answer came to me in a dream. It turns out it's just personal stuff having to do with a past relationship that I cared a lot about. There are other concrete examples, but they probably don't affect my decision calculus nearly as much in practice. (Fun fact: I learned many of my rationality skillz during a few years in high school dating a really depressed girl.)