I occasionally experience this, but I've never assigned it strong positive or negative affect/valence. I'm high in openness to experience, so I just kinda thought it was an academically interesting phenomenon, and haven't thought much of it, much less lost any sleep over it. It's just interesting in the same way that this is interesting.
If anyone takes it too seriously, I recommend this approach: just clicking the xkcd link will help your brain more strongly associate the humorous and fascinating bits with the call of the void. Better yet, set up a Trigger-Action Plan so that you think of the xkcd any time you experience the call of the void.
I don't think that's a good description of the orthogonality thesis. An AI that optimizes for a single human value like purity could still produce huge problems.
Humans don't effectively self-modify to achieve specific objectives in the way an AGI could.
Why do you believe that?
Probably not, but it highlights the relevant (or at least related) portion. I suppose I could have been more precise by specifying terminal values, since things like paperclips are obviously instrumental values, at least for us.
Agreed, except in the trivial case where we can condition ourselves to have different emotional responses. That's substantially less dangerous, though.
I'm not sure I do, in the sense that I wouldn't assign the proposition >50% probability. However, I might put the odds at around 25% for a Reduced Impact AI architecture providing a useful amount of shortcuts.
That seems like decent odds of significantly boosting expected utility. If such an AI would be faster to develop by even just a couple of years, that could make the difference between winning and losing an AI arms race. Sure, it'd be at the cost of a utopia, but if it boosted the odds of success enough, it'd still have enough expected utility to compensate.
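To make that trade-off explicit, here's a rough sketch; the utilities and probabilities below are purely illustrative assumptions on my part, not numbers from the discussion above. Let $U_{\text{utopia}}$ be the value of a full win with an unconstrained aligned AI, $U_{\text{reduced}} < U_{\text{utopia}}$ the value of a good-but-not-utopian outcome from a Reduced Impact AI, and $p' > p$ the respective probabilities of winning the arms race. The reduced-impact route comes out ahead whenever

$$p' \, U_{\text{reduced}} \;>\; p \, U_{\text{utopia}},$$

e.g. with made-up numbers $U_{\text{utopia}} = 100$, $U_{\text{reduced}} = 60$, $p = 0.3$, $p' = 0.6$: $0.6 \times 60 = 36 > 0.3 \times 100 = 30$.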