The Open Thread posted at the beginning of the month has gotten really, really big, so I've gone ahead and made another one. Post your new discussions here!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Verifying a proof is quite a bit simpler than coming up with the proof in the first place.
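A minimal sketch of that asymmetry, using integer factorization as an analogy (not a claim about proof systems specifically): checking a proposed answer is a single multiplication, while finding it requires search.

```python
def verify_factorization(n: int, p: int, q: int) -> bool:
    """Cheap: one multiplication and a comparison."""
    return p > 1 and q > 1 and p * q == n

def find_factorization(n: int):
    """Expensive: trial division, which scales with sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

if __name__ == "__main__":
    n = 104_729 * 104_723                              # product of two primes
    print(verify_factorization(n, 104_729, 104_723))   # fast: True
    print(find_factorization(n))                       # slower: (104723, 104729)
```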
Verifying is hard. Specifying what an FAI is, well enough that you've even got a chance of having your Unspecified AI develop one, is a whole 'nother sort of challenge.
Are there convenient acronyms for differentiating between Uncaring AIs and AIs actively opposed to human interests?
I was assuming that xamdam's AGI would invent an FAI if people can adequately specify it and it's possible, or at least that it wouldn't be looking for ways to make things break.
There's some difference between Murphy's law and trying to make a deal with the devil. This doesn't mean I hav...