Haven't had one of these for a while. This thread is for questions or comments that you've felt silly about not knowing/understanding. Let's try to exchange info that seems obvious, knowing that due to the illusion of transparency it really isn't so obvious!
It could be that it's just impossible to build a safe FAI under the utilitarian framework and that all AGIs are UFAIs.
Otherwise, the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics, we have people with mainstream views, but we also have people who think that democracy is wrong. Such a diversity of ideas makes it difficult for all of LessWrong to be wrong.
Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality, most of the participants aren't signed up for cryonics.
Take a figure like Nassim Taleb. He's frequently quoted on LessWrong, so he's not really outside the LessWrong memeplex. But he's also a Christian.
There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don't take to their full conclusion.
It's a topic that's very difficult to talk about. Basically, you try out different ideas and look at the effects of those ideas in the real world. Mainly because of Quantified Self (QS) data, I delved into the system of Somato-Psychoeducation. The data I measured showed improvement in a health variable. That was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there's a lot going on that I can't put into good metrics.
There's, however, no way to explain the framework in an article. Most people who read the introductory book don't get the point until they've spent years experiencing the system from the inside.
It's in the very nature of things that are really outside the memeplex that they're not easily expressible by ideas inside the memeplex in a way that won't be misunderstood.
That's not the LW-memeplex being wrong; that's just a LW-meme which is slightly more pessimistic than the more customary view that "the vast majority of all AGIs are unfriendly, but we might be able to make this work." I don't think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.
MIRI-LW being plausibly wrong about AI friendliness is more like, "Actually, all the fears abo...