Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet I. J. Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either raising the profile of the issue is not associated with EY/MIRI, or he is considered too low-status to mention publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.
See also this old post where Robin Hanson basically predicted that this would happen.
...The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.
But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important…
Previous Open Thread
You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.