lukeprog comments on Intelligence explosion in organizations, or why I'm not worried about the singularity - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Yes" to (a), "no" to (b) and (c).
We can definitely make progress on Friendliness without superintelligent optimizers (see here), but we can't make some non-foomy process (say, a corporation) Friendly in order to test our theories of Friendliness.
OK. I am currently diagnosing the source of our disagreement as my being more agnostic than you about which AI architectures might succeed. I am willing to consider the kinds of minds that resemble modern messy non-foomy optimizers (e.g. communities of competing/interacting agents) as promising. That is, "bazaar minds," not just "cathedral minds." Given this agnosticism, I see value in "straight science" that works out how to arrange possibly stupid/corrupt/evil agents into useful configurations that are not stupid/corrupt/evil.