timtyler comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: komponisto 03 September 2010 11:38:35PM 12 points

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It's true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it's even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality -- so there's nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).

Comment author: timtyler 04 September 2010 11:56:28PM 0 points

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.

What about the strategy of "refining the art of human rationality" by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn't that count as "refining"?