jacob_cannell comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM




Comment author: komponisto 03 September 2010 11:38:35PM *  12 points [-]

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It's true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it's even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality -- so there's nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).

Comment author: jacob_cannell 03 September 2010 11:43:15PM 4 points [-]

Point well taken.

I thought it was an interesting thought experiment, and it relates to That Alien Message -- not a "this is how we should do FAI."

But if I ever get positive karma again, at least now I know the unwritten rules.

Comment author: Mitchell_Porter 04 September 2010 02:49:31AM 3 points [-]

if I ever get positive karma again

If you stick around, you will. I have a -15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)