komponisto comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM


Comment author: komponisto 03 September 2010 11:38:35PM *  12 points [-]

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It's true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it's even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality -- so there's nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).

Comment author: jacob_cannell 03 September 2010 11:43:15PM 4 points [-]

Point well taken.

I thought it was an interesting thought experiment that relates to That Alien Message, not a "this is how we should do FAI".

But if I ever get positive karma again, at least now I know the unwritten rules.

Comment author: Mitchell_Porter 04 September 2010 02:49:31AM 3 points [-]

if I ever get positive karma again

If you stick around, you will. I have a -15 top-level post on my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)

Comment author: nhamann 04 September 2010 04:27:49PM *  2 points [-]

Nevertheless, the fact remains that posts like this really aren't, strictly speaking, on-topic for this blog.

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

The alternative is that LWers who want to discuss "off-topic" issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.

(A good solution might be to divide LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion.)

Comment author: wnoise 04 September 2010 04:57:12PM 2 points [-]

While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between many different topics, and either they stay on one side (vitiating the entire reason for a split), or people yell and scream to get them moved, which is a huge pain in the ass and makes it much harder to have these conversations.

Comment author: RichardKennaway 04 September 2010 05:08:40PM 0 points [-]

I realize that it says "a community blog devoted to refining the art of human rationality" at the top of every page here, but it often seems that people here are interested in "a community blog for topics which people who are devoted to refining the art of human rationality are interested in," which is not really in conflict at all with (what I presume is) LW's mission of fostering the growth of a rationality community.

I've seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.

A certain amount of that sort of thing is OK, but too much of it loses the focus, the reason for the conversational venue to exist. Given that there are already thriving forums such as AGI and SL4, discussing their topics here is out of place unless there is some specific relevance to rationality. As a rule of thumb, I suggest that off-topic discussions be confined to the Open Threads.

If there's the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.

Comment author: Pavitra 04 September 2010 05:13:10PM -1 points [-]

(A good solution might be to divide LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual ("off-topic") discussion.)

Better yet, we could call them Overcoming Bias and Less Wrong, respectively.

Comment author: timtyler 04 September 2010 11:56:28PM 0 points [-]

Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.

What about the strategy of "refining the art of human rationality" by having intelligent machines preprocess our sensory inputs and postprocess our motor outputs? Or doesn't that count as "refining"?