
Comment author: figor888 17 September 2014 03:32:29AM 1 point [-]

I certainly believe Artificial Intelligence can and will perform many mundane jobs that are nothing more than mindless repetition, and even in some instances create art. That said, what about world leaders, or positions that require making decisions that affect segments of the population in unfair ways, such as storage of nuclear waste, transportation systems, etc.?

To me, the answer is obvious: only a fool would trust an A.I. to make such high-level decisions. Without empathy, the A.I. could never provide a believable decision that any sane person should trust, no matter how many variables are used for its cost-benefit analysis.

Comment author: ChrisHallquist 17 September 2014 11:30:07AM 1 point [-]

Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.

Comment author: John_Maxwell_IV 27 August 2014 12:32:22AM *  4 points [-]

I hope the forum's moderators will take care to squash unproductive and divisive conversations about race, gender, social justice, etc., which seem to have been invading and hindering nearby communities like the atheism/secularism world and the rationality world.

To play devil's advocate: Will MacAskill reported that this post of his criticizing the popular ice bucket challenge got lots of attention for the EA movement. Scott Alexander reports that his posts on social justice bring lots of hits to his blog. So it seems plausible to me that a well-reasoned, balanced post that made an important and novel point on a controversial topic could be valuable for attracting attention. Remember that this new EA forum will not have been seeded with content and a community quite the way LW was. Also, there are lots of successful group blogs (Huffington Post, Bleacher Report, Seeking Alpha, Daily Kos, etc.) that seem to have a philosophy of having members post all they want and then filtering the good stuff out of that.

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor. The important thing is to make it easy for users to find the good stuff, and suppressing the bad stuff is only one (rather blunt) way of accomplishing this. Ultimately the best way to help users find quality stuff depends on your forum software. It might be interesting to try to do a study of successful and unsuccessful subreddits to see what successful intellectual subreddits do that unsuccessful ones don't, given that the LW userbase and forum software are a bit similar to those of reddit.

(It's possible that strategies that work for HuffPo et al. will not transfer well at all to a blog focused more on serious intellectual discussion. So it might be useful to decide whether the new EA forum is more about promoting EA itself or promoting serious intellectual discussion of EA topics.)

(Another caveat: I've talked to people who've ditched LW because they get seriously annoyed and it ruins their day when they see a comment that they regard as insufficiently rational. I'm not like this and I'm not sure how many people are, but these people seem likely to be worth keeping around and catering to the interests of.)

Comment author: ChrisHallquist 27 August 2014 05:29:37AM 1 point [-]

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.

Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.

Comment author: XiXiDu 09 July 2014 06:07:46PM *  2 points [-]

I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:

“4. Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)”

Respondents had to select a probability for each option (in 1% increments). The sum of the selected probabilities was displayed: in green if it equaled 100%, otherwise in red.

The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”

Question 3 was about takeoff speeds.

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, this doesn't mean that they expected it to happen for the same reasons that MIRI expects it to happen, or that they believe friendly AI research to be a viable strategy.

Since I already believed that humans could cause an existential catastrophe by means of AI, just not for the reasons MIRI expects it to happen (which I consider very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.

Comment author: ChrisHallquist 10 July 2014 01:23:46AM 1 point [-]

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.

Comment author: ChrisHallquist 08 July 2014 04:01:43PM *  4 points [-]

I like the idea of this fanfic, but it seems like it could have been executed much better.

EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."

Comment author: TheAncientGeek 24 April 2014 09:55:41AM 3 points [-]

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Comment author: ChrisHallquist 05 July 2014 11:41:11PM 1 point [-]

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Comment author: IlyaShpitser 03 July 2014 06:38:14PM 8 points [-]

Have you guys given any thought to doing pagerankish stuff with karma?

Comment author: ChrisHallquist 05 July 2014 06:35:29AM *  1 point [-]

Have you guys given any thought to doing pagerankish stuff with karma?

Can you elaborate? I'm guessing you mean that people with more karma have their votes count for more, but it isn't obvious how you'd do that in this context.
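For concreteness, here is a minimal sketch of one way "pagerankish" karma could be cashed out: treat each upvote as a link and weight it by the voter's current karma, iterating to a fixed point the way PageRank does. The vote structure, damping factor, and iteration count below are made-up assumptions for illustration only, not anything LessWrong actually implements.

```python
# Toy "pagerankish" karma: a user's karma is the damped sum of incoming
# upvotes, each weighted by the *current* karma of the voter, iterated to
# a fixed point -- the same recursion PageRank uses for links.
# (Hypothetical data structure; purely illustrative.)

def pagerankish_karma(upvotes, damping=0.85, iterations=50):
    """upvotes: dict mapping voter -> list of users they upvoted."""
    users = set(upvotes) | {u for targets in upvotes.values() for u in targets}
    karma = {u: 1.0 for u in users}              # start everyone equal
    for _ in range(iterations):
        new = {u: (1 - damping) for u in users}  # baseline karma for everyone
        for voter, targets in upvotes.items():
            if not targets:
                continue
            share = karma[voter] / len(targets)  # split the voter's weight
            for target in targets:
                new[target] += damping * share
        karma = new
    return karma

# Example: C is upvoted by both A and B, so C accumulates the most karma,
# which in turn makes C's upvote of A count for more on later iterations.
votes = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerankish_karma(votes))
```

Downvotes, sockpuppets, and brand-new accounts would all need separate handling, so this is best read as a toy model of the idea rather than a proposal.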

Comment author: David_Gerard 03 July 2014 09:26:54PM *  4 points [-]

Wow, I picked the culprit.

(I have no signed prior statement to prove this, but I certainly guessed it.)

Edit: And there are my points going up already!

Comment author: ChrisHallquist 04 July 2014 03:59:15PM *  3 points [-]

Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.

Comment author: shminux 03 July 2014 06:00:39PM *  24 points [-]

I seem to be the lone dissenter here, but I am unhappy about the ban. Not that it is unjustified; it definitely is justified. However, it does not address the main issue (until jackk fiddles with karma): preventing Eugine from mass downvoting. So this is mainly retribution rather than remediation, which seems anti-rational to me, however emotionally satisfying it may be to me as one of the victims.

Imagine for a moment that Eugine did not engage in mass downvoting. He would be a valuable regular on this site. I recall dozens of insightful comments he made (and dozens of poor ones, of course, but who am I to point fingers), and I only stopped engaging him in the comments after his mass-downvoting habits were brought to light for the first time. So, I would rather see him exposed and dekarmified, but allowed to participate.

TL;DR: banning is the wrong decision; he should instead have been exposed and stripped of the ability to downvote. Optionally, all the votes he ever cast could have been reversed, unless that is hard to do.

EDIT: apparently not the lone dissenter, just the first to speak up.

Comment author: ChrisHallquist 03 July 2014 08:28:38PM 9 points [-]

I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.

Comment author: paper-machine 02 July 2014 04:22:10PM 1 point [-]

Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right?

Why don't you leave armchair Internet psychoanalysis to experts?

Comment author: ChrisHallquist 03 July 2014 04:13:33AM 9 points [-]

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure who writes and speaks about controversial subjects, and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

Comment author: shminux 02 July 2014 03:01:33AM *  8 points [-]

is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving

I don't believe that it's more than a day or two of work for a developer. The SQL queries one would run are pretty simple, as we previously discussed, and as Jack from Trike confirmed. The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
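To illustrate why this looks cheap: a single aggregation query over the vote table would surface the worst offenders. The schema and threshold below are made up for the sake of the example (this is not LessWrong's actual database layout), but the shape of the query is about this simple.

```python
# Hypothetical sketch: find accounts that downvote one particular author
# en masse. The `votes` table and its columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (voter TEXT, target_author TEXT, direction INTEGER)")
# A few fake rows: 'sockpuppet' downvotes 'victim' over and over.
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?)",
    [("sockpuppet", "victim", -1)] * 60 + [("normal_user", "victim", 1)] * 5,
)

suspects = conn.execute(
    """
    SELECT voter, target_author, COUNT(*) AS downvotes
    FROM votes
    WHERE direction = -1
    GROUP BY voter, target_author
    HAVING COUNT(*) > 50      -- arbitrary threshold for "mass" downvoting
    ORDER BY downvotes DESC
    """
).fetchall()

for voter, target, n in suspects:
    print(f"{voter} has downvoted {n} of {target}'s comments")
```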

My guess is that he cares not nearly as much about LW in general now as he used to, as most of the real work is done at MIRI behind the scenes, and this forum is mostly noise for him these days. He drops by occasionally as a distraction from important stuff, but that's it.

Comment author: ChrisHallquist 02 July 2014 05:31:11AM 7 points [-]

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I've posted on LessWrong in well over a month.

I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.
