
Comment author: XiXiDu 09 July 2014 06:07:46PM *  2 points [-]

I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:

“4 Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)”

Respondents had to select a probability for each option (in 1% increments). The sum of the selections was displayed: in green if it equaled 100%, otherwise in red.

The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”
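To make that constraint concrete, here is a minimal sketch (purely hypothetical, not the survey's actual code) of the sum check described above: five whole-number percentages, flagged green only when they total exactly 100.

```python
# Minimal sketch of the sum check described above (hypothetical; not the
# survey's actual code). Each of the five options gets a whole-number
# percentage, and the total is shown in green only if it is exactly 100.

OPTIONS = [
    "Extremely good",
    "On balance good",
    "More or less neutral",
    "On balance bad",
    "Extremely bad (existential catastrophe)",
]

def sum_indicator(percentages):
    """Return 'green' if the five integer percentages sum to 100, else 'red'."""
    if len(percentages) != len(OPTIONS):
        raise ValueError("one value per option is required")
    if any(not isinstance(p, int) or p < 0 or p > 100 for p in percentages):
        raise ValueError("each value must be a whole percentage from 0 to 100")
    return "green" if sum(percentages) == 100 else "red"

print(sum_indicator([20, 40, 20, 15, 5]))   # green
print(sum_indicator([20, 40, 20, 15, 10]))  # red
```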

Question 3 was about takeoff speeds.

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, this doesn't mean that they expected it to happen for the same reasons that MIRI expects it to happen, or that they believe friendly AI research to be a viable strategy.

Since I already believed that humans could cause an existential catastrophe by means of AI, though not for the reasons MIRI expects (which I consider very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.

Comment author: ChrisHallquist 10 July 2014 01:23:46AM 1 point [-]

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.

Comment author: ChrisHallquist 08 July 2014 04:01:43PM *  4 points [-]

I like the idea of this fanfic, but it seems like it could have been executed much better.

EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."

Comment author: TheAncientGeek 24 April 2014 09:55:41AM 3 points [-]

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Comment author: ChrisHallquist 05 July 2014 11:41:11PM 1 point [-]

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Comment author: IlyaShpitser 03 July 2014 06:38:14PM 8 points [-]

Have you guys given any thought to doing pagerankish stuff with karma?

Comment author: ChrisHallquist 05 July 2014 06:35:29AM *  1 point [-]

Have you guys given any thought to doing pagerankish stuff with karma?

Can you elaborate more? I'm guessing you mean people with more karma --> their votes count more, but it isn't obvious how you do that in this context.
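For what it's worth, one way "more karma means your votes count more" could be made concrete is a PageRank-style iteration over the upvote graph, so a vote is worth more when it comes from someone whose own comments are upvoted by highly weighted users. The sketch below is only illustrative, with made-up data and parameters; it is not anything Trike has built.

```python
# Illustrative sketch only (hypothetical data and parameters): a
# PageRank-style weighting of users by the upvotes they receive, so that
# votes cast by higher-weight users count for more.

def karma_rank(upvotes, damping=0.85, iterations=50):
    """upvotes maps each voter to the set of users they have upvoted.
    Returns a weight per user; weight flows along upvote edges, and users
    who cast no upvotes simply leak their share (kept simple on purpose)."""
    users = set(upvotes) | {u for targets in upvotes.values() for u in targets}
    weight = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for voter, targets in upvotes.items():
            if not targets:
                continue
            share = damping * weight[voter] / len(targets)
            for target in targets:
                new[target] += share
        weight = new
    return weight

# A vote's effective weight could then be the voter's rank instead of 1.
votes = {"alice": {"bob", "carol"}, "bob": {"carol"}, "carol": {"alice"}}
print(karma_rank(votes))
```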

Comment author: David_Gerard 03 July 2014 09:26:54PM *  4 points [-]

Wow, I picked the culprit.

(I have no signed prior statement to prove this, but I certainly guessed it.)

Edit: And there's my points going up already!

Comment author: ChrisHallquist 04 July 2014 03:59:15PM *  3 points [-]

Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.

Comment author: shminux 03 July 2014 06:00:39PM *  24 points [-]

I seem to be the lone dissenter here, but I am unhappy about the ban. Not that it is unjustified; it definitely is justified. However, it does not address the main issue (until jackk fiddles with karma): preventing Eugine from mass downvoting. So this is mainly retribution rather than remediation, which seems anti-rational to me, even if, as one of the victims, I find it emotionally satisfying.

Imagine for a moment that Eugine did not engage in mass downvoting. He would be a valuable regular on this site. I recall dozens of insightful comments he made (and dozens of poor ones, of course, but who am I to point fingers), and I only stopped engaging him in the comments after his mass-downvoting habits were brought to light for the first time. So, I would rather see him exposed and dekarmified, but allowed to participate.

TL;DR: banning is the wrong decision; he should have been exposed and stripped of the ability to downvote instead. Optionally, all the votes he ever cast could have been reversed, unless that's hard to do.

EDIT: apparently not the lone dissenter, just the first to speak up.

Comment author: ChrisHallquist 03 July 2014 08:28:38PM 9 points [-]

I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.

Comment author: paper-machine 02 July 2014 04:22:10PM 1 point [-]

Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right?

Why don't you leave armchair Internet psychoanalysis to experts?

Comment author: ChrisHallquist 03 July 2014 04:13:33AM 8 points [-]

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

Comment author: shminux 02 July 2014 03:01:33AM *  8 points [-]

is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving

I don't believe that it's more than a day or two of work for a developer. The SQL queries one would run are pretty simple, as we previously discussed, and as Jack from Trike confirmed. The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).
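As a rough illustration of the kind of aggregation such a query would do (the records, names, and thresholds below are invented, and this is not LessWrong's actual schema or code): group downvotes by voter and target author, then flag voters whose downvotes are heavily concentrated on a single person.

```python
# Invented example: flag voters whose downvotes are concentrated on one
# author, roughly what a simple GROUP BY over a votes table would surface.
# Names, records, and thresholds here are all made up.

from collections import Counter

def flag_mass_downvoters(downvotes, min_votes=30, min_share=0.8):
    """downvotes: iterable of (voter, target_author) pairs for downvotes.
    Flags voters with at least `min_votes` downvotes, of which at least
    `min_share` were aimed at a single author."""
    per_voter = Counter()
    per_pair = Counter()
    for voter, target in downvotes:
        per_voter[voter] += 1
        per_pair[(voter, target)] += 1
    flagged = []
    for (voter, target), n in per_pair.items():
        total = per_voter[voter]
        if total >= min_votes and n / total >= min_share:
            flagged.append((voter, target, n, total))
    return flagged

sample = ([("suspect", "author_x")] * 40 + [("suspect", "author_y")] * 2
          + [("other_user", "author_x")] * 3)
print(flag_mass_downvoters(sample))  # [('suspect', 'author_x', 40, 42)]
```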

My guess is that he cares not nearly as much about LW in general now as he used to, as most of the real work is done at MIRI behind the scenes, and this forum is mostly noise for him these days. He drops by occasionally as a distraction from important stuff, but that's it.

Comment author: ChrisHallquist 02 July 2014 05:31:11AM 7 points [-]

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I've posted on LessWrong in well over a month.

I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.

Comment author: BrienneStrohl 05 May 2014 04:03:47AM *  14 points [-]

I was not signaling. Making it a footnote instead of just editing it outright was signaling. Revering truth, and stating that I do so, was not.

Now that I've introspected some more, I notice that my inclination to prioritize the accuracy of information I attend to above its competing features comes from the slow accumulation of evidence that excellent practical epistemology is the strongest possible foundation for instrumental success. To be perfectly honest, deep down, my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

I have long been more viscerally motivated by things that are interesting or beautiful than by things that correspond to the territory. So it's not too surprising that toward the beginning of my rationality training, I went through a long period of being so enamored with a-veridical instrumental techniques that I double-thought myself into believing accuracy was not so great.

But I was wrong, you see. Having accurate beliefs is a ridiculously convergent incentive, so whatever my goal structure, it was only a matter of time before I'd recognize that. Every utility function that involves interaction with the territory--interaction of just about any kind!--benefits from a sound map. Even if "beauty" is a terminal value, "being viscerally motivated to increase your ability to make predictions that lead to greater beauty" increases your odds of success.

Recognizing only abstractly that map-territory correspondence is useful does not produce the same results. Cultivating a deep dedication to ensuring every motion precisely engages reality with unfailing authenticity prevents real-world mistakes that noting the utility of information, just sort of in passing, will miss.

For some people, dedication to epistemic rationality may most effectively manifest as excitement or simply diligence. For me, it is reverence. Reverence works in my psychology better than anything else. So I revere the truth. Not for the sake of the people watching me do so, but for the sake of accomplishing whatever it is I happen to want to accomplish.

"Being truth-seeking" does not mean "wanting to know ALL THE THINGS". It means exhibiting patters of thought and behavior that consistently increase calibration. I daresay that is, in fact, necessary for being well-calibrated.

Comment author: ChrisHallquist 06 May 2014 09:31:32PM -1 points [-]

...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?

Truth: It's Not That Great

33 ChrisHallquist 04 May 2014 10:07PM

Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

-Yvain, "Extreme Rationality: It's Not That Great"

The folks most vocal about loving "truth" are usually selling something. For preachers, demagogues, and salesmen of all sorts, the wilder their story, the more they go on about how they love truth...

The people who just want to know things because they need to make important decisions, in contrast, usually say little about their love of truth; they are too busy trying to figure stuff out.

-Robin Hanson, "Who Loves Truth Most?"

A couple weeks ago, Brienne made a post on Facebook that included this remark: "I've also gained a lot of reverence for the truth, in virtue of the centrality of truth-seeking to the fate of the galaxy." But then she edited the post to add a footnote to this sentence: "That was the justification my brain originally threw at me, but it doesn't actually quite feel true. There's something more directly responsible for the motivation that I haven't yet identified."

I saw this, and commented:

<puts rubber Robin Hanson mask on>

What we have here is a case of subcultural in-group signaling masquerading as something else. In this case, proclaiming how vitally important truth-seeking is is a mark of your subculture. In reality, the truth is sometimes really important, but sometimes it isn't.

</rubber Robin Hanson mask>

In spite of the distancing pseudo-HTML tags, I actually believe this. When I read some of the more extreme proclamations of the value of truth that float around the rationalist community, I suspect people are doing in-group signaling—or perhaps conflating their own idiosyncratic preferences with rationality. As a mild antidote to this, when you hear someone talking about the value of the truth, try seeing if the statement still makes sense if you replace "truth" with "information."

This standard gives many statements about the value of truth its stamp of approval. After all, information is pretty damn valuable. But statements like "truth seeking is central to the fate of the galaxy" look a bit suspicious. Is information-gathering central to the fate of the galaxy? You could argue that statement is kinda true if you squint at it right, but really it's too general. Surely it's not just any information that's central to shaping the fate of the galaxy, but information about specific subjects, and even then there are tradeoffs to make.

This is an example of why I suspect "effective altruism" may be better branding for a movement than "rationalism." The "rationalism" branding encourages the meme that truth-seeking is great, so we should do lots and lots of it, because truth is so great. The effective altruism movement, on the other hand, recognizes that while gathering information about the effectiveness of various interventions is important, there are tradeoffs to be made between spending time and money on gathering information vs. just doing whatever currently seems likely to have the greatest direct impact. Recognize information is valuable, but avoid analysis paralysis.
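As a toy illustration of that tradeoff, with entirely made-up numbers: spending part of a budget on research only pays off if the research is expected to find a sufficiently better option than the current best guess.

```python
# Toy expected-value comparison with invented numbers, illustrating the
# information-gathering tradeoff above. Not a real cost-effectiveness model.

def act_now(budget, value_per_dollar):
    """Spend the whole budget on the current best-guess intervention."""
    return budget * value_per_dollar

def research_then_act(budget, research_cost, possible_values, probabilities):
    """Pay research_cost, learn which intervention is best, fund that one.
    possible_values: value per dollar of the best option under each finding;
    probabilities: chance of each finding."""
    remaining = budget - research_cost
    expected_best = sum(p * v for p, v in zip(probabilities, possible_values))
    return remaining * expected_best

budget = 100_000
print(act_now(budget, value_per_dollar=1.0))                      # 100000.0
print(research_then_act(budget, 10_000, [1.5, 1.0], [0.6, 0.4]))  # 117000.0
print(research_then_act(budget, 40_000, [1.5, 1.0], [0.6, 0.4]))  # 78000.0, research not worth it
```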

Or, consider statements like:

  • Some truths don't matter much.
  • People often have legitimate reasons for not wanting others to have certain truths.
  • The value of truth often has to be weighed against other goals.

Do these statements sound heretical to you? But what about:

  • Information can be perfectly accurate and also worthless. 
  • People often have legitimate reasons for not wanting other people to gain access to their private information. 
  • A desire for more information often has to be weighed against other goals. 

I struggled to write the first set of statements, though I think they're right on reflection. Why do they sound so much worse than the second set? Because the word "truth" carries powerful emotional connotations that go beyond its literal meaning. This isn't just true for rationalists—there's a reason religions have sayings like, "God is Truth" or "I am the way, the truth, and the life." "God is Facts" or "God is Information" don't work so well.

There's something about "truth"—how it readily acts as an applause light, a sacred value which must not be traded off against anything else. As I type that, a little voice in me protests "but truth really is sacred"... but once we can't say there's some limit to how great truth is, hello affective death spiral.

Consider another quote, from Steven Kaas, that I see frequently referenced on LessWrong: "Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever." Interestingly, the original blog post included a caveat—"we may have to count everyday social interactions as a partial exception"—which I never see quoted. That aside, the quote has always bugged me. I've never had my tires slashed, but I imagine it ruins your whole day. On the other hand, having less than maximally accurate beliefs about something could ruin your whole day, but it could very easily not, depending on the topic.

Furthermore, sometimes sharing certain information doesn't just have little benefit, it can have substantial costs, or at least substantial risks. It would seriously trivialize Nazi Germany's crimes to compare it to the current US government, but I don't think that means we have to promote maximally accurate beliefs about ourselves to the folks at the NSA. Or, when negotiating over the price of something, are you required to promote maximally accurate beliefs about the highest price you'd be willing to pay, even if the other party isn't willing to reciprocate and may respond by demanding that price?

Private information is usually considered private precisely because it has limited benefit to most people, but sharing it could significantly harm the person whose private information it is. A sensible ethic around information needs to be able to deal with issues like that. It needs to be able to deal with questions like: is this information that is in the public interest to know? And is there a power imbalance involved? My rule of thumb is: secrets kept by the powerful deserve extra scrutiny, but so conversely do their attempts to gather other people's private information. 

"Corrupted hardware"-type arguments can suggest you should doubt your own justifications for deceiving others. But parallel arguments suggest you should doubt your own justifications for feeling entitled to information others might have legitimate reasons for keeping private. Arguments like, "well truth is supremely valuable," "it's extremely important for me to have accurate beliefs," or "I'm highly rational so people should trust me" just don't cut it.

Finally, being rational in the sense of being well-calibrated doesn't necessarily require making truth-seeking a major priority. Using the evidence you have well doesn't necessarily mean gathering lots of new evidence. Often, the alternative to knowing the truth is not believing falsehood, but admitting you don't know and living with the uncertainty.
