Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Ixiel 26 May 2016 10:05:30PM 0 points [-]

Ok, I have to hold my breath as I ask this, and I'm really not trying to poke any bears, but I trust this community's ability to answer objectively more than other places I can ask, including more than my weak weak Google fu, given all the noise:

Is Sanders actually more than let's say 25% likely to get the nod?

I had written him off early, but I don't get to vote in that primary so I only just started paying attention. I'm probably voting Libertarian anyway, but Trump scares me almost as much as Clinton, so I'd sleep a little better in the meantime if it turns out I was wrong.

Thanks in advance. If this violates the Politics Commandment I accept the thumbs, but I'd love to also hear an answer I can trust.

Comment author: knb 28 May 2016 12:52:12AM 2 points [-]

I'd estimate Sanders' chances as less than 10%, maybe a bit more than 5%. He would need a mass defection of superdelegates at this point, and it's possible they would be directed to jump en masse to someone else (like Biden) even if the DNC decides to dump Clinton.

Comment author: ShardPhoenix 14 May 2016 01:58:47PM *  5 points [-]

Asian (East Asian): -0.600% 80 3.300%
Asian (Indian subcontinent): +0.300% 60 2.500%

Something I've been curious about for a while is the low proportion of Asian and Indian people in the LWsphere compared to STEM communities in general. Any ideas?

Comment author: knb 15 May 2016 03:14:59AM 2 points [-]

I'm not sure that "STEM communities" is a valid reference group for LW.

Comment author: morganism 10 May 2016 08:21:22PM -2 points [-]

For the media thread, Aeon magazine philosophical article

https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible

Comment author: knb 11 May 2016 01:49:34AM 8 points [-]

For anyone curious about this link, I'll save you some time:

From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own.

It's that type of article.

Comment author: Arshuni 09 May 2016 06:11:29AM 3 points [-]

Which psychological findings have great practical implications, if they are indeed true?

Overjustification comes to mind, as an example.

On a related note: if it is true, does that suggest that, as far as we take the diminishing utility of money for granted, by using extrinsic rewards, we are reducing the number of extreme performers? (in so far as we can't keep giving exponential rewards, and money/tokens/what have you motivates in proportion to their utility) I have seen it argued that if you are not doing well enough to expect a non-interrupted stream of extrinsic rewards, you probably shouldn't be doing that thing. Does that lose any validity in this context?

Still, it seems like whether it's true should have some implications.

A more certain finding seems to be the poor transfer of learning. It SEEMS like this SHOULD have implications for the education system.

What else would? (like, even if stereotype threat existed as a significant force, it seems far less clear to me how that finding could realistically impact any policies or our behaviors)

Comment author: knb 10 May 2016 03:02:45AM 0 points [-]

On a related note: if it is true, does that suggest that, as far as we take the diminishing utility of money for granted, by using extrinsic rewards, we are reducing the number of extreme performers? (in so far as we can't keep giving exponential rewards, and money/tokens/what have you motivates in proportion to their utility).

I think the positional qualities of money compensate for this somewhat. People still work hard because they want to keep ahead of their neighbor/coworker.

Comment author: Daniel_Burfoot 05 May 2016 03:56:46PM *  1 point [-]

Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated. Societies that are dynamic and competent in one area, such as physics research, will also be dynamic and competent in other areas, such as infrastructure and good governance.

What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world's best research and technology in the field of microbiology. Or we might observe that Indonesia had the best set of laws, courts, and legal knowledge. Such observations would falsify my hypothesis.

If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly. One obvious thing that could derail American innovation is catastrophic social turmoil.

Optimists could accept the civilizational competence correlation idea, but believe that US competence in areas like infotech is going to "pull up" our performance in other areas, at which we are presently failing abjectly.

Comment author: knb 05 May 2016 04:50:09PM 1 point [-]

Part of my worldview is that progress, innovation and competence in all areas of science, technology, and other aspects of civilization are correlated.

I'm sure they're correlated but not all that tightly.

What would the world look like if that hypothesis were false? Well, we could find a country that is not particularly competent overall, but was very competent and innovative in one specific civilizational subfield. As a random example, imagine it turned out that Egypt actually had the world's best research and technology in the field of microbiology.

I think there are some pretty good examples. The Soviets made great achievements in spaceflight and nuclear energy research in spite of having terrible economic and social policies. The Mayans had sophisticated astronomical calendars, but they also practiced human sacrifice and never invented the wheel.

If the theory is true, then the fact that the US still seems innovative in CS-related fields is probably a transient anomaly.

I doubt it, but even if true it doesn't save us, since plenty of other countries could develop AGI.

Comment author: halcyon 05 May 2016 12:40:35PM *  -2 points [-]

On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country". He also said it's impossible to tell with absolute certainty whether a Syrian was Christian or Muslim, so he'd have to assume they're all Muslims. This suggests that telling US officials that I'm a LW transhumanist might not convince them that I have no connection with ISIS. I'm not from Syria, but I have an Arabic name and my family is Muslim.

I've read Cory Doctorow's Little Brother, and this might be a generalization from fictional evidence, but I can't help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason? Should I drop everything and make a break for it before it's too late? Initially, many Germans didn't take Hitler's extremist rhetoric seriously either, right? (If I get deported in a civilized manner, well, no harm done to me as far as I'm concerned.)

I normally assume, as a rule of thumb, that politicians intend to fulfill all their promises. If a politician says he wants to invade Mars, that could be pure rhetoric, but I'd typically assume that he might try it in the worst case scenario. I have observed it is often the case that when we think other people are joking, they are in fact exaggerating their true desires and presenting them in an ironic/humorous light.

Comment author: knb 05 May 2016 03:03:02PM 6 points [-]

Seems like you're just falling for partisan media histrionics and conflating a lot of different things out of context.

On Fox News, Trump said that regarding Muslims in the US, he would do "unthinkable" things, "and certain things will be done that we never thought would happen in this country".

In context, Trump is giving a tough-sounding but vague and non-committal response to questions about whether there should be a digital database of Muslims in the country. He later partially walked this back, saying it was a leading question from a reporter and he meant we should have terrorism watch lists. Which obviously already exist.

I've read Cory Doctorow's Little Brother, and this might be a generalization from fictional evidence, but I can't help asking: As a foreign student in the US, how likely is Trump to have me tortured for no reason?

I'd say it's about as likely as you giving yourself a heart attack reading political outrage porn.

Comment author: Daniel_Burfoot 04 May 2016 02:02:56PM 8 points [-]

Though I enthusiastically endorse the concept of rationality, I often find myself coming to conclusions about Big Picture issues that are quite foreign to the standard LW conclusions. For example, I am not signed up for cryonics even though I accept the theoretical arguments in favor of it, and I am not worried about unfriendly AI even though I accept most of EY's arguments.

I think the main reason is that I am 10x more pessimistic about the health of human civilization than most other rationalists. I'm not a cryonicist because I don't think companies like Alcor can survive the long period of stagnation that humanity is headed towards. I don't worry about UFAI because I don't think our civilization has the capability to achieve AI. It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.

Now, I don't know whether my pessimism is more rational than others' optimism. LessWrong, and rationalists in general, probably have a blind spot relative to questions of civilizational inadequacy because those questions relate to political issues, and we don't talk about politics. Is there a way we can discuss civilizational issues without becoming mind-killed? Or do we simply have to accept that civilizational issues are going to create a large error bar of uncertainty around our predictions?

Comment author: knb 04 May 2016 09:46:52PM 5 points [-]

It's not that I think AI is spectacularly hard, I just don't think we can do Hard Things anymore.

I'm sympathetic to the idea that we can't do Hard Things, at least in the US and much of the rest of the West. Unfortunately progress in AI seems like the kind of Hard Thing that still is possible. Stagnation has hit atoms, not bits. There does seem to be a consensus that AI is not a stagnant field at all, but rather one that is consistently progressing.

Comment author: knb 02 May 2016 10:51:30AM *  1 point [-]

BBC News is running a story claiming that the creator of Bitcoin known as Satoshi Nakamoto is an Australian named Craig Wright.

In response to Positivity Thread :)
Comment author: knb 28 April 2016 11:04:44PM 1 point [-]

I found this to be a cheerful video, about people working on fusion. (It's a promo, so dark arts warning applies.)

Comment author: username2 28 April 2016 09:58:42AM 1 point [-]

I don't like this idea, but people, please do not downvote Daniel just because you disagree. Downvote thumb is not for disagreements, it's for comments that don't add anything to the discussion.

Comment author: knb 28 April 2016 10:55:38PM 1 point [-]

Downvote thumb is not for disagreements, it's for comments that don't add anything to the discussion.

Who says?
