
Comment author: TheAncientGeek 24 October 2014 03:31:19PM 11 points

"Social democrat" and "liberal" have been given almost identical descriptions. Don't know if that's deliberate.

Comment author: ChrisHallquist 25 October 2014 03:54:55AM 7 points

Duplicate comment, probably should be deleted.

Comment author: TheAncientGeek 24 October 2014 01:44:14PM 20 points

"Social democrat" and "liberal" have been given almost identical descriptions. Don't know if that's deliberate.

Comment author: ChrisHallquist 25 October 2014 03:53:57AM 1 point

Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.

Comment author: VAuroch 23 October 2014 05:53:57AM 29 points

Is Anti-Agathics a strict superset of Cryonics? That is to say, would someone being cryonically frozen, then restored, and then living for 1000 years from that date count as a success for the anti-agathics question?

Comment author: ChrisHallquist 25 October 2014 03:52:12AM 1 point

I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.
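A minimal sketch of the consistency constraint this implies, assuming cryonics success counts as a special case of anti-agathics success (the probabilities below are made-up placeholders, not actual survey answers):

```python
# Consistency check: if every cryonics success is also an anti-agathics success,
# then P(anti-agathics) must be at least P(cryonics).
p_cryonics = 0.10       # placeholder value, not a real survey answer
p_anti_agathics = 0.15  # placeholder value, not a real survey answer

assert p_anti_agathics >= p_cryonics, "inconsistent: the broader event was rated less probable"
```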

Comment author: ChrisHallquist 25 October 2014 03:45:59AM 31 points

Done, except for the digit ratio, because I do not have access to a photocopier or scanner.

Comment author: ChrisHallquist 25 October 2014 02:50:10AM 4 points

Liberal here; I think my major heresy is being pro-free trade.

Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.

You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today would be branded a socialist if a Democrat, and a RINO if a Republican.

(Friedman, among other things, supported a version of guaranteed basic income. To which today's GOP mainstream would probably say, "but if we do that, it will just make poor people even lazier!")

Political labels are weird.

Comment author: lmm 24 October 2014 06:38:35PM 15 points

You'd expect Silicon Valley working practices to be less optimal than those in mature industries, because, well, the industries aren't mature. The companies are often run by people with minimal management experience, and the companies themselves are too short-lived to develop the kind of institutional memory that would be able to determine whether such policies were good or bad. Heck, most of SV still follows interview practices that have been actively shown to be useless, to the extent that they've been abandoned by the company that originated them (Microsoft). Success is too random for these things to be noticeable; the truth is that in SV, being 50% less efficient probably has negligible effects on your odds of success, because the success or failure of a given company is massively overdetermined (in one direction or the other) by other factors.

The only people in a position to figure this kind of thing out, and then act on that knowledge, are the venture capitalists - and they're a long way removed from the action (and anyone smart has already left the business since it's not a good way of making money). Eventually I'd expect VCs to start insisting that companies adopt 40-hour policies, but it's going to take a long time for the signal to emerge from the noise.

Comment author: ChrisHallquist 25 October 2014 02:43:25AM 10 points

and anyone smart has already left the business since it's not a good way of making money.

Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.

A similar pattern appears to occur in the hedge fund industry. In both cases, if you just look at the industry-wide stats, they look terrible, but that doesn't mean Peter Thiel or George Soros aren't smart for still being in the game.
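A minimal simulation of that claim, with entirely made-up numbers (the skill share, means, and spreads are assumptions for illustration, not industry data): a small skilled minority plus a large unskilled majority can produce dismal industry-wide averages even while the top funds do well.

```python
import random

random.seed(0)
n_funds = 1000
skilled_share = 0.1  # assumed fraction of funds that actually know what they're doing

returns = []
for _ in range(n_funds):
    if random.random() < skilled_share:
        returns.append(random.gauss(0.15, 0.05))   # skilled minority: good expected return
    else:
        returns.append(random.gauss(-0.05, 0.10))  # unskilled majority: poor expected return

industry_avg = sum(returns) / n_funds
top_decile = sorted(returns, reverse=True)[:n_funds // 10]
print(f"industry-wide average return: {industry_avg:.1%}")
print(f"average return of the top 10% of funds: {sum(top_decile) / len(top_decile):.1%}")
```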

Comment author: figor888 17 September 2014 03:32:29AM 1 point

I certainly believe Artificial Intelligence can and will perform many mundane jobs that are nothing more than mindless repetition, and even in some instances create art. That said, what about world leaders, or positions that require making decisions that affect segments of the population in unfair ways, such as storage of nuclear waste, transportation systems, etc.?

To me, the answer is obvious: only a fool would trust an A.I. to make such high-level decisions. Without empathy, the A.I. could never provide a believable decision that any sane person should trust, no matter how many variables are used for its cost-benefit analysis.

Comment author: ChrisHallquist 17 September 2014 11:30:07AM 1 point

Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.

Comment author: John_Maxwell_IV 27 August 2014 12:32:22AM *  4 points

I hope the forum's moderators will take care to squash unproductive and divisive conversations about race, gender, social justice, etc., which seem to have been invading and hindering nearby communities like the atheism/secularism world and the rationality world.

To play devil's advocate: Will MacAskill reported that this post of his criticizing the popular ice bucket challenge got lots of attention for the EA movement. Scott Alexander reports that his posts on social justice bring lots of hits to his blog. So it seems plausible to me that a well-reasoned, balanced post that made an important and novel point on a controversial topic could be valuable for attracting attention. Remember that this new EA forum will not have been seeded with content and a community quite the way LW was. Also, there are lots of successful group blogs (Huffington Post, Bleacher Report, Seeking Alpha, Daily Kos, etc.) that seem to have a philosophy of having members post all they want and then filtering the good stuff out of that.

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor. The important thing is to make it easy for users to find the good stuff, and suppressing the bad stuff is only one (rather blunt) way of accomplishing this. Ultimately the best way to help users find quality stuff depends on your forum software. It might be interesting to try to do a study of successful and unsuccessful subreddits to see what successful intellectual subreddits do that unsuccessful ones don't, given that the LW userbase and forum software are a bit similar to those of reddit.

(It's possible that strategies that work for HuffPo et al. will not transfer well at all to a blog focused more on serious intellectual discussion. So it might be useful to decide whether the new EA forum is more about promoting EA itself or promoting serious intellectual discussion of EA topics.)

(Another caveat: I've talked to people who've ditched LW because they get seriously annoyed and it ruins their day when they see a comment that they regard as insufficiently rational. I'm not like this and I'm not sure how many people are, but these people seem likely to be worth keeping around and catering to the interests of.)

Comment author: ChrisHallquist 27 August 2014 05:29:37AM 1 point

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.

Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.

Comment author: XiXiDu 09 July 2014 06:07:46PM *  2 points

I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:

“4 Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)”

Respondents had to select a probability for each option (in 1% increments). The sum of the selections was displayed: in green if it was 100%, otherwise in red.

The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”
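A rough sketch of the validation rule described above, with a hypothetical respondent's answers (the survey's actual implementation isn't shown; this just illustrates the 100% check):

```python
options = [
    "Extremely good",
    "On balance good",
    "More or less neutral",
    "On balance bad",
    "Extremely bad (existential catastrophe)",
]

# Hypothetical answers, in whole-percent increments:
selections = {
    "Extremely good": 20,
    "On balance good": 40,
    "More or less neutral": 22,
    "On balance bad": 13,
    "Extremely bad (existential catastrophe)": 5,
}

total = sum(selections.get(option, 0) for option in options)
display_colour = "green" if total == 100 else "red"
print(f"sum = {total}% -> displayed in {display_colour}")
```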

Question 3 was about takeoff speeds.

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, this doesn't mean that they expected it to happen for the same reasons that MIRI expects it to happen, or that they believe friendly AI research to be a viable strategy.

Since I already believed that humans could cause an existential catastrophe by means of AI, but not for the reasons MIRI believes this to happen (very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.

Comment author: ChrisHallquist 10 July 2014 01:23:46AM 1 point

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.
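A back-of-the-envelope version of the "x2 speed" reading (the hardware doubling time below is an assumption for illustration, not something claimed by the survey or the comment above):

```python
# If an emulation's speed scales with available hardware, and hardware
# price-performance doubles roughly every couple of years, then two years of
# improvement at "roughly current rates" already gives about a 2x speedup.
doubling_time_years = 2.0  # assumed doubling time, purely illustrative
years_elapsed = 2.0

speedup = 2 ** (years_elapsed / doubling_time_years)
print(f"expected speedup after {years_elapsed:.0f} years: ~{speedup:.1f}x")
```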

Comment author: ChrisHallquist 08 July 2014 04:01:43PM *  4 points

I like the idea of this fanfic, but it seems like it could have been executed much better.

EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."
