Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: ChrisHallquist 26 July 2015 11:38:52PM -1 points [-]

On philosophy, I think it's important to realize that most university philosophy classes don't assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell's History of Western Philosophy or The Great Conversation (both of which I've read), I'd recommend something like The Norton Introduction to Philosophy.

Comment author: VAuroch 29 January 2015 09:49:04AM 14 points [-]

As was first proposed on /r/rational (and EY has confirmed that he got the idea from that proposal)

Comment author: ChrisHallquist 29 January 2015 11:17:20AM 1 point [-]


Comment author: solipsist 29 January 2015 02:03:58AM *  66 points [-]

Confirmation the prophecy isn't about Neville:

Neville Longbottom... who took this test in the Longbottom home... received a grade of Outstanding.


Harry raised the parchment with its EE+, still silent.

The Defense Professor smiled, and it went all the way to those tired eyes.

"It is the same grade... that I received in my own first year."


Comment author: ChrisHallquist 29 January 2015 11:16:48AM 2 points [-]


Comment author: TheAncientGeek 24 October 2014 03:31:19PM 17 points [-]

"Social democrat" and "liberal" have been given almost identical descriptions. Don't know if that's deliberate.

Comment author: ChrisHallquist 25 October 2014 03:54:55AM 9 points [-]

Duplicate comment, probably should be deleted.

Comment author: TheAncientGeek 24 October 2014 01:44:14PM 29 points [-]

"Social democrat" and "liberal" have been given almost identical descriptions. Don't know if that's deliberate.

Comment author: ChrisHallquist 25 October 2014 03:53:57AM 1 point [-]

Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.

Comment author: VAuroch 23 October 2014 05:53:57AM 35 points [-]

Is Anti-Agathics a strict superset of Cryonics? That is to say, would someone being cryonically frozen, then restored, and then living for 1000 years from that date count as a success for the anti-agathics question?

Comment author: ChrisHallquist 25 October 2014 03:52:12AM 2 points [-]

I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.

Comment author: ChrisHallquist 25 October 2014 03:45:59AM 39 points [-]

Done, except for the digit ratio, because I do not have access to a photocopier or scanner.

Comment author: ChrisHallquist 25 October 2014 02:50:10AM 3 points [-]

Liberal here, I think my major heresy is being pro-free trade.

Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.

You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today would be branded a socialist if a Democrat, and a RINO if a Republican.

(Friedman, among other things, supported a version of guaranteed basic income. To which today's GOP mainstream would probably say, "but if we do that, it will just make poor people even lazier!")

Political labels are weird.

Comment author: lmm 24 October 2014 06:38:35PM 15 points [-]

You'd expect Silicon Valley working practices to be less optimal than those in mature industries, because, well, the industries aren't mature. The companies are often run by people with minimal management experience, and the companies themselves are too short-lived to develop the kind of institutional memory that would be able to determine whether such policies were good or bad. Heck, most of SV still follows interview practices that have been actively shown to be useless, to the extent that they've been abandoned by the company that originated them (Microsoft). Success is too random for these things to be noticeable; the truth is that in SV, being 50% less efficient probably has negligible effects on your odds of success, because the success or failure of a given company is massively overdetermined (in one direction or the other) by other factors.

The only people in a position to figure this kind of thing out, and then act on that knowledge, are the venture capitalists - and they're a long way removed from the action (and anyone smart has already left the business since it's not a good way of making money). Eventually I'd expect VCs to start insisting that companies adopt 40-hour policies, but it's going to take a long time for the signal to emerge from the noise.

Comment author: ChrisHallquist 25 October 2014 02:43:25AM 10 points [-]

and anyone smart has already left the business since it's not a good way of making money.

Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.

A similar pattern appears to occur in the hedge fund industry. In both cases, if you just look at the industry-wide stats, they look terrible, but that doesn't mean that Peter Thiel or George Soros aren't smart because they're still in the game.

Comment author: figor888 17 September 2014 03:32:29AM 1 point [-]

I certainly believe Artificial Intelligence can and will perform many mundane jobs that are nothing more than mindless repetition, and even in some instances create art. That said, what about world leaders, or positions that require making decisions that affect segments of the population in unfair ways, such as storage of nuclear waste, transportation systems, etc.?

To me, the answer is obvious: only a fool would trust an A.I. to make such high-level decisions. Without empathy, the A.I. could never provide a believable decision that any sane person should trust, no matter how many variables are used for its cost-benefit analysis.

Comment author: ChrisHallquist 17 September 2014 11:30:07AM 1 point [-]

Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.
