
Comment author: Huluk 26 March 2016 12:55:37AM *  26 points

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Caspian 29 March 2016 03:13:44AM 32 points

I have taken the survey.

Comment author: dvasya 11 July 2014 10:56:12PM 2 points

Well, perhaps a bit too simple. Consider this. You set your confidence level at 95% and start flipping a coin. You observe 100 tails out of 100 flips. You publish a report saying "the coin has tails on both sides, at a 95% confidence level," because that's the level you chose when designing the experiment. Then 99 other researchers repeat your experiment with the same coin, arriving at the same 95%-confidence conclusion. But you would expect to see about 5 reports claiming otherwise! The paradox is resolved when somebody comes up with a trick using a mirror to observe both sides of the coin at once, finally concluding that the coin is two-tailed with 100% confidence.
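
To make the numbers concrete, here is a quick sketch (mine, not dvasya's) of the two quantities the scenario invokes:

    # Chance that a fair coin gives 100 tails out of 100 flips.
    print(0.5 ** 100)   # ~7.9e-31, essentially never

    # The naive reading of "95% confidence": about 5 of the 100
    # replications should disagree.
    print(0.05 * 100)   # 5.0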

What was the mistake?

In response to comment by dvasya on Too good to be true
Comment author: Caspian 14 July 2014 11:15:37PM 2 points

One mistake is treating 95% as the chance of the study indicating the coin is two-tailed, given that it is two-tailed. More likely it was meant as the chance of the study not indicating a two-tailed coin, given that the coin is not two-tailed.

Try this:

You want to test whether a coin is biased towards heads. You flip it 5 times, and consider 5 heads as a positive result, 4 heads or fewer as negative. You're aiming for 95% confidence but have to settle for 31/32 = 96.875%, since a fair coin shows 5 heads only 1/32 of the time. Treating 4 heads as a positive result wouldn't work either, as a fair coin shows 4 or more heads 6/32 of the time, leaving you with less than 95% confidence.
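
A quick check of those numbers (my sketch; the decision rules are the ones just described):

    from math import comb

    # P(5 heads in 5 flips | fair coin) = 1/32, so demanding all heads
    # gives 1 - 1/32 = 31/32 = 96.875% confidence.
    p_five = comb(5, 5) / 2**5
    print(1 - p_five)        # 0.96875

    # P(4 or more heads | fair coin) = 6/32, so counting 4 heads as a
    # positive too would leave only 26/32 = 81.25% confidence.
    p_four_plus = (comb(5, 4) + comb(5, 5)) / 2**5
    print(1 - p_four_plus)   # 0.8125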

Comment author: XiXiDu 20 June 2014 09:33:25AM 10 points

Scott Says:

There’s a crucial observation that I took for granted in the post but shouldn’t have, so let me now make it explicit. The observation is this:

No system for aggregating preferences whatsoever—neither direct democracy, nor representative democracy, nor eigendemocracy, nor anything else—can possibly deal with the “Nazi Germany problem,” wherein basically an entire society’s value system becomes inverted to the point where evil is good and good evil.

Comment author: Caspian 28 June 2014 02:09:00AM 0 points

If we're aggregating cooperation rather than aggregating values, we can certainly create a system that distinguishes societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies do, and that uses our own definition of noncooperation rather than what Nazi values judge as noncooperation.

That's not to say you couldn't still find tricky example societies where the system's evaluation isn't doing what we want; I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.

Comment author: SaidAchmiz 21 January 2014 05:19:13AM 27 points

making physical backups of data

Oh boy, is this ever a good example.

I used to work retail, selling and repairing Macs and Mac accessories. When I'd sell someone a computer, I'd tell them — no, beg them — to invest in a backup solution. "I'm not trying to sell you anything!", I'd say. "You don't have to buy your backup device from us — though we'd be glad to sell you one for a decent price — but please, get one somewhere! Set it up — heck, we'll set it up for you — and please... back up! When you come to us after your hard drive has inevitably failed — as all hard drives do eventually, sure as death or taxes — with your life's work on it, you'll be glad you backed up."

And they'd smile, and nod, and come back some time later with a failed hard drive, no backup, and full of outrage that we couldn't magic their data back into existence. And they'd pay absurd amounts of money for data recovery.

Back up your data, people. It's so easy (if you've got a Mac, anyway). Losing months or years of work is really, really, really painful.

Comment author: Caspian 29 January 2014 03:22:14AM 1 point

Back up your data, people. It's so easy (if you've got a Mac, anyway).

Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.

Comment author: NancyLebovitz 03 January 2014 10:36:00PM 0 points

That seems high.

Smoking shortens life by about ten years-- but not so much if you stop by age 40. This may imply that if we get decent anti-aging tech, smoking won't be a serious risk. How advanced would the tech have to be for cigarette smoke not to be a problem?

Let's assume someone who didn't stop smoked for 40 years-- two packs a day. That's 40 x 365 x 40 = 584,000 cigarettes. Divide that into 10 years worth of minutes, and it comes out as .9 minutes, assuming I set up the calculations properly.

Comment author: Caspian 05 January 2014 01:07:32AM 0 points

I think there's an error in your calculations.

If someone smoked for 40 years and that reduced their life by 10 years, that 4:1 ratio translates to every 24 hours of being a smoker reducing lifespan by 6 hours (360 minutes). Assuming 40 cigarettes a day, that's 360/40 or 9 minutes per cigarette, pretty close to the 11 given earlier.
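
Both routes through the arithmetic, in a short sketch (mine; the inputs are the figures from the two comments above):

    years_smoked, years_lost = 40, 10
    cigs_per_day = 40

    # Route 1: total minutes lost divided by total cigarettes smoked.
    minutes_lost = years_lost * 365 * 24 * 60        # 5,256,000 minutes
    cigarettes = years_smoked * 365 * cigs_per_day   # 584,000 cigarettes
    print(minutes_lost / cigarettes)                 # 9.0, not 0.9

    # Route 2: the 4:1 ratio means each smoking day costs 6 hours.
    minutes_per_day = 24 * 60 * years_lost / years_smoked   # 360 minutes
    print(minutes_per_day / cigs_per_day)                   # 9.0 again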

Comment author: Caspian 15 December 2013 03:32:52AM 1 point

This story, where they treated and apparently cured someone's cancer by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.

cancer treatment link

Comment author: gjm 11 December 2013 01:47:04PM 1 point

What counts as a causal problem?

A sufficiently good predictor might be able to answer questions of the form "if I do X, what will happen thereafter?" and "if I do Y, what will happen thereafter?" even though what-will-happen-thereafter may be partly caused by doing X or Y.

Is your point that (to take a famous example with which I'm sure you're already very familiar) in a world where the correlation between smoking and lung cancer goes via a genetic feature that makes both happen, if you ask the machine that question it may in effect say "he chose to smoke, therefore he has that genetic quirk, therefore he will get lung cancer"? Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

I do agree that there are important questions a pure predictor can't help much with. For instance, the machine may be as good as you please at predicting the outcome of particle physics experiments, but it may not have (or we may not be able to extract from it in comprehensible form) any theory of what's going on to produce those outcomes.

Comment author: Caspian 15 December 2013 12:37:27AM 0 points

Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".

But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.
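
The gap between the two kinds of question can be seen in a toy simulation (my sketch, with made-up probabilities; in it a hidden gene causes both smoking and cancer, and smoking itself does nothing):

    import random

    def draw_person(forced_smoking=None):
        gene = random.random() < 0.2
        if forced_smoking is None:
            # Observational world: the gene makes smoking likely.
            smokes = random.random() < (0.8 if gene else 0.1)
        else:
            # Intervention: smoking is set from outside; the gene's
            # distribution is left exactly as it is.
            smokes = forced_smoking
        cancer = random.random() < (0.5 if gene else 0.05)
        return smokes, cancer

    random.seed(0)
    observed = [draw_person() for _ in range(100000)]
    cancer_among_smokers = [c for s, c in observed if s]

    # Prediction by conditioning: smoking is evidence for the gene.
    print(sum(cancer_among_smokers) / len(cancer_among_smokers))  # ~0.35

    # Prediction under intervention: forcing smoking changes nothing.
    forced = [draw_person(forced_smoking=True) for _ in range(100000)]
    print(sum(c for _, c in forced) / len(forced))                # ~0.14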

Comment author: passive_fist 11 December 2013 09:24:10PM 0 points

You're talking about predicting the actions of an intelligent agent.

LeCun is talking about predicting the environment. These are two different concepts.

Comment author: Caspian 15 December 2013 12:01:30AM 0 points

In part of the interview, LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature."

But not predicting everything they do and exactly what they'll type.

Comment author: [deleted] 15 November 2013 02:10:01AM 1 point

Not quite, as SquallMage had correctly answered that 27, 33, 39 and 49 were not prime.

Comment author: Caspian 24 November 2013 03:05:39AM 0 points

I believe that was part of the mistake, answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.

Comment author: satt 22 July 2013 01:22:41PM 1 point

I would always find people in aeroplanes less threatening than in trains.

Hadn't noticed that before but now you mention it, I think I have a weaker version of the same intuition.

Comment author: Caspian 24 July 2013 11:53:38PM *  1 point

I expect part of it's based on status, of course, but part of it could be that it would be much harder for a mugger to escape on a plane: no crowd of people standing up to blend into, and no easy exits.

Also, on some trains you have seats facing each other, so people get used to deliberately avoiding each other's gaze (edit: I don't think I'm saying that quite right. They're looking away), which I think makes it feel both awkward and unsafe.
