I have taken the survey.
One mistake is treating 95% as the chance of the study indicating two-tailed coins, given that they were two-tailed coins. More likely it was meant as the chance of the study not indicating two-tailed coins, given that they were not two-tailed coins: a 5% false-positive rate, not a 5% miss rate.
Try this:
You want to test if a coin is biased towards heads. You flip it 5 times, and consider 5 heads a positive result and 4 heads or fewer negative. You're aiming for 95% confidence but have to settle for 31/32 = 96.875%. Treating 4 heads as a positive result wouldn't work either, as that would leave you with less than 95% confidence.
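A quick way to check those thresholds (a minimal Python sketch, not part of the original comment):

```python
# Confidence here = the chance a fair coin does NOT produce a positive result.
from fractions import Fraction
from math import comb

def p_at_least(k, n=5):
    """P(at least k heads in n flips of a fair coin)."""
    return Fraction(sum(comb(n, i) for i in range(k, n + 1)), 2 ** n)

print(1 - p_at_least(5))  # 31/32 = 96.875%: meets the 95% target
print(1 - p_at_least(4))  # 26/32 = 81.25%: counting 4 heads as positive falls short
```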
If we're aggregating cooperation rather than aggregating values, we can certainly create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies do, and that uses our own definition of noncooperation rather than what Nazi values judge as noncooperation.
That's not to say you couldn't still find tricky example societies where the system's evaluation isn't doing what we want; I just mean to encourage further improvement to cover moral behaviour towards and from hated minorities, and in actual Nazi Germany.
Back up your data, people. It's so easy (if you've got a Mac, anyway).
Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.
I think there's an error in your calculations.
If someone smoked for 40 years and that reduced their life by 10 years, that 4:1 ratio translates to every 24 hours of being a smoker reducing lifespan by 6 hours (360 minutes). Assuming 40 cigarettes a day, that's 360/40 or 9 minutes per cigarette, pretty close to the 11 given earlier.
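Spelling out the arithmetic (a small sketch using the same assumptions as the comment above):

```python
# Same assumptions as the comment: 40 years of smoking costs 10 years of life,
# at 40 cigarettes a day.
years_smoking = 40
years_lost = 10
minutes_lost_per_day = years_lost / years_smoking * 24 * 60  # 360 minutes
cigarettes_per_day = 40
print(minutes_lost_per_day / cigarettes_per_day)             # 9.0 minutes per cigarette
```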
This story, where they treated and apparently cured someone's cancer by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.
Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".
But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.
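A toy illustration of the difference (my own sketch, not from the comment): simulate a hidden factor that both causes smoking and shortens life, then compare simple conditioning on observed smokers with the "same genome, different choice" question.

```python
import random

random.seed(0)

def simulate(n=100_000):
    people = []  # (genome_bad, smokes, lifespan)
    for _ in range(n):
        genome_bad = random.random() < 0.5                        # hidden common cause
        smokes = random.random() < (0.7 if genome_bad else 0.2)   # bad genome -> more smoking
        lifespan = 80 - 5 * genome_bad - 10 * smokes + random.gauss(0, 2)
        people.append((genome_bad, smokes, lifespan))
    return people

data = simulate()
mean = lambda xs: sum(xs) / len(xs)

# Naive prediction: compare observed smokers and non-smokers (confounded by genome).
observed_gap = mean([l for g, s, l in data if not s]) - mean([l for g, s, l in data if s])

# Counterfactual question: same genomes, only the smoking choice differs.
causal_gap = mean([80 - 5 * g for g, s, l in data]) - mean([80 - 5 * g - 10 for g, s, l in data])

print(round(observed_gap, 1))  # about 12.5 years: overstates the cost of the choice
print(round(causal_gap, 1))    # exactly 10 years: the effect of the choice itself
```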
In part of the interview, LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature".
But not predicting everything they do and exactly what they'll type.
I believe that was part of the mistake: answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.
This initially seemed like it would still be very difficult to use.
I didn't find any easier descriptions of TAPs available on lesswrong for a long time after this was written, but I just had another look and found some more recent posts that suggested a practice step after planning the trigger-action pair.
For example, here:
What are Trigger-Action Plans (TAPs)?
You can either practise with the real trigger, or practise with visualising the trigger.
There's lots more about TAPs on lesswrong now that I haven't read yet, but the practice idea stood out as particularly important.