I have taken the survey.
One mistake is treating 95% as the chance of the study indicating two-tailed coins, given that they were two-tailed coins. More likely it was meant as the chance of the study not indicating two-tailed coins, given that they were not two-tailed coins.
Try this:
You want to test if a coin is biased towards heads. You flip it 5 times, and consider 5 heads as a positive result, 4 heads or fewer as negative. You're aiming for 95% confidence but have to settle for 31/32 = 96.875%. Treating 4 heads as a positive result wouldn't work either, as that would get you less than 95% confidence.
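For concreteness, here's that arithmetic as a quick Python sketch (using the 5-flip setup described above):

```python
from math import comb

flips = 5
# P(k heads in 5 fair flips) = C(5, k) / 2^5
p_five_heads = comb(flips, 5) / 2**flips                       # 1/32
p_four_or_more = (comb(flips, 4) + comb(flips, 5)) / 2**flips  # 6/32

print(f"5 heads as positive:  confidence = {1 - p_five_heads:.4%}")   # 96.8750%
print(f"4+ heads as positive: confidence = {1 - p_four_or_more:.4%}") # 81.2500%
```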
If we're aggregating cooperation rather than aggregating values, we certainly can create a system that distinguishes between societies that apply an extreme level of noncooperation (i.e. killing) to larger groups of people than other societies, and that uses our own definition of noncooperation rather than what the Nazi values judge as noncooperation.
That's not to say you couldn't still find tricky example societies where the system evaluation isn't doing what we want; I just mean to encourage further improvement, so that it covers moral behaviour towards and from hated minorities, including in actual Nazi Germany.
Back up your data, people. It's so easy (if you've got a Mac, anyway).
Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.
I think there's an error in your calculations.
If someone smoked for 40 years and that reduced their life by 10 years, that 4:1 ratio translates to every 24 hours of being a smoker reducing lifespan by 6 hours (360 minutes). Assuming 40 cigarettes a day, that's 360/40 or 9 minutes per cigarette, pretty close to the 11 given earlier.
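Spelled out as a quick sketch (numbers taken from the comment above):

```python
years_smoked = 40
years_lost = 10          # the 4:1 ratio
cigarettes_per_day = 40

minutes_per_day = 24 * 60
minutes_lost_per_day = minutes_per_day * years_lost / years_smoked  # 360.0
print(minutes_lost_per_day / cigarettes_per_day)  # 9.0 minutes per cigarette
```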
This story, where they treated and apparently cured someone's cancer by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.
Surely any prediction device that would be called "intelligent" by anyone less gung-ho than, say, Ray Kurzweil would enable you to ask it questions like "suppose I -- with my current genome -- chose to smoke; then what?" and "suppose I -- with my current genome -- chose not to smoke; then what?".
But it would be better if you could ask: "suppose I chose to smoke, but my genome and any other similar factors I don't know about were to stay as they are, then what?" where the other similar factors are things that cause smoking.
In part of the interview LeCun is talking about predicting the actions of Facebook users, e.g. "Being able to predict what a user is going to do next is a key feature"
But not predicting everything they do and exactly what they'll type.
I believe that was part of the mistake: answering whether or not the numbers were prime, when the original question, last repeated several minutes earlier, was whether or not to accept a deal.
I expect part of it's based on status of course, but part of it could be that it would be much harder for a mugger to escape on a plane. No crowd of people standing up to blend into, and no easy exits.
Also on some trains you have seats facing each other, so people get used to deliberately avoiding each other's gaze (edit: I don't think I'm saying that quite right. They're looking away), which I think makes it feel both awkward and unsafe.
...Q. Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?
A. Conventional economic theory says this shouldn't happen. Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns. If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.
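That equilibrium arithmetic, as a minimal sketch (assuming hot dogs and buns are only ever consumed together):

```python
total_labor = 30

def equilibrium(labor_per_hot_dog, labor_per_bun):
    # n hot-dogs-in-buns use n * (labor_per_hot_dog + labor_per_bun) labor.
    return total_labor / (labor_per_hot_dog + labor_per_bun)

print(equilibrium(2, 1))  # 10.0 -- before automation
print(equilibrium(1, 1))  # 15.0 -- after automation: 15 hot dogs in 15 buns
```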
Nonlinear utility functions (as a function of resources) do not accurately model human risk aversion. That could imply that we should either change our (or their) risk aversion or stop maximising expected utility.
Nonlinear jumps in utility from different amounts of a resource seem common for humans at least at some points in time. Example: Either I have enough to pay off the loan shark, or he'll break my legs.
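A minimal sketch of that kind of jump (the threshold and the utility numbers are made up for illustration, not a claim about real preferences):

```python
LOAN_SHARK_DEBT = 5000  # hypothetical threshold

def utility(money):
    # Below the threshold the legs get broken, so extra money barely
    # matters; at the threshold utility jumps discontinuously.
    if money >= LOAN_SHARK_DEBT:
        return 100 + (money - LOAN_SHARK_DEBT) * 0.01
    return money * 0.001

# A sure 4900 (expected money 4900) vs a 50/50 gamble on 1000 or 5500
# (expected money only 3250):
sure = utility(4900)                                # 4.9
gamble = 0.5 * utility(1000) + 0.5 * utility(5500)  # 53.0
print(gamble > sure)  # True: near the jump, the risky bet is worth it
```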
That's not intended for people who could work but choose not to. They require you to regularly apply for employment. The applications themselves can be stressful and difficult work if you don't like self-promotion.
I think I even have work-like play where a game stops being fun. And yes, play-like work is what I want to achieve.
Reinforcing effort only in combination with poor performance wasn't the intent. Pick a better criterion that you can reinforce with honest self-praise. You do need to start off with low enough standards so you can reward improvement from your initial level though.
I'm interested in what reward you used for going to bed earlier (or, given the 0% success rate, what you planned to use as a reward if it ever happened) and how/when you gave it. Maybe rewarding subtasks would have helped.
I just read Don't Shoot The Dog, and one of the interesting bits was that it seemed like getting trained the way it described was fun for the animals, like a good game. Also as the skill was learnt the task difficulty level was raised so it wasn't too easy. And the rewards seemed somewhat symbolic - a clicker, and being fed with food that wasn't officially restricted outside the training sessions.
Thinking about applying it to myself, having the reward not be too important outside the game/practice means I'm not likely to want to bypass the game to get the ...
Well, it seems we have a conflict of interests. Do you agree?
Yes. We also have interests in common, but yes.
If you do, do you think that it is fair to resolve it unilaterally in one direction?
Better to resolve it after considering inputs from all parties. Beyond that it depends on specifics of the resolution.
...If you do not, what should be the compromise?
To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways
I think if I were sitting really close beside someone, I would be less likely to want to face them - it would feel too intimate.
I would always find people in aeroplanes less threatening than in trains. I wouldn't imagine the person in the next seat mugging me, for example, whereas I would imagine it on a train.
What do other people think of strangers on a plane versus on a train?
Like RolfAndreassen said: please back the fuck off and leave others alone.
Please stop discouraging people from introducing themselves to me in circumstances where it would be welcome.
I now plan to split up long boring tasks into short tasks with a little celebration of completion as the reward after each one. I actually decided to try this after reading Don't Shoot the Dog, which I think I saw recommended on Less Wrong. It has gotten me a somewhat more productive weekend. If it does stop helping, I suspect it would be because the reward stopped being fun.
Getting back to post-scarcity for people who choose not to work, and what resources they would miss out on, a big concern would be not having a home. Clearly this is much more of a concern than drinks on flights. The main reason it is not considered a dire concern is that people's ability to choose not to work is not considered that vital.
A second, hidden copy of himself could possibly use the time turner as soon as it was announced the ring was to be transfigured, and make sure Hermione was not in the ring, but I think Harry has better uses than that for as much time turning as he can get.
My first thought was that she'd been transfigured into the pajamas, but I don't think that's likely. My theory is that when Harry slept in his bed it was the second time he'd been through that time period. The first time, he stayed invisible with transfigured Hermione in his possession, waited until woken-up Harry had finished being searched, gave her to woken-up Harry, then went back in time and went to bed.
You can get microphones much smaller than 7 cm, and they can detect frequencies way lower than 20 kHz. There's no rule saying you need a large detector to pick up a signal with a large wavelength.
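For scale, wavelength is just speed over frequency (a quick sketch, assuming sound in air at roughly 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def wavelength_m(frequency_hz):
    return SPEED_OF_SOUND / frequency_hz

print(wavelength_m(20_000))  # ~0.017 m: a 20 kHz wave is under 2 cm
print(wavelength_m(20))      # ~17 m: tiny microphones still pick this up
```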
Women famously say "sometimes I just want to be listened to. Don't try to solve my problems, just show me that you care."
I would interpret that as being specific to problems. There may also be women who would like feigned interest in dopey things they're into, or they may prefer to just discuss them with their girlfriends who are actually interested.
When men do this, women say "yes, that's what I'm talking about" and attempt to reinforce that behavior, perhaps unconsciously.
Explicitly saying this can be taken at face value, I thi...
When I buy stuff from people I don't know I'm mostly treating them as a means to an end. Not completely, because there are ways I'd try to be fair to a human that wouldn't apply to a thing, but to a larger extent than I would want in personal / social relationships.
Another rule of thumb I kind of like is: don't get people into interactions with you that they wouldn't want if they knew what you were doing. I feel like that probably encourages erring too far on the side of caution and altruism. But if you know the other person would prefer you to empathise ...
Not that I know of, but Advogato's trust metric limits the damage a rogue endorser of many trolls can do, via a maximum-network-flow calculation. It doesn't allow for downvotes.
If you allow downvoting and blocking all of someone's nodes, that could be an incentive for the person to partition their publications into three pseudonyms, so that once the first is blocked, the others are still available.
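A toy sketch of the flow idea (this is not Advogato's actual algorithm, which also shrinks node capacities with distance from the seed; it just shows how a max-flow computation caps the damage from one rogue endorser, using networkx):

```python
import networkx as nx

# Toy trust graph: edges are endorsements. Each account's node capacity
# bounds how much trust can flow through it, so a rogue endorser can
# certify only a limited number of trolls, however many they endorse.
endorsements = [("seed", "alice"), ("alice", "bob"),
                ("alice", "troll1"), ("alice", "troll2"), ("alice", "troll3")]
node_capacity = {"seed": 10, "alice": 2, "bob": 1,
                 "troll1": 1, "troll2": 1, "troll3": 1}

G = nx.DiGraph()
for node, cap in node_capacity.items():
    # Split each node into node_in -> node_out to enforce a node capacity.
    G.add_edge(node + "_in", node + "_out", capacity=cap)
    # Each unit of flow that reaches a node may terminate there,
    # certifying that account.
    G.add_edge(node + "_out", "sink", capacity=1)
for a, b in endorsements:
    G.add_edge(a + "_out", b + "_in", capacity=float("inf"))

flow_value, flow = nx.maximum_flow(G, "seed_in", "sink")
certified = [n for n in node_capacity if flow[n + "_out"]["sink"] >= 1]
print(flow_value, certified)
# 3 -- seed plus at most 2 accounts reached through alice get certified,
# no matter how many trolls alice endorses.
```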
That's a good question. Here's a definition of "fair" aimed at UDT-type thought experiments:
The agent has to know what thought experiment they are in as background knowledge, so the universe can only predict their counterfactual actions in situations that are in that thought experiment, and where the agent still has the knowledge of being in the thought experiment.
This disallows my anti-oneboxer setup here: http://lesswrong.com/lw/hqs/why_do_theists_undergrads_and_less_wrongers_favor/97ak (because the predictor is predicting what decision would b...
You penalise based on the counterfactual outcome: if they were in Newcomb's problem, this person would choose one box.
The way I like to think about it is that convincingness is a 2-place function - a simulation is convincing to a particular mind/brain. If there's a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it's cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.
From that perspective, dreams are not especially convincing compared to waking experience; rather, dreamers are especially convincible.
Denn...
I want to use one of those phrases in conversation. Either grfgvat n znq ulcbgurfvf be znxvat znq bofreingvbaf (rot13ed for spoilers)
Also I found the creator's page for the comic http://cowbirdsinlove.com/46
I followed the first link http://care.diabetesjournals.org/content/27/9/2108.short and the abstract there had "After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes."
And then later, "These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat." though I'm not sure if the latter was separate ...
Isn't humour its own reward? What extra reinforcement system could you use to increase it?
Yes, I upvoted it as an interesting idea, but wouldn't endorse actually putting it into practice.
I don't think it would substitute for optometrist appointments, just for getting new glasses with the same prescription you already had. For people who have had LASIK: had your glasses prescriptions been changing up until then? And did your vision continue to change afterwards?
As munchkinry, it's pretty good, but I'm not taking this seriously enough to actually try it. It's just a fun idea to me.
I am mentally connecting this with the comment about tulpas.
No need to modify the host's identity, you can both share their brain.
ETA: and now I'm thinking of the movie Being John Malkovich - the host was treated in an abusive manner, but there was a level of cooperation between the other minds sharing his body.
Practice getting off the Internet and going to bed:
Starting while not absorbed in browsing the web, find some not-too-compelling website, browse for a few minutes (not enough to get really into it) and then go and lie in bed for a few minutes (which shouldn't feel as difficult, since it's not committing to a full night's sleep). While in bed, let your mind wander away from the internet. This practice can lead into practice for getting out of bed.
I tried this a bit - I'm not sure it was worthwhile, as I did sometimes get absorbed in browsing when trying this exercise.
When I was having a lot of trouble getting out of bed reasonably promptly in the mornings: practice getting out of bed - but not after just having woken up, that's what I was having trouble with in the first place. No, during the day, having been up for a while, go lie in bed for a couple of minutes with the alarm set, then get up when it goes off. Also, make this a pleasant routine with stretching, smiling and deep breathing.
I found this idea on the net here, which may have more details: http://www.stevepavlina.com/blog/2006/04/how-to-get-up-right-away-wh...
FYI, this training is part of USAF basic training. With more yelling. I wouldn't call it a pleasant routine, but it's certainly effective when you do it for six hours straight and start to get an adrenaline surge when your alarm goes off.
That still persists 1.5 years later, so it may be a munchkin hack in itself.
An alternative, courtesy of Anders Sandberg (via Kaj Sotala), is to set your alarm to ring two hours before your desired wake-up time, take one or two 50mg caffeine pills when it rings, and go back to sleep immediately thereafter. When you wake two hours later, getting out of bed shouldn't be a problem. Details here.
I don't have an elegant fix for this, but I came up with a kludgy decision procedure that would not have the issue.
Problem: you don't want to give up a decent chance of something good, for something even better that's really unlikely to happen, no matter how much better that thing is.
Solution: when evaluating the utility of a probabilistic combination of outcomes, instead of taking the average of all of them, remove the best outcomes making up the top 5% of the probability mass (a somewhat arbitrary cutoff) and find the average utility of the remaining outcomes.
For example, assume utility is proport...
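A minimal sketch of that procedure (the 5% trim and the example numbers are just illustration):

```python
def trimmed_expected_utility(outcomes, trim=0.05):
    """Average utility after discarding the top `trim` probability mass
    of best outcomes. `outcomes` is a list of (probability, utility)
    pairs whose probabilities sum to 1."""
    ranked = sorted(outcomes, key=lambda pu: pu[1], reverse=True)
    to_drop = trim
    kept = []
    for p, u in ranked:
        if to_drop >= p:
            to_drop -= p          # outcome falls entirely in the top slice
        else:
            kept.append((p - to_drop, u))
            to_drop = 0.0
    total = sum(p for p, _ in kept)  # equals 1 - trim, up to rounding
    return sum(p * u for p, u in kept) / total

# A Pascal's-mugging-style gamble vs a modest sure thing:
mugging = [(0.999, 0), (0.001, 10**12)]
sure_thing = [(1.0, 100)]
print(trimmed_expected_utility(mugging))     # 0.0: the huge prize is trimmed away
print(trimmed_expected_utility(sure_thing))  # 100.0
```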
Making up absurd explanations for the talking snake goes against the direction of your post, but I wanted to share this one: a remote control snake the owner can talk through is the sort of thing that could be a children's toy. Santa Claus gave one to Satan, who used it for mischief.
Suicide in particular is often illegal.
ETA: possibly this statement of mine was outdated.
BlazeOrangeDeer would be talking about this parody subreddit. Sometimes the parodies are in a similar "meta" style to Konkvistador's post.
Others have covered your knee-jerk poison-is-bad reaction, so I'll let that pass, but the thing that stuck out for me as bad epistemic standards from MMS proponents was seeing some "explanation" for why it would give you an upset stomach despite the other claim that it would only harm "bad" bacteria. Something about how it's your body flushing out poisons and it's a good sign. It struck me as an untested rationalisation someone just made up.
I'm assuming that if you had bought your cloak for the same price as a typical sweater, you would prefer to use sweaters rather than the cloak.
Instead, just assume that if she had not found excuses to wear the cloak, she would use sweaters rather than the cloak. That choice could come from habit rather than considered preference.
I had meant to suggest some sort of unintelligent feedback system. Not coincidence, but also not an intelligent optimisation, so still not an exact parallel to his thermostat.
This initially seemed like it would still be very difficult to use.
I didn't find any easier descriptions of TAPs available on lesswrong for a long time after this was written, but I just had another look and found some more recent posts that suggested a practice step after planning the trigger-action pair.
For example, here:
What are Trigger-Action Plans (TAPs)?
You can either practise with the real trigger, or practise by visualising the trigger.
There's lots more about TAPs on lesswrong now that I haven't read yet, but the practice idea stood out as particularly important.