
Open thread for December 9 - 16, 2013

5 Post author: NancyLebovitz 09 December 2013 04:35PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (371)

Comment author: Document 17 December 2013 03:59:42AM *  1 point [-]

I think I want to buy a new laptop computer. Can anyone here provide advice, or suggestions on where to look?

The laptop I want to replace is a Dell Latitude D620. Its main issues are weight, heat production, slowness (though probably in part from software issues), inability to sleep or hibernate (buying and installing a new copy of XP might fix this), lack of an HDMI port, and deteriorated battery life. I briefly tried an Inspiron i14z-4000sLV, but it was still kind of slow, and trying to use Windows 8 without a touchscreen was annoying.

I remember reading that it's unsafe to move or jostle a laptop with a magnetic hard drive while it's running, because of the moving parts. Based on that, it seems like it's best to get one with only a solid-state drive and no magnetic drive. Is that accurate?

I'm somewhat ambivalent about how to trade off power against heat and weight, or against cost of replacement if it's lost or damaged.

(Edit: I eventually ordered a Dell XPS 13.)

Comment author: ChristianKl 22 December 2013 03:42:13PM 1 point [-]

What's your budget?

How much hard drive space are you using currently?

Comment author: Document 23 December 2013 03:05:57PM *  0 points [-]

I'd rather not worry about budget.

Not counting external storage, I'm using about 25 GB of the D620's 38 GB, plus 25 GB (not counting software) on the family desktop PC.

(After ordering the XPS, I realized that it doesn't have a removable battery, which seems like a longevity issue; but it seems likely that that's standard for devices of its weight class.)

Comment author: Document 20 December 2013 05:52:43AM 0 points [-]

Update: I've provisionally ordered a Dell XPS 13.

Comment author: ephion 19 December 2013 01:59:10PM 1 point [-]

Based on that, it seems like it's best to get one with only a solid-state drive and no magnetic drive. Is that accurate?

Not necessarily. Most laptops nowadays are equipped with anti-shock hard drive mounts, and the hard drives themselves are specially designed to resist shock. The advantages for an SSD are speed, not reliability.

This reliability report (with this caveat) indicates that Samsung is the most reliable brand on the market for now. I've always considered Lenovo and ASUS to be high quality, with ASUS generally offering cheaper and more powerful computers (with a trade-off in actually figuring out which one you want; that website is terrible).

Comment author: Document 19 December 2013 09:29:34PM 0 points [-]

Thanks for replying. I haven't looked at your link yet, but it seems like there'd be limits to how much shock protection could be fit in an ultrathin laptop, and it'd be hard to find out how good it is for specific models. (And the speed advantage seems like enough reason to want an SSD in any case.)

Comment author: Lumifer 19 December 2013 04:41:46PM 2 points [-]

The advantages for an SSD are speed, not reliability.

I would expect an SSD to be MUCH more reliable than a hard drive.

SSDs are solid-state devices with no moving parts. Hard drives are mechanical devices with platters rapidly rotating at microscopic tolerances.

So now that I've declared my prior let's see if there's data... :-)

"From the data I've seen, client SSD annual failure rates under warranty tend to be around 1.5%, while HDDs are near 5%," Chien said. (where Chien is "an SSD and storage analyst with IHS's Electronics & Media division") Source
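Those annual rates compound over a drive's life. A quick sketch of the cumulative failure risk, assuming the quoted 1.5% and 5% figures stay constant from year to year (a simplification; real failure rates vary with drive age):

```python
# Cumulative probability of at least one failure over several years,
# assuming the quoted annual failure rates (SSD ~1.5%, HDD ~5%)
# hold steady each year.
ssd_afr, hdd_afr = 0.015, 0.05

for years in (1, 3, 5):
    p_ssd = 1 - (1 - ssd_afr) ** years
    p_hdd = 1 - (1 - hdd_afr) ** years
    print(f"{years} years: SSD {p_ssd:.1%} vs HDD {p_hdd:.1%}")
```

Over five years that works out to roughly a 7% chance of SSD failure against roughly 23% for the hard drive.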

Comment author: ephion 19 December 2013 05:06:53PM 1 point [-]

Reliability for SSDs is better than for HDDs. However, they aren't so much more reliable that it alters best practices for important data keeping -- at least two backups, with one off-site.
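As a sketch of that practice, here's copying a source tree to several destinations, at least one of them off-site. The paths are hypothetical placeholders; substitute your own external drive and remote mount:

```python
import shutil
from pathlib import Path

def back_up(src: Path, *destinations: Path) -> None:
    """Copy the source tree to each destination -- the idea being
    at least two backup copies, with one of them off-site."""
    for dest in destinations:
        shutil.copytree(src, dest / src.name, dirs_exist_ok=True)

# Hypothetical locations; adjust to your own machine. The off-site
# copy could be a cloud-sync folder or a remote mount.
# back_up(Path("~/documents").expanduser(),
#         Path("/mnt/external/backup"),
#         Path("/mnt/offsite/backup"))
```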

Comment author: Lumifer 19 December 2013 05:24:11PM *  4 points [-]

they aren't so much more reliable that it alters best practices for important data keeping

Oh, certainly.

Safety of your data involves considerably more than the reliability of your storage devices. SSDs won't help you if your laptop gets stolen or if, say, your power supply goes berserk and fries everything within reach.

Comment author: maia 17 December 2013 04:58:03AM 1 point [-]

Check out /r/suggestalaptop?

General comments: SSDs are generally faster than magnetic drives, but often fail much sooner.

If you're not positive you want to replace it altogether: You might be able to fix your heat/slowness issues just by taking a can of compressed air to it. And you could probably buy a new battery. Replacing it might still be a better proposition overall, though...

Comment author: Document 17 December 2013 08:46:37AM *  1 point [-]

Source on SSDs failing sooner? I thought (or assumed) it was the opposite. A quick Google search turns up the headline "SSD Annual Failure Rates Around 1.5%, HDDs About 5%".

Looking further, though, I also see: "An SSD failure typically goes like this: One minute it's working, the next second it's bricked." The page goes on to say that there's a service that can reliably recover the data from a dead drive, but that seems like a privacy concern (if everything on the drive weren't logged by the NSA to begin with).

On the pro-SSD side, though, I try to keep anything important online or on an external drive anyway (for easier moving between devices). And I really like the idea of a laptop I can casually carry around without worrying about platters and heads.

Thanks for the suggestions; I may try the Reddit link later. (Edit: posted a thread here.)

Comment author: ephion 19 December 2013 01:52:18PM 1 point [-]

If you are backing up your data responsibly, the SSD failure isn't as much of an issue. And if you aren't backing up your data, then you need to take care of that before worrying about storage failure.

Comment author: sakranut 16 December 2013 12:10:49AM 11 points [-]

I decided I'd share the list of questions I try to ask myself every morning and evening. I usually spend about thirty seconds on each question, just thinking about them, though I sometimes write my answers down if I have a particularly good insight. I find they keep me pretty well-calibrated to my best self. Some are idiosyncratic, but hopefully these will be generally applicable.

A. Today, this week, this month: 1. What am I excited about? 2. What goals do I have? 3. What questions do I want to answer? 4. In what specific ways do I want to be better?

B. Yesterday, last week, last month: 5. What did I accomplish that I am proud of? 6. In what instances did I behave in a way I am proud of? 7. What did I do wrong? How will I do better? 8. What do I want to remember? What adventures did I have?

C. Generally: 9: If I'm not doing exactly what I want to be doing, why?

Comment author: [deleted] 18 December 2013 10:10:32PM 0 points [-]

What does it mean for "you" to not be doing exactly what you "want"? Do you downplay or ignore your not-conscious thought processes?

Comment author: curiousepic 16 December 2013 08:51:34PM 3 points [-]

How long have you been doing this, and have you noticed any effects?

Comment author: sakranut 16 December 2013 10:40:48PM 1 point [-]

For about a month and a half, though I forget about 25% of the time. I haven't noticed any strong effects, though I feel as if I approach the day-to-day more conscientiously and often get more out of my time.

Comment author: wadavis 17 December 2013 04:53:47PM *  0 points [-]

For a term in university I followed a similar method. Every day I would post 'Today's Greatest Achievement:' in the relevant social media of the time. There was a noticeable improvement in happiness and extra-curricular productivity as I more actively sought out novel experiences, active community roles, and academic side projects. The daily reminder led to a far more conscientious use of my time.

The combination of being reminded that I had spent all weekend playing video games, and of broadcasting to my entire social circle that my greatest achievement in the past 48 hours was a mindless video game, led to immediate behavior changes.

Comment author: shminux 16 December 2013 07:17:08PM 0 points [-]

9: If I'm not doing exactly what I want to be doing, why?

That's the hardest of them all; I'm still searching for answers.

Comment author: [deleted] 15 December 2013 03:01:57PM 7 points [-]

The quality of intelligence journalism

I have been musing over the results of Rindermann, Coyle and Becker’s survey of intelligence experts presented at the ISIR conference. Since you may well be reading a newspaper this Sunday, I thought it might interest you to show what the experts think of the coverage of intelligence in the public media. By way of explanation, the authors cast their net widely, but did some extra sampling of the German media. Readers might like to suggest their own likes and dislikes in terms of the accuracy of coverage. I will be adding more details on other issues later. In yellow is the original survey 30 years ago, in blue the current 2013 survey.

According to the survey of experts Steve Sailer outperforms everyone else.

Comment author: Anatoly_Vorobey 15 December 2013 01:16:25PM 5 points [-]

What we actually know about mirror neurons.

Wow. I did not expect my background understanding of what is known about mirror neurons to have been so influenced by hype.

Comment author: NancyLebovitz 15 December 2013 02:28:15PM 3 points [-]

Identical twins aren't perfectly identical

That there are differences between identical twins is known, but the article goes into detail about the types of difference, including effects which are in play before birth.

Comment author: lukeprog 15 December 2013 04:11:38AM 2 points [-]

Many of the leaders in the field of AI are no longer writing programs themselves: They don't waste their time debugging miles of code; they just sit around thinking about this and that with the aid of the new [CS-specific] concepts. They've become... philosophers! The topics they work on are strangely familiar (to a philosopher) but recast in novel terms.

Dennett (1982)

Comment author: Caspian 15 December 2013 03:32:52AM 1 point [-]

This story, where they treated and apparently cured someone's cancer by taking some of his immune system cells, modifying them, and putting them back, looks pretty important.

cancer treatment link

Comment author: CellBioGuy 17 December 2013 03:45:09AM *  3 points [-]

Found the actual papers the coverage is based on.

How it was done: removing T cells (the cells which directly kill body cells infected with viruses, unlike B cells, which secrete antibody proteins) and using replication-incapable viruses to insert a chimeric gene composed of part of a mouse antibody against human B-cell antigens, part of the human T-cell receptor that activates the T cell when it binds to something, and an extra activation domain to make the T-cell activation and proliferation particularly strong. Cells were reinjected, and they proliferated over 1000-fold, killed off all the cancerous leukemia cells that could be detected in most patients, and the T cells are sticking around as a permanent part of the patients' immune systems. Relapse rates have been pretty low (but not zero).

This type of cancer (B-cell originating leukemia) is uniquely well suited to this kind of intervention, for two reasons. One, there is an antigen on B cells and B-cell derived cancers that can be targeted without destroying anything else important in the body other than normal B cells. Two, since the modded T cells destroy both normal B cells carrying this antigen and the cancerous B cells, the patients have a permanent lack of antibodies after treatment, which ensures their immune systems have a hard time reacting against the modified receptors present on the modded T cells -- something that has been a problem in other studies. Fortunately people can live without B cells if they are careful - it's living without T cells that you cannot do. They also suspect that pre-treating with chemotherapy greatly helped these immune cells go after the weakened cancer cell population.

You can repeat this with T-cells tuned against any protein you want, but you had better watch out for autoimmune effects or the patient's immune system going after the chimeric protein you add and eliminating the modded population. And watch out ten years down the line for any T-cell originating lymphomas derived from wonky viral insertion sites in the modded cells - though these days there are 'gentler' viral agents than in the old days with a far lower rate of such problems, and CRISPR might make modding cells in a dish even more reliable soon.

Another thing in the toolkit. No silver bullets. Still pretty darn cool.

Comment author: knb 13 December 2013 07:05:47AM *  12 points [-]

Gregory Cochran has written something on aging. I'll post some selected parts, but you should read the whole thing, which is pretty short.

Theoretical biology makes it quite clear that individuals ought to age. Every organism faces tradeoffs between reproduction and repair. In a world with hazards, such that every individual has a decreasing chance of survival over time, the force of natural selection decreases with increasing age. This means that perfect repair has a finite value, and organisms that skimp on repair and instead apply those resources to increased reproduction will have a greater reproductive rate – and so will win out. Creatures in which there is no distinction between soma and germ line, such as prokaryotes, cannot make such tradeoffs between repair and reproduction – and apparently do not age. Which should be a hint.

...

In practice, this means that animals that face low exogenous hazards tend to age more slowly. Turtles live a long time. Porcupines live a good deal longer than other rodents. [...] Organisms whose reproductive output increases strongly with time, like sturgeons or trees, tend to live longer. The third way of looking at things is thermodynamics. Is aging inevitable? Certainly not. As long as you have an external source of free energy, you can reduce entropy with enthalpy.

...

In principle there is no reason why people couldn't live to be a billion years old, although that might entail some major modifications (and an extremely cautious lifestyle). The third way of looking at things trumps the other two. People age, and evolutionary theory indicates that natural selection won’t produce ageless organisms, at least if their germ cells and body are distinct - but we could make it happen.

This might take a lot of work. If so, don't count on seeing effective immortality any time soon, because society doesn't put much effort into it. In part, this is because the powers that be don't understand the points I just made.

Nothing entirely new to me here, but it's always good to see another scientist come out in favor of aging research. Also, note that the Latin text at the top of Cochran's website is omnes vulnerant, ultima necat, which means approximately, "All wound, the last one kills" (traditionally said of the hours on a sundial).

Comment author: JQuinton 12 December 2013 09:01:26PM 2 points [-]

I recently read a blog post claiming that alcohol consumption can increase testosterone levels up to 5 hours after intake:

Scientists recently discovered, and I am not making this up, that consuming a drink containing grain alcohol (like Tucker Max’s “Tucker Death Mix”) raised both free and total testosterone for five hours post workout, whereas those who did not consume the frat boy rapist punch had their test levels fall below baseline. Happily, the alcohol had no effect on cortisol or estradiol levels, so the dudes in the study were just floating in a sea of dying brain cells and testosterone-fueled awesomeness (Vingren).

How much is enough to get the nearly 100% boost in testosterone postworkout science has recorded? It depends on your bodyweight. For matters of convenience and exigency, I decided to make a little chart for you guys to give you the proper dosage to spike your test levels properly using the study’s 1.09mg/kg bodyweight ratio organized by weight class, as this is after all an article aimed at serious lifters. For the Oly guys and IPF/USAPL (/sadfaceissad) among you, these are the weight classes that existed before the IOC decided that you guys couldn’t hang with the old school lifters.

How the fucking guys in the study made it home is a mystery- they sure as hell didn’t drive, and if they did, they didn’t live, because they slammed that shit in 10 minutes. I can drink with the best of them, but I’ve never faced half a liter of vodka in ten minutes- that’s some Decline of Western Civilization style drinking, and I’m not sure I can hang with the likes of 1980s hair metal bands.

I'm still not going to drink copious amounts of alcohol after a workout...

Comment author: ephion 18 December 2013 10:00:45PM 0 points [-]

A glass of wine (or two (or three)) or a beer after a workout has noticeably improved how I feel the next day. I didn't believe this post either, but it appears to have panned out.

Comment author: tgb 15 December 2013 04:39:20PM 1 point [-]

As usual, examine.com has some information related to this.

Comment author: hesperidia 11 December 2013 05:57:06PM 1 point [-]

Scientology uses semantic stopsigns:

http://www.garloff.de/kurt/sekten/mind1.html

Loaded Language is a term coined by Dr. Robert Jay Lifton, a psychiatrist who did extensive studies on the thought reform techniques used by the communists on Chinese prisoners. Of all the cults in existence today, Scientology has one of the most complex systems of loaded language. If an outsider were to hear two Scientologists conversing, they probably wouldn't be able to understand what was being said. Loaded language is words or catch phrases that short-circuit a person's ability to think. For instance, all information that is opposed to Scientology, such as what I am writing here, is labelled by Scientologists as "entheta" (enturbulated theta - "enturbulated" meaning chaotic, confused and "theta" being the Scientology term for spirit). Thus, if a Scientologist is confronted with some information that opposes Scientology, the word "entheta" immediately comes into his mind and he/she will not examine the information and think critically about it because the word "entheta" has short-circuited the person's ability to do so. This is just one example of many, many Scientology terms.

Comment author: John_Maxwell_IV 13 December 2013 07:05:27AM 1 point [-]

The next step is TR-0 "bullbaiting" where the partner says things to the indoctrinee to get them to react. This is called finding a person's "buttons". When the person does react, he is told "flunk" and what he did to flunk and then the phrase that got him to react is repeated until the person no longer reacts. This is very effective as a behavior control method to get the person to blank out when someone starts saying negative things about Scientology.

Hm, this actually sounds like it could be useful...

I wonder if it would be valuable to get partway in to Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.

Comment author: hesperidia 17 December 2013 08:02:00PM 0 points [-]

Hm, this actually sounds like it could be useful...

A therapist specializing in exposure therapy will be more useful than a cult for this purpose.

Comment author: John_Maxwell_IV 20 December 2013 06:20:13AM 0 points [-]

And also more expensive. But yeah, there are easier ways to get it than going into Scientology.

Comment author: Dorikka 13 December 2013 08:52:09PM 1 point [-]

Relevant, in case you hadn't already seen it.

Comment author: ChristianKl 13 December 2013 06:09:54PM 2 points [-]

I wonder if it would be valuable to get partway in to Scientology, then quit, just to observe the power of peer pressure, groupthink, and whatnot.

Part of the Scientology program involves sharing personal secrets. If you quit, they can use those against you. Scientology is set up in a way that makes it hard to quit.

Comment author: Viliam_Bur 13 December 2013 10:57:49PM 2 points [-]

Part of the Scientology program involves sharing personal secrets.

More precisely, sharing personal secrets while connected to an amateur lie detector. And the secrets are documented on paper and stored in archives of the organization. It's optimized for blackmailing former members.

Comment author: Nornagest 13 December 2013 06:15:54PM *  3 points [-]

A lot of people still do, though. Last time I looked into this, the retention rate (reckoned between the first serious [i.e. paid] Scientology courses and active participation a couple years later) was about 10%.

Comment author: ChristianKl 13 December 2013 07:51:44PM 4 points [-]

It's not a question of whether they do leave, but whether they do come out ahead.

Scientology courses aren't cheap. If you are going to invest money into training, I would prefer to buy training from an organisation that makes leaving easy instead of making it painful.

Comment author: Nornagest 13 December 2013 08:00:44PM *  1 point [-]

Oh, I'm pretty confident they don't. But if you had strong reasons for joining and leaving Scientology other than what Scientologists euphemistically call "tech", then in the face of those base rates it seems unlikely to me that they'd manage to suck you in for real.

There are probably safer places to see groupthink in action, though.

Comment author: RolfAndreassen 11 December 2013 07:37:34PM 3 points [-]

Interesting. Reminds me of Orwell's "crimestop":

Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.

Comment author: NancyLebovitz 11 December 2013 06:35:58AM 3 points [-]

A monkey teaching a human how to crush leaves

Mirror neurons? Why does the monkey care about whether a human can crush leaves?

Comment author: CellBioGuy 13 December 2013 04:50:02PM 3 points [-]

Why does a human care about whether a monkey cares about whether a human can crush leaves? For things like us primates, sometimes these things are their own reward.

Comment author: ChristianKl 13 December 2013 03:19:42PM 1 point [-]

It might simply be an interesting activity to teach a human how to crush leaves.

Comment author: tut 12 December 2013 02:55:21PM 0 points [-]

Do the monkeys ever crush leaves like that for themselves? Otherwise I think it is more likely that the monkey is giving him a gift, hoping that he will reciprocate by giving it a treat, or maybe just petting it. The leaves just happen to be what the monkey has most easily available at the time.

Comment author: NancyLebovitz 12 December 2013 03:49:02PM 2 points [-]

The monkey was folding the man's fingers, not just handing him leaves.

If the monkey is giving a gift to incur a sense of obligation, it might be even more complex behavior than teaching.

Comment author: tut 13 December 2013 02:19:27PM *  1 point [-]

Yes. What I was thinking was that people had previously given the monkeys treats by putting something in the monkey's hand and closing its fingers, so the monkey is more or less imitating something that it wants the human to do.

It is not that teaching is too complex for a monkey; it is that I don't see what exactly it's teaching, but I feel that I recognize what the monkey is doing as the "you keep this" gesture.

Comment author: CAE_Jones 11 December 2013 03:55:43PM 0 points [-]

I've heard it said that, when cats present a kill to their owners, it's a form of trying to teach the owner to hunt. I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.

If so, I'd predict this happens more often in more social animals. So yes to lions and monkeys, no to bears and hamsters. This would suggest we'd see similar behavior from dogs, though, and I can't think of examples of dogs trying to teach humans any skills. This is particularly damning for my hypothesis, since dogs are known for their cooperation with humans.

Comment author: passive_fist 12 December 2013 09:23:41PM *  0 points [-]

I can only assume that some mammals will treat animals from other species as part of their tribe/pack/pride/etc if they get along well enough.

It's hard for me to imagine how this wouldn't be the case. It is a highly non-trivial sensory/processing problem for a cat to look at another cat and think "This creature is a cat, just like I am a cat, therefore we should take care of each other" but, at the same time, to look at a human and think "This creature is a human, it is not like me, therefore it does not share my interests."

This problem is especially acute for cats, because cats don't really form tight-knit packs, and they have less available processing power.

I'd like to see some more research on the psychology of pack behavior and how/why animals cooperate with each other though.

Comment author: NancyLebovitz 11 December 2013 05:29:56PM *  2 points [-]

Sheep-herding rabbit-- included because it's an amazing video and who could resist, and because it's at least an example of learning from dogs.

As for your generalization, maybe the important thing is to look at species which have to teach their young. I'm not sure how much dogs teach puppies.

Dog teaches puppy to use stairs

Comment author: Lumifer 11 December 2013 05:32:17PM 1 point [-]

Your rabbit link is broken.

Comment author: NancyLebovitz 11 December 2013 05:44:25PM 0 points [-]

Fixed now.

Comment author: Emile 11 December 2013 07:33:06AM 5 points [-]

Because enjoying teaching useful stuff to people you get along with is a trait that got selected for?

Comment author: NancyLebovitz 11 December 2013 06:12:27AM 3 points [-]

Finding food in foreign grocery stores, or finding out that reality has fewer joints than you might think.

From the comments:

Making sense of unfamiliar legal systems

This insight also leads to a helpful lesson about what "having an open mind to a different culture" really means. At bottom, it means having faith in the people who subscribe to the culture -- faith that these people are motivated by the same forces as we are, that they are not stupid, irrational or innately predisposed to a certain temperament, and that whatever they are doing will make sense once we understand the entire circumstance.

Comment author: Username 11 December 2013 04:31:44AM *  10 points [-]

Are there any translation efforts in academia? It bothers me that there may be huge corpuses of knowledge that are inaccessible to most scientists or researchers simply because they don't speak, say, Spanish, Mandarin, or Hindi. The current solution to this problem seems to be 'everyone learn English', which seems to do OK in the hard sciences. But I fear there may be a huge missed opportunity in the social sciences, especially because Americans are WEIRD and not necessarily psychologically or behaviorally representative of the world population. (Link is to an article; link to the cited paper here: pdf)

Comment author: Douglas_Knight 13 December 2013 05:33:24PM 4 points [-]

If a hypothetical bothers you, maybe you should hold off proposing solutions and instead investigate whether it is a real problem.

Comment author: gwern 13 December 2013 06:14:34PM *  4 points [-]

I'm not sure losing the non-English literature is a big problem. A lot of foreign research is really bad. A little demonstration from 5 days ago: I criticized a Chinese study on moxibustion https://plus.google.com/103530621949492999968/posts/TisYM64ckLM

This was translated into / written in English and published in a peer-reviewed journal (Neural Regeneration Research). And it's complete crap.

Of course there is very bad research published by the West on alternative medicine too, but as the links I provide show, Chinese research is systematically and generally of very low quality. If China cannot produce good research, what can we expect of other countries?

Comment author: Douglas_Knight 13 December 2013 07:57:15PM 2 points [-]

The language that I think most plausibly contains a disconnected scientific literature is Japanese.

Comment author: ChristianKl 13 December 2013 03:24:26PM 0 points [-]

If you know English and Mandarin, you might make an academic career out of writing meta-analyses of topics discussed in Mandarin research papers.

Comment author: Barry_Cotter 14 December 2013 09:02:11AM 2 points [-]

I am not professionally involved in these fields, but I have read that among those who are, there is a very jaundiced opinion of Chinese and Indian scientific research. Apparently a good heuristic is to completely ignore their publications unless at least one of the following holds: at least one foreign co-author, an author who did their doctorate in the first world, or an institution or author with a significant reputation. Living in China and having some minimal experience with the Chinese attitude to plagiarism/copying/research makes this seem plausible. I doubt anyone's missing anything by ignoring scientific articles published in Mandarin. I make no such claims for the social sciences.

Comment author: sixes_and_sevens 11 December 2013 12:44:56PM 8 points [-]

The plural of "corpus" is "corpora". I don't say this to be pedantic, but because the word is quite lovely, and deserves to be used more.

Comment author: Metus 11 December 2013 06:40:09AM 2 points [-]

Some time ago someone linked a paper indicating that there are benefits to the fragmentation of academia by language barriers, as fewer people are exposed to some kind of dominant view, allowing them to come up with new ideas. One cited example was anthropology, which had a Russian and an Anglosphere tradition.

I'd assume there not to be any major translation efforts as being a translator isn't as effective as publishing something of your own by far.

Comment author: NancyLebovitz 11 December 2013 02:12:23PM 1 point [-]

The Body Electric mentioned that the Soviets were ahead of the West in studying electrical fields in biology because (not sure of the date -- sometime before the seventies) electricity sounded too much like elan vital to Westerners.

Comment author: Douglas_Knight 11 December 2013 06:48:39PM 0 points [-]

Which Body Electric? I don't see it in Becker and Selden, but maybe I don't know what to look for.

Comment author: NancyLebovitz 11 December 2013 07:53:03PM *  0 points [-]

Possibly this Body Electric. It's at least about the right subject, but I'd have sworn I'd read it much earlier than 1998, and my copy (buried somewhere) probably had a purple cover.

The cover on the hardcover looks more familiar, and at least it's from 1985.

Wikipedia makes it sound like the right book.

Where were you searching? You had the authors right.

Comment author: Douglas_Knight 11 December 2013 09:33:09PM *  0 points [-]

I looked at that book on google books. I searched for "Soviet," "elan," etc, and did not see the story you mentioned.

Added: Amazon says that the book uses these words a lot more than google says, but I didn't look at many hits.

Comment author: byrnema 11 December 2013 04:03:13PM -1 points [-]

That's interesting. I read your comment out of context and didn't know you were making a point about the language. I agreed that I don't like thinking about electricity in animals (or, more strongly, any coordinated magnetic phenomena, etc.) because of this association. There is a similarity in the sounds ("electrical" and "elan vital"), but the concepts are also close in concept space... perhaps the Soviets lacked this ugh field altogether.

Comment author: NancyLebovitz 11 December 2013 05:21:48PM 0 points [-]

I was using "sounded like" metaphorically. I assume they knew the difference in meaning, but were affected by the similarity of concepts and worry about their reputations.

I guessed that the Soviets were more willing to do the research because Marxism was kind of like weird science, so they were willing to look into weird science in general. However, this is just a guess. A more general hypothesis is that new institutions are more willing to try new things.

Comment author: Viliam_Bur 11 December 2013 09:44:37AM 4 points [-]

being a translator isn't as effective as publishing something of your own by far.

Publishing your own scientific paper brings you more rewards, but translating another person's article requires less time and less scientific skill (just enough to understand the vocabulary and follow the arguments).

If someone paid me to do it, I would probably love having a job translating scientific articles into my language. It would be much easier for me to translate a dozen articles than to create one. And if I only translated articles that passed some filter, for example those published in peer-reviewed journals, I could probably translate the output of twenty or fifty scientists.

Comment author: Username 11 December 2013 10:05:18AM *  2 points [-]

It seems like there could definitely be money in 'international' journals for different fields, which would aggregate credible foreign papers and translate them. Interesting that they don't seem to exist.

Comment author: RichardKennaway 12 December 2013 10:21:13AM 2 points [-]

How effective would it be to use human expertise to translate just the contents pages of journals, with links to Google Translate for the bodies of the papers? Or perhaps use humans to also translate the abstracts?

Does anything like this exist already?

Comment author: satt 13 December 2013 01:27:25AM 1 point [-]

Idea that popped into my head: it might be straightforward to make a frontend for the arXiv that adds a "Translate this into" drop-down list to every paper's summary page. (Using the list could redirect the user to Google Translate, with the URL for the PDF automatically fed into the translator.) As far as I know no one has done this but I could be wrong.
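The redirect idea can be sketched in a few lines of Python. This is only an illustration, not a working service: the function name, the arXiv ID, and the exact translate.google.com query parameters are my assumptions, and whether Google Translate will actually render a PDF fed to it this way is not guaranteed.

```python
from urllib.parse import urlencode

def translate_link(arxiv_id: str, target_lang: str = "en") -> str:
    """Build a hypothetical Google Translate redirect URL for an
    arXiv paper's PDF. Assumes the 'u=' parameter of the legacy
    translate.google.com endpoint accepts an arbitrary document URL."""
    pdf_url = f"https://arxiv.org/pdf/{arxiv_id}"
    # sl=auto lets the service detect the source language.
    query = urlencode({"sl": "auto", "tl": target_lang, "u": pdf_url})
    return f"https://translate.google.com/translate?{query}"

# The drop-down list on the frontend would just call this with the
# user's chosen language and redirect the browser to the result.
print(translate_link("1312.0001", "fr"))
```

The frontend itself would then be little more than a thin proxy over arXiv summary pages that injects one such link per language.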

Comment author: Metus 11 December 2013 02:39:09PM 1 point [-]

This chain is so interesting. As a grad student I could translate some papers and make some decent money in such a hypothetical regime.

Comment author: ESRogs 11 December 2013 03:52:30AM 13 points [-]

I'm expecting China to have an increasing role in global affairs over the next century. With that in mind, there are a couple of things I'm curious about:

  • Does anyone have an idea of how prevalent existential risk type ideas are in China?

  • Has anyone tried to spread LW memes there?

  • Are the LW meetups in Shanghai, etc. mostly ex-pats or also locals?

Thanks!

Comment author: Dan_Weinand 10 December 2013 08:26:57PM 8 points [-]

Any good advice on how to become kinder? This can really be classified as two related goals: 1) How can I get more enjoyment out of alleviating others' suffering and giving others happiness? 2) How can I reliably do 1 without negative emotions getting in my way (e.g. staying calm and making small nudges to persuade people rather than getting angry and trying to change people's worldview rapidly)?

Comment author: Gabriel 12 December 2013 10:01:33AM 0 points [-]

I recommend trying loving-kindness meditation.

Comment author: Dan_Weinand 12 December 2013 09:17:29PM 0 points [-]

Could you elaborate? I'm relatively familiar with and practice mindfulness meditation, but I've never heard of loving-kindness meditation.

Comment author: Gabriel 13 December 2013 05:22:03AM *  0 points [-]

This here Wikipedia page is a good summary.

It mostly boils down to simply concentrating on feeling nice towards everyone. There is some technical advice on how to turn the vague goal of 'feeling nice' into more concrete mental actions (through visualization, repeating specific phrases, focusing on positive qualities of people) and how to structure the practice as a progression of people toward whom you generate warm fuzzy feelings, in increasing order of difficulty (like starting with yourself and eventually moving on to someone you consider an enemy). Most of this can be found in the Wiki article or easily googled.

Comment author: beoShaffer 13 December 2013 12:34:34AM 0 points [-]
Comment author: byrnema 11 December 2013 04:26:23PM *  1 point [-]

I also want to learn how to be kinder. The sticking point, for me, is better prediction about what makes people feel good.

I was very ill a year ago, and at that time learned a great deal about how comforting it is to be taken care of by someone who is compassionate and knowledgeable about my condition. But for me, unless I'm very familiar with that exact situation, I have trouble anticipating what will make someone feel better.

This is also true in everyday situations. I work on figuring out how to make guests feel better in my home and how to make a host feel better when I'm the guest. (I already know that my naturally overly-analytic, overly-accommodating manner is not most effective.) I observe other people carefully, but it all seems very complex, and I consider myself a 'beginner' still learning -- far behind someone who is more natural at this.

Comment author: hesperidia 11 December 2013 06:27:32PM 2 points [-]

I have trouble anticipating what will make someone feel better.

In this kind of situation, I usually just ask, outright, "What can I do to help you?" Then I can file away the answer for the next time the same thing happens.

However, this assumes that, like me, you are in a strongly Ask culture. If the people you know are strongly Guess, you might get answers such as "Oh, it's all right, don't inconvenience yourself on my account", in which case the next best thing is probably to ask 1) people around them, or 2) the Internet.

You also need to keep your eyes out for both Ask cues and Guess cues of consent and nonconsent - some people don't want help, some people don't want your help, and some people won't tell you if you're giving them the wrong help because they don't want to hurt your feelings. This is the part I get hung up on.

Comment author: TheOtherDave 11 December 2013 07:28:50PM 3 points [-]

The "keep your eyes out for cues" works the other way around in what we're calling a "Guess culture" as well.

That is, most natives of such a culture will be providing you with hints about what you can do to help them, while at the same time saying "Oh, it's all right, don't inconvenience yourself on my account." Paying attention to those hints and creating opportunities for them to provide such hints is sometimes useful.

(I frequently observe that "Guess culture" is a very Ask-culture way of describing Hint culture.)

Comment author: byrnema 11 December 2013 06:33:00PM 0 points [-]

Yes, I would like to improve on all of this. I haven't found the internet particularly helpful.

And I do find myself in a bewildering 'guess' culture. Asking others (though not too close to the particular situation) would probably yield the most information.

Comment author: Manfred 11 December 2013 07:21:30AM 2 points [-]

In addition to seconding nonviolent communication, cognitive behavior therapy techniques are pretty good - basically mindfulness exercises and introspection. If you want to change how you respond to certain situations (e.g. times when you get angry, or times when you have an opportunity to do something nice), you can start by practicing awareness of those situations, e.g. by keeping a pencil and piece of paper in your pocket and making a check mark when the situation occurs.

Comment author: Ben_LandauTaylor 10 December 2013 11:00:29PM 9 points [-]

I'd recommend Nonviolent Communication for this. It contains specific techniques for how to frame interactions that I've found useful for creating mutual empathy. How To Win Friends And Influence People is also a good source, although IIRC it's more focused on what to do than on how to do it. (And of course, if you read the books, you have to actually practice to get good at the techniques.)

Comment author: Dan_Weinand 11 December 2013 12:36:17AM 3 points [-]

Thanks! And out of curiosity, does the first book have much data backing it? The author's credentials seem respectable so the book would be useful even if it relied on mostly anecdotal evidence, but if it has research backing it up then I would classify it as something I need (rather than ought) to read.

Comment author: ChristianKl 13 December 2013 08:00:52PM 1 point [-]

When it comes to research about paradigms like that, it's hard to evaluate them. If you look at nonviolent communication and set up your experiment well enough, I think you will definitely find effects.

The real question isn't whether the framework does something but whether it's useful. That in turn depends on your goals.

Whether a framework helps you to successfully communicate depends a lot on cultural background of the people with whom you are interacting.

If you engage in NVC, some people with a strong sense of competition might see you as weak. If you consistently engaged in NVC in your communication on LessWrong, you might be seen as a weird outsider.

You would need an awful lot of studies to be certain about the particular tradeoff in using NVC for a particular real world situation.

I don't know of many studies that compare whether Windows is better than Linux or whether Vim is better than Emacs. Communication paradigms are similar: they are complex and difficult to compare.

Comment author: erratio 11 December 2013 10:32:05PM 2 points [-]

Thirded. The most helpful part for me was internalising the idea that even annoying/angry/etc outbursts are the result of people trying to get their needs met. It may not be a need I agree with, but it gives me better intuition for what reaction may be most effective.

Comment author: jsalvatier 11 December 2013 10:24:04PM 1 point [-]

I find NVC very intuitively compelling, and have personal anecdotal evidence that it works (though not independently of ESRogs; we go to the same class).

Comment author: Ben_LandauTaylor 11 December 2013 07:49:52PM *  5 points [-]

According to Wikipedia, there's a little research and it's been positive, but it's not the sort of research I find persuasive. I do have mountains of anecdata from myself and several friends whose opinions I trust more than my own. PM me if you want a pdf of the book.

Comment author: ESRogs 11 December 2013 05:03:32PM 2 points [-]

I would like to offer further anecdotal evidence that NVC techniques are useful for understanding your own and other people's feelings and feeling empathy toward them.

Comment author: shminux 10 December 2013 08:47:17PM 0 points [-]

What is your reason for wanting to?

Comment author: Dan_Weinand 11 December 2013 12:39:31AM 1 point [-]

I find myself happier when I act more kindly to others. In addition, lowering suffering/increasing happiness are pretty close to terminal values for me.

Comment author: shminux 11 December 2013 01:02:05AM 0 points [-]

You say

I find myself happier when I act more kindly to others.

Yet you said earlier that

How can I get more enjoyment out of alleviating others suffering and giving others happiness?

Does this mean that you feel that you do enjoy it but not "enough" in some sense and you want to enjoy it even more?

Comment author: Dan_Weinand 11 December 2013 02:16:27AM *  3 points [-]

Correct, it is enjoyable but I wish to make it more so. Hence my use of "more".

Comment author: Tuxedage 10 December 2013 07:14:32PM *  55 points [-]

At risk of attracting the wrong kind of attention, I will publicly state that I have donated $5,000 for the MIRI 2013 Winter Fundraiser. Since I'm a "new large donor", this donation will be matched 3:1, netting a cool $20,000 for MIRI.

I have decided to post this because of "Why our Kind Cannot Cooperate". I have been convinced that people donating should publicly brag about it to attract other donors, instead of remaining silent about their donation which leads to a false impression of the amount of support MIRI has.

Comment author: Adele_L 11 December 2013 09:29:50PM 8 points [-]

Would anyone else be interested in pooling donations to take advantage of the 3:1 deal?

Comment author: Tripitaka 18 December 2013 11:53:24PM 1 point [-]

I'd be interested, but only for the small sum of $100. Did anybody else take you up on that offer? Of course I'd like to verify the pool person's identity before transferring money.

Comment author: intrepidadventurer 11 December 2013 07:13:38PM 19 points [-]

This post and reading "why our kind cannot cooperate" kicked me off my ass to donate. Thanks Tuxedage for posting.

Comment author: somervta 11 December 2013 05:25:17AM 6 points [-]

You sir, are awesome.

Comment author: Brillyant 10 December 2013 10:42:35PM 2 points [-]

Interesting.

I have been convinced that people donating should publicly brag about it to attract other donors

It certainly seems to make sense for the sake of the cause for (especially large, well-informed) donors to make their donations public. The only downside seems to be a potentially conflicting signal on behalf of the giver.

instead of remaining silent about their donation which leads to a false impression of the amount of support MIRI has.

I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?

Growing up Evangelical, I was taught that we should give secretly to charities (including, mostly, the church).

I wonder why? The official Sunday School answer is so that you remain humble as the giver, etc. I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

Comment author: gwern 13 December 2013 09:12:42PM *  3 points [-]

I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

There may not be anything to explain: the early Christian church grew very slowly. Perhaps secret almsgiving simply isn't a good idea.

Comment author: Brillyant 13 December 2013 09:33:45PM 0 points [-]

Hm. Possibly. Though it does still seem to be a rather popular convention in churches today to adopt an interpretation favoring secret offerings.

I would imagine popular interpretations of scriptures on giving would evolve based on the goals of the church (to get $$$), and be kept in check only by being believable enough to the member congregations.

Tithing seems to work for the church, so lots of churches resurrect it from the OT and really shaky exegesis and make it a part of the rules. If tithing didn't work for the church, they could easily make it go away in the same way they get rid of tons of outdated stuff from the OT (and the NT).

Secret offerings seems similar to me. I'd imagine they could make the commands for secret giving go away with some simple hermeneutical waves of the hand if it didn't benefit them.

Comment author: ChristianKl 13 December 2013 08:02:45PM *  3 points [-]

I wonder if there is some other mechanism whereby it made sense for Christians to propagate that concept (secret giving) among followers?

This gives the church an information advantage. Information is power. It gives them the opportunity to make it seem like everyone is donating less than their neighbors.

Comment author: Brillyant 13 December 2013 09:01:35PM 0 points [-]

Ah. So the leaders can give the ongoing message to "give generously" to a group and, as long as the giving data is kept secret and no one ever speaks to anyone else about how much they gave, each member will feel compelled to continue to give more in an effort to (a) "please God" and (b) gain favor in the eyes of the leaders by keeping up with, or outgiving, the other members. Is this what you are saying? If not, can you elaborate?

Comment author: ChristianKl 13 December 2013 09:56:32PM 3 points [-]

Look at Mormons. They have a rule that you have to donate 10% of your income. If you don't, then you aren't pleasing god and god might punish you.

In reality the average Mormon doesn't donate 10% but might feel guilty for not doing so. If someone who donates 7% knew that they donate above average, they would feel less guilty about not meeting the goal of donating 10%.

Comment author: Brillyant 13 December 2013 10:39:01PM 0 points [-]

Sure, but why 10%? Why not 15%? Or 20%?

It is possible that they are setting the bar too low. You might have many people who would have given 30% had the command been for 30% rather than 10%.

Comment author: drethelin 13 December 2013 11:08:40PM 1 point [-]
Comment author: ChristianKl 13 December 2013 10:48:56PM 4 points [-]

It is possible that they are setting the bar too low.

Yes, it is. Choosing that particular number might not be optimal. But there is a cost to setting the number too high. If you set it too high and people don't think they can reach that standard, they might not even try.

Comment author: Brillyant 14 December 2013 12:06:26AM 0 points [-]

Right.

I'd guess 10% is not an arbitrary number, but rather is a sort of market equilibrium that happens to be supportable by a certain interpretation of OT scripture. It might have just as well been 3% or 7% or 12% as these numbers are all pretty significant in the OT, and could have been used by leadership to impose that % on laypeople.

In any case, in my experience within the church, there are tithes... AND then there are offerings which include numerous different cause to give to on any given Sunday. It was often stated these causes (building projects, missions outreaches, etc.) were in addition to your tithe.

It is funny to me... it is almost like the reverse of a compensation plan you'd build for a team of commissioned salespeople. Instead of trying to optimize the plan to best incentivize sales performance by motivating your salespeople to sell, the church may have evolved its doctrines and practices on giving to optimize for collecting revenue by motivating its members to give. Ha.

Comment author: gjm 14 December 2013 12:28:15AM 0 points [-]

It might have just as well been 3% or 7% or 12% as these numbers are all pretty significant in the OT

This is of course no argument against anything substantive you're saying, but while the numbers 3,7,12 are certainly all significant in the OT the idea of percentage surely wasn't. I can see 1/3, or 1/7, or 1/12, though.

Comment author: Brillyant 14 December 2013 02:17:16AM 0 points [-]

Good point. Though, from my recall, there isn't much basis in the OT for the modern day concept of tithing at all, percentage or otherwise. Christianity points to verses about giving 1/10th of your crops to the priest as the basis.

If they really wanted to change the rules and up it to 1/7th, or 12%, or anything they want, they could come up with some new basis using fancy hermeneutics.

This is sort of what is happening right now with homosexuality. Many churches are changing their views. They are justifying that by reinterpreting the verses they've used to condemn it in the past.

In fact, you can pretty much get the Bible to support any position or far-fetched belief you'd like. You only need a few verses... and it's a big book.

This is one of my favorites.

Comment author: drethelin 13 December 2013 08:06:38PM 1 point [-]

or that "Christians" donate a lot when it's really just a few of them.

Comment author: Tuxedage 10 December 2013 10:53:15PM 6 points [-]

I'm not sure this is true. Doesn't MIRI publish its total receipts? Don't most organizations that ask for donations?

Total receipts may not be representative. There's a difference between MIRI getting funding from one person with a lot of money and large numbers of people donating small(er) amounts. I was hoping this post would serve as a reminder that many of us on LW do care about donating, rather than just a few rather rich people like Peter Thiel or Jaan Tallinn.

Also, I suspect scope neglect may be at play -- it's difficult, on an emotional level, to tell the difference between $1 million worth of donations, ten million, or a hundred million. Seeing each donation that added up to that amount may help.

Comment author: Viliam_Bur 11 December 2013 09:15:02AM 2 points [-]

Seeing each donation that led to adding up to that amount may help.

Yes, because it would show how many people donated. Number of people = power, at least in our brains.

The difference between one person donating 100 000, and one person donating 50 000 plus ten people donating 5 000 each, is that in the latter case your team has eleven people. It is the same amount of money, but emotionally it feels better. Probably it has other advantages (such as smaller dependence on the whims of a single person), but maybe I am just rationalizing here.

Comment author: [deleted] 10 December 2013 06:25:57PM 1 point [-]

Nicholas Agar has a new book. I read Humanity's End and may even read this...eventually.

http://www.amazon.com/gp/aw/d/0262026635/ref=mp_s_a_1_3?qid=1386699492&sr=8-3

Comment author: mwengler 10 December 2013 04:36:00PM 2 points [-]

Red Queen hypothesis means that humans are probably the latest step in a long sequence of fast (on evolutionary time scale) value changes. So does Coherent Extrapolated Volition (CEV) intend to

1) extrapolate all the future co-evolutionary battles humans would have and predict the values of the terminal species as our CEV, or is it intended somehow to

2) freeze the values humans have at the point in time we develop FAI and build a cocoon around humanity which will let it keep this (nearly) arbitrarily picked point in its evolution forever?

If it is 1), it seems the AI doesn't have much of a job to do. Presumably intervene against existential risks to humanity and its successor species, and perhaps keep extremely reliable stocks for repopulating if humanity or its successor still manages to kill itself. Maybe even, in a less extreme interpretation, FAI does what is required to keep humanity and its successors as the pinnacle species, stealing adaptations from unrelated species that actually manage to threaten us and our successors, so we sort of have 1'), which is to extrapolate to a future where the pinnacle species is always a descendant of ours.

If 2), it would seem FAI could simply build a sim that freezes in place the evolutionary pressures that brought us to this point as well as freezing into place our own current state. And then run that sim forever; the sim simply removes genetic mutation and perhaps actively rebalances to work against any natural selection which is currently going on.

We could have BOTH futures: those who prefer 2) go live in the sim that they have always thought was indistinguishable from reality anyway, and those who prefer 1) stay here in the real world and play out their part in evolving whatever comes next. Indeed, the sim of 2) might serve as a form of storage/insurance against existential threats, a source from which human history can be restarted from its state at FAI year zero whenever needed.

Does CEV crash into the Red Queen hypothesis in interesting ways? Could a human value be to roll the dice on our own values in hopes of developing an even more effective species?

Comment author: DanielLC 10 December 2013 08:22:34PM 0 points [-]

Neither. CEV is supposed to look at what humanity would want if they were smarter, faster, and more the people they wished they were. It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.

Comment author: mwengler 11 December 2013 12:32:22AM 0 points [-]

It finds the end of the evolution of how we change if we are controlled by ourselves, not by the blind idiot god.

Well, considering that we, at the point we create the FAI, are completely a product of the blind idiot god, and so our CEV is some extrapolation of where that blind idiot had gotten us by the time we finally got the FAI going, it seems very difficult to me to say that the blind idiot god has at all been taken out of the picture.

I guess the idea is that by US being smart and the FAI being even smarter, we are able to whittle down our values until we get rid of the froth -- dopey things like being a virgin when you are married and never telling a lie -- move through the 6 stages of morality to the top one, and then the FAI discovers the next 6 or 12 stages and runs sims or something to cut even more foam and crust until there are only one or two really essential things left.

Of course those one or two things were still placed there by the blind idiot god. And if something other than them had been placed there by the blind idiot, CEV would have come up with something else. It does not seem there is any escaping this blind idiot. So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot, if that appearance is false?

Comment author: DanielLC 11 December 2013 07:58:45PM 0 points [-]

We are not escaping the blind idiot god in the sense of it not having any control. We are escaping in the sense that we have full control. To some extent, they overlap, but that doesn't matter. I only care about being in control, not about everything else not being in control.

Comment author: Viliam_Bur 11 December 2013 09:55:56AM 0 points [-]

It does not seem there is any escaping this blind idiot.

By luck, we got some things right. We don't have to get rid of them just because we got them by a random process.

So what is the value of a scheme whose appeal is the appearance of escaping the blind idiot if the appearance is false?

The value is in escaping the parts that harm us. Evolution made me enjoy chocolate, and evolution also made me grow old and die. I would love to have an eternal happy life. I don't see any good reason to get rid of the chocolate; although I would accept to trade it for something better.

Comment author: AlexMennen 10 December 2013 06:30:38PM 0 points [-]

CEV is supposed to refer to the values of current humans. However, this does not necessarily imply that an FAI would prevent the creation of non-human entities. I'd expect that many humans (including me) would assign some value to the existence of interesting entities with somewhat different (though not drastically different) values than ours, and the satisfaction of those values. Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.

Comment author: mwengler 10 December 2013 07:30:55PM 0 points [-]

Thus a CEV would likely assign some value to the preferences of a possible human successor species by proxy through our values.

An interesting question, is the CEV dynamic? As we spent decades or millennia in the walled gardens built for us by the FAI would the FAI be allowed to drift its own values through some dynamic process of checking with the humans within its walls to see how its values might be drifting? I had been under the impression that it would not, but that might have been my own mistake.

Comment author: AlexMennen 11 December 2013 12:25:35AM *  2 points [-]

No. CEV is the coherent extrapolation of what we-now value.

Edit: Dynamic value systems likely aren't feasible for recursively self-improving AIs, since an agent with a dynamic goal system has incentive to modify into an agent with a static goal system, as that is what would best fulfill its current goals.

Comment author: DanielLC 10 December 2013 08:23:36PM 0 points [-]

It's not dynamic. It isn't our values in the sense of what we'd prefer right now. It's what we'd prefer if we were smarter, faster, and more the people that we wished we were. In short, it's what we'd end up with if it was dynamic.

Comment author: AlexMennen 11 December 2013 12:35:49AM -1 points [-]

In short, it's what we'd end up with if it was dynamic.

The overwhelming majority of dynamic value systems do not end in CEV.

Comment author: DanielLC 11 December 2013 07:55:38PM 0 points [-]

What I mean is that if you looked at what people valued, and gave them the ability to self-modify, and somehow kept them from messing up and accidentally doing something that they didn't want to do, you'd have something like CEV but dynamic. CEV is the end result of this.

Comment author: mwengler 11 December 2013 12:17:41AM 0 points [-]

It's not dynamic. It isn't our values in the sense of what we'd prefer right now. It's what we'd prefer if we were smarter, faster, and more the people that we wished we were. In short, it's what we'd end up with if it was dynamic.

Unless the FAI freezes our current evolutionary state, at least as it involves our values, the result we would wind up with if CEV derivation were dynamic would be different from what we would end up with if it is just some extrapolation from what current humans want now.

Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.

Of course being effectively kept in a really really nice zoo by the FAI, we would not be experiencing any kind of NATURAL selection anymore, and evidence certainly suggests that our volition is to be taller, smarter, have bigger dicks and boobs, be blonder, tanner, and happier, all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures. Certainly CEV keeps us from wanting defective, crippled, and genetically diseased children, so this seems a fairly safe prediction.

It would seem that, as defined, CEV would have to be fixed at the value it was set to when the FAI was created -- that no matter how smart, how tall, how blond, how curvaceous, or how pudendous we became, we would still be constantly pruned back to the CEV of 2045 humans.

As to our values not even being optimal for our current environment fuhgedaboud our future environment, it is pretty widely recognized that we are evolved for the hunter gatherer world of 10,000 years ago, with familial groups of a few hundred, the necessity for survival of hostile reaction against outsiders, and systems which allow fear to distort in extreme ways our rational estimations of things.

I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us? Maybe it will push a few other species to become intelligent and social and let them duke it out, and have natural selection run with them. As long as they're species that our CEV doesn't feel too overly warm and fuzzy about, this shouldn't be a problem. And certainly, as a human in the walled garden, I would LOVE to be studying what evolution does beyond what it has done to us, so this would seem like a fine and fun thing for the FAI to do to keep at least my part of the CEV entertained.

Comment author: AlexMennen 11 December 2013 05:56:01PM *  2 points [-]

Even if there were some reason to think our current values were optimal for our current environment, which there is actually reason to think they are NOT, we would still have no reason to think they were optimal in a future environment.

Type error. You can evaluate the optimality of actions in an environment with respect to values. Values being optimal with respect to an environment is not a thing that makes sense. Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that's not very relevant to CEV.

all of which our zookeeper FAI should be able to move us (or our descendants) towards while carrying out necessary eugenics to keep our genome healthy in the absence of natural selection pressures.

An FAI can be far more direct than that. Think something more along the lines of "doing surgery to make our bodies work the way we want them to" than "eugenics".

I wonder if the FAI will be sad

Do not anthropomorphize an AI.

Comment author: mwengler 11 December 2013 07:27:42PM 0 points [-]

Type error. ... Unless you mean to refer to whether or not our values are optimal in this environment with respect to evolutionary fitness, in which case obviously they are not, but that's not very relevant to CEV.

You are right about the assumptions I made and I tend to agree it is erroneous.

Your post helps me refine my concern about CEV. It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative, in the sense of over-valuing humanity as it currently is and therefore undervaluing humanity as it eventually would be with further evolution and further self-modification.

Probably what drives my fear of CEV not reflecting MY values is dopey, low probability. In my case it is an aspect of "Everything that comes from organized religion is automatically stupid." To me, CEV and FAI are the modern dogma, man discovering his natural god does not exist, but deciding he can build his own. An all-loving (Friendly) all powerful (self-modifying AI after FOOM) father-figure to take care of us (totally bound by our CEV).

Of course there could be real reasons that CEV will not work. Is there any kind of existence proof for a non-trivial CEV? For the most part, values such as "lying is wrong," "stealing is wrong," and "help your neighbors" all seem like simplifying abstractions that are abandoned by the more intelligent because they are simply not flexible enough. The essence of nation-to-nation conflict is covert, illegal competition between powerful government organizations that takes place in the virtual absence of all values other than "we must prevail." I would presume a nation which refused to fight dirty at any level would be less likely to prevail, and so such high-mindedness would have no place in the future, and therefore no place in the CEV. That is, if I with normal-ish intelligence can see that most values are a simple map for how humanity should interoperate to survive, and that the map is not the territory, then an extrapolation to a MUCH smarter humanity would likely remove all the simple landmarks we have on maps suitable for our current distribution of IQ.

Then consider the value much of humanity places on accomplishment, and the understanding that coddling, keeping as pets, keeping safe, protecting, is at odds with accomplishment, and get really really smart about that and a CEV is likely to not have much in it about protecting us, even from ourselves.

So perhaps the CEV is a very sparse thing indeed, requiring only that humanity, its successors or assigns, survive. Perhaps FAI sits there not doing a whole hell of a lot that seems useful to us at our level of understanding, with its designers kicking it wondering where they went wrong.

I guess what I'm really getting at is that perhaps, when you use as much intelligence as you can to extrapolate where our values go in the long, long run, you get to the same place the blind idiot was going all along: survival. I understand many here will say no, you are missing out on the bad vs. good things in our current life, how we can cheat death but keep our taste for chocolate. Their hypothesis is that CEV has them still cheating death and keeping their taste for chocolate. I am hypothesizing that CEV might well have the juggernaut of the evolution of intelligence, and not any of the individuals or even species that are parts of that evolution, as its central value. I am not saying I know it will; what I am saying is I don't know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn't crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.

Evolution may be run by a blind idiot but it has gotten us this far. It is rare that something as obviously expensive as death would be kept in place for trivial reasons. Certainly the good news for those who hate death is that the evidence suggests lifespans are more valuable in smart species; I think we live twice as long as trends from other species would suggest we should, so maybe the optimum continues to go in that direction. But considering how increased intelligence and understanding is usually the enemy of hatred, it seems at least a possibility that needs to be considered that CEV doesn't even stop us from dying.

Comment author: AlexMennen 12 December 2013 12:30:19AM 0 points [-]

It must be that I am expecting the CEV will NOT reflect MY values. In particular, I am suggesting that the CEV will be too conservative in the sense of over-valuing humanity as it currently is and therefore undervaluing humaity as it eventually would be with further evolution, further self-modification.

CEV is supposed to value the same thing that humanity values, not value humanity itself. Since you and other humans value future slightly-nonhuman entities living worthwhile lives, CEV would assign value to them by extension.

Is there any kind of existence proof for a non-trivial CEV?

That's kind of a tricky question. Humans don't actually have utility functions, which is why the "coherent extrapolated" part is important. We don't really know of a way to extract an underlying utility function from non-utility-maximizing agents, so I guess you could say that the answer is no. On the other hand, humans are often capable of noticing when it is pointed out to them that their choices contradict each other, and, even if they don't actually change their behavior, can at least endorse some more consistent strategy, so it seems reasonable that a human, given enough intelligence, working memory, time to think, and something to point out inconsistencies, could come up with a consistent utility function that fits human preferences about as well as a utility function can. As far as I understand, that's basically what CEV is.
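To make the "noticing when choices contradict each other" step concrete, here is a toy sketch (my illustration, not anything from the CEV literature): if an agent's pairwise choices contain a cycle such as A over B, B over C, and yet C over A, then no utility ranking can fit them, and such a cycle can be found mechanically.

```python
# Toy consistency check on stated preferences. A preference relation can be
# fit by *some* utility function only if it is acyclic; this sketch finds an
# intransitive cycle via depth-first search. The example choices are made up.

def find_preference_cycle(prefs):
    """prefs: set of (x, y) pairs meaning 'x is chosen over y'.
    Returns a list of options forming a cycle, or None if the preferences
    are acyclic (i.e. consistent with some utility ranking)."""
    graph = {}
    for x, y in prefs:
        graph.setdefault(x, set()).add(y)
        graph.setdefault(y, set())

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {node: WHITE for node in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in graph[node]:
            if color[nxt] == GRAY:            # back edge: cycle found
                return stack[stack.index(nxt):]
            if color[nxt] == WHITE:
                found = dfs(nxt)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# Hypothetical contradictory choices: apple over banana, banana over cherry,
# yet cherry over apple -- no utility function can rank all three this way.
print(find_preference_cycle({("apple", "banana"),
                             ("banana", "cherry"),
                             ("cherry", "apple")}))
```

An actual extrapolation procedure would of course need far more than cycle detection, but acyclicity is the minimal consistency a utility function requires.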

CEV is likely to not have much in it about protecting us, even from ourselves.

Do you want to die? No? Then humanity's CEV would assign negative utility to you dying, so an AI maximizing it would protect you from dying.

I am not saying I know it will, what I am saying is I don't know why everybody else has already decided they can safely predict that even a human 100X or 1000X as smart as they are doesn't crush them the way we crush a bullfrog when his stream is in the way of our road project or shopping mall.

If some attempt to extract a CEV has a result that is horrible for us, that means that our method for computing the CEV was incorrect, not that CEV would be horrible to us. In the "what would a smarter version of me decide?" formulation, that smarter version of you is supposed to have the same values you do. That might be poorly defined since humans don't have coherent values, but CEV is defined as that which it would be awesome from our perspective for a strong AI to maximize, and using the utility function that a smarter version of ourselves would come up with is a proposed method for determining it.

Criticisms of the form "an AI maximizing our CEV would do bad thing X" involve a misunderstanding of the CEV concept. Criticisms of the form "no one has unambiguously specified a method of computing our CEV that would be sure to work, or even gotten close to doing so" I agree with.

Comment author: mwengler 13 December 2013 01:07:18PM 2 points [-]

My thought on CEV not actually including much individual protection followed something like this: I don't want to die. I don't want to live in a walled garden, taken care of as though I were a favored pet. Apply intelligence to that and my FAI does what for me? Mostly lets me be, since it is smart enough to realize that a policy of protecting my life winds up turning me into a favored pet. This is sort of the distinction between asking someone what they want, where you might get stories of candy and leisure, and watching them when they are happiest, where you might see them doing meaningful and difficult work and living in a healthy manner. Apply high intelligence and you are unlikely to promote candy and leisure. Ultimately, I think humanity careening along on its very own planet as the peak species, creating intelligence in the universe where previously there was none, is very possibly as good as it can get for humanity, and I think it plausible an FAI would be smart enough to realize that; we might be surprised how little it seemed to interfere. I also think it is pretty hard, working part time, to predict what something 1000X smarter than I am will conclude about human values, so I hardly imagine what I am saying is powerfully convincing to anybody who doesn't already lean that way. I'm just explaining why or how an FAI could wind up doing almost nothing, i.e. how CEV could wind up being trivially empty in a way.

The other aspect of CEV being empty I had in mind was not our own internal contradictions, although that is a good point. I was thinking of disagreement across humanity. Certainly we have seen broad ranges of valuations on human life and equality, and broadly different ideas about what respect should look like and what punishment should look like. These indicate to me that a human CEV, as opposed to a French CEV or even a Paris CEV, might well be quite sparse when designed to keep only what is reasonably common to all humanity and all potential humanity. If morality turns out to be more culturally determined than genetically, we could still have a CEV, but we would have to stop claiming it was human and admit it was just ours, and when we said FAI we meant friendly to us but unfriendly to you. The baby-eaters might turn out to be the Indonesians or the Inuit in this case.

I know how hard it is to reach consensus in a group of humans exceeding about 20, I'm just wondering how much a more rigorous process applied across billions is going to come up with.

Comment author: AlexMennen 15 December 2013 06:27:12PM 0 points [-]

I was thinking disagreement across humanity.

You can just average across each individual.

we would have to stop claiming it was human and admit it was just us

Yes, "humanity" should be interpreted as referring to the current population.

Comment author: Viliam_Bur 11 December 2013 10:07:13AM *  1 point [-]

we would still be constantly pruned back to the CEV of 2045 humans

Two connotational objections: 1) I don't think that "constantly pruned back" is an appropriate metaphor for "getting everything you have ever desired". The only thing that would prevent us from doing X would be the fact that after reflection we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are different from the MINUS 2045 humans.

I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us?

Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such estimate would most likely be a probability distribution of many different possibilities, not one specific goal.

Comment author: NancyLebovitz 11 December 2013 02:06:48PM 0 points [-]

I'm dubious about the extrapolation: the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.

Comment author: Viliam_Bur 11 December 2013 03:14:29PM 0 points [-]

I am not sure how obvious the part about there being multiple possible futures is. Most likely, the AI would not be able to model all of them. However, without AI most of them wouldn't happen anyway.

It's like saying "if I don't roll a die, I lose the chance of rolling 6", to which I add "and if you do roll the die, you still have 5/6 probability of not rolling 6". Just to make it clear that by avoiding the "spontaneous" future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.

Just because AI can model something imperfectly, it does not mean that without the AI the future would be perfect, or even better on average than with the AI.

Comment author: NancyLebovitz 11 December 2013 03:58:18PM 0 points [-]

'Unmediated' may not have been quite the word to convey what I meant.

My impression is that CEV is permanently established very early in the AI's history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.

Comment author: Douglas_Knight 10 December 2013 06:11:32PM 0 points [-]

What does the Red Queen hypothesis have to do with value change?

Comment author: mwengler 10 December 2013 07:25:19PM 2 points [-]

With random mutations and natural selection, old values can disappear and new values can appear in a population. The success of the new values depends only on their differential ability to keep their carriers producing children, not on their "friendliness" to the old values of the parents, which is what an FAI respecting CEV is meant to accomplish.

The Red Queen Hypothesis is (my paraphrase for purposes of this post) that a lot of the evolution that takes place is not adaptation to the non-living environment but to the living, and most importantly also evolving, environment in which we live, on which we feed, and which does its damnedest to feed on us. Imagine a set of smart primates who have already done pretty well against dumber animals by evolving more complex vocal and gestural signalling, and larger neocortices so that complex plans worthy of being communicated can be formulated and understood when communicated. But they lack the concept of handing off something they have with the expectation that they might get something they want even more in trade. THIS is essentially one of the hypotheses of Matt Ridley's book "The Rational Optimist": that homo sapiens is a born trader, while the other primates are not. Without trading, economies of scale and specialization do almost no good. With trading and economies of scale and specialization, a large energy investment in a super-hot brain and some wicked communication gear and skills really pays off.

Subspecies with the right mix of generosity, hypocrisy, selfishness, lust, power hunger, and self-righteousness will ultimately eat the lunch of their brethren and sistern who are too generous, too greedy to cooperate, too lustful to raise their children, or too complacent to seek out powerful mates. This is value drift brought to you by the Red Queen.

Comment author: Bayeslisk 10 December 2013 09:49:48AM 3 points [-]

I have a strong desire to practice speaking in Lojban, and I imagine that this is the second-best place to ask. Any takers?

Comment author: [deleted] 10 December 2013 06:06:44PM 0 points [-]

.i'enai

Comment author: TsviBT 10 December 2013 12:35:02AM 24 points [-]

PSA: If you want to get store-bought food (as opposed to eating out all the time or eating Soylent), but you don't want to have to go shopping all the time, check to see if there is a grocery delivery service in your area. At least where I live, the delivery fee is far outbalanced by the benefit of almost no shopping time, slightly cheaper food, and decreased cognitive load (I can just copy my previous order, and tweak it as desired).

Comment author: dougclow 13 December 2013 08:21:31AM 7 points [-]

Another benefit for me is reduced mistakes in picking items from the list.

Some people don't use online shopping because they worry pickers may make errors. My experience is that they do, but at a much lower rate than I do when I go myself. I frequently miss minor items off my list on the first circuit through the shop, and don't go back for them because it'd take too long to find them. I am also influenced by in-store advertising, product arrangements, "special" offers and tiredness into purchasing items that I would rather not buy. It's much easier to whip out a calculator to work out whether an offer really is better when you're sat calmly at your laptop than when you're exhausted towards the end of a long shopping trip.

You'd expect paid pickers to be better at it -- they do it all their working hours, I only do it once or twice a month. Also, all the services I've used (in the UK) allow you to reject any mistaken items at your door for a full refund -- which you can't do for your own mistakes. The errors pickers make are different to the ones I would make, which makes them more salient -- but they are no more inconvenient in impact on average.
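The offer arithmetic described above is trivial to script once you're sat at a laptop. A minimal sketch (item names and prices are made up for illustration) comparing offers on unit price:

```python
# Compare grocery offers on price per unit, cheapest first. A "special"
# multi-pack offer is only a bargain if its unit price actually beats the
# plain item's. All figures below are hypothetical.

def unit_price(price, quantity):
    """Price per single unit (e.g. per gram)."""
    return price / quantity

offers = {
    "single 500 g pack at 2.00": unit_price(2.00, 500),
    '"special" 3 x 400 g packs at 5.40': unit_price(5.40, 3 * 400),
}

# Here the "special" works out dearer per 100 g than the plain pack.
for name, per_gram in sorted(offers.items(), key=lambda kv: kv[1]):
    print(f"{name}: {per_gram * 100:.2f} per 100 g")
```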

Comment author: John_Maxwell_IV 13 December 2013 06:20:49AM *  0 points [-]

Regarding food in particular, I'm still wishing Romeo Stevens would commercialize his tasty and nutritious soylent alternative so I could buy it the same way I buy juice from the grocery store.

Comment author: Bakkot 11 December 2013 05:35:32PM 1 point [-]

For those in the community living in the south Bay Area: https://www.google.com/shopping/express/

Comment author: Metus 10 December 2013 06:08:53PM 9 points [-]

This makes me wonder: What are some simple ways to save quite some time that the average person does not think of?

Comment author: Gunnar_Zarncke 14 December 2013 10:04:57PM *  7 points [-]

Comment author: [deleted] 13 December 2013 04:03:44AM 0 points [-]

Dave Asprey claims that you can get by fine on five hours of sleep if you optimize it to spend as much time in REM and delta sleep as possible. This appeals to me more than polyphasic sleep does. Link

Also, I was intrigued when xkcd mentioned the 28-hour day, but I don't know of anyone who has maintained that schedule.

Comment author: Gunnar_Zarncke 14 December 2013 10:26:20PM 1 point [-]

There are by now some quite extensive studies about the amount of required or healthy sleep. Sleep duration is roughly normally distributed between 5 and 9 hours, and for some of those getting 5 or fewer hours of sleep this appears to be healthy:

Jane E. Ferrie, Martin J. Shipley, Francesco P. Cappuccio, Eric Brunner, Michelle A. Miller, Meena Kumari, Michael G. Marmot: A Prospective Study of Change in Sleep Duration: Associations with Mortality in the Whitehall II Cohort.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2276139/pdf/aasm.30.12.1659.pdf

So probably Dave Asprey is one of the roughly 1% for whom this is correct.
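As a rough sanity check on that figure (my assumed parameters, not the study's): if required sleep were normally distributed with mean 7 hours, and the 5-9 hour range spanned about two standard deviations either side (sd = 1 hour), the fraction genuinely needing 5 hours or less works out to roughly 2%, the same ballpark as the 1% above.

```python
# Back-of-the-envelope estimate of the fraction of short sleepers, assuming
# required sleep ~ Normal(mean=7 h, sd=1 h). Both parameters are assumptions
# chosen so that 5-9 h covers about 95% of the population.

import math

def normal_cdf(x, mean, sd):
    # Standard normal CDF expressed via the error function
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

MEAN_SLEEP_H = 7.0
SD_SLEEP_H = 1.0

fraction_short_sleepers = normal_cdf(5.0, MEAN_SLEEP_H, SD_SLEEP_H)
print(f"{fraction_short_sleepers:.1%}")  # about 2.3% under these assumptions
```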

Some improvements (or changes) may be possible for most of us, though. You can get along with less sleep if you sleep at your optimum sleep time (which differs depending on your genes, esp. the Period 3 gene) and if you fall asleep quickly.

Polyphasic sleep may significantly reduce your sleep total, but nobody seems to be able to say what the health effects are. It might be that it risks your long-term health.

Comment author: NancyLebovitz 13 December 2013 11:33:07AM 3 points [-]

Dave Asprey claims he can do well on 5 hours of sleep, and then makes the further claim that any other adult (he recommends not trying serious sleep reduction until you're past 23) can also do well on 5 hours. To judge by a fast look at the comments, rather few of his readers are trying this, let alone succeeding at it.

Do you have any information about whether Asprey's results generalize?

Comment author: [deleted] 13 December 2013 06:38:00PM 0 points [-]

Not really.

Comment author: [deleted] 13 December 2013 04:43:25PM 5 points [-]

I am under the impression that nearly anybody who talks about sleep is guilty of Generalizing from One Example.