Discuss things here if they don't deserve a post in Main or Discussion.

If a topic is worthy and receives much discussion, make a new thread for it.


I'm thinking maybe we should try to pool all LW's practical advice somewhere. Perhaps a new topic in Discussion, where you post a top-level comment like "Will n-back training make me significantly smarter?", and people can reply with 50% confidence intervals. Then we combine the responses to get the LW hivemind's view on various topics. Thoughts?

PS. Sorry for taking up the 'Recent Comments' sidebar, I don't have internet on my own computer so I have to type my comments up elsewhere and post them all at once.

0gwern12y
Why not just add those into the survey?
0Dorikka12y
Good idea -- go for it! :D

In one of the subthreads concerned with existential risk and the Great Filter, I proposed that one possible filter is that intelligent species that evolved comparatively early in their planets' lifetimes, or on planets that formed soon after their heavy elements were produced, would have a lot more fissionable material (especially uranium-235), and that this might make it much easier for them to wipe themselves out with nuclear wars. So we may have escaped the Great Filter in part by evolving late. Thinking about this more, I'm uncertain how important this sort of filtration is. I'm curious a) whether people think this could be a substantial filter and b) whether anyone is aware of discussion of this filter in the literature.

If we had had more fissionable material over the last 100 years how would that have made nuclear war more likely?

9JoshuaZ12y
If life had evolved, say, 2 billion years earlier, then there would be about 6 times as much U-235 on the planet, and most uranium ores would be around 3% U-235 rather than 0.7% U-235. This means that making nuclear weapons would be easier, since obtaining enough uranium would be a lot easier and the amount of enriching needed would go down as well. For similar reasons it would also then be easier to make plutonium in large quantities. However, the fact that one would still need some amount of enrichment means that this would still be technically difficult, just easier. And fusion bombs, which are much more effective for civilizations destroying themselves, would still be comparatively tough even with cheap fissiles.

There's another reason that this filter may not be that big a filtration event: having more U-235 around means that one can more easily construct nuclear reactors. Fermi's original pile used non-enriched uranium, so one can have a (not very efficient) uranium reactor simply from that without much work, and modern reactors can use non-enriched uranium (although that requires careful designs). But on a large scale, in such a setting, somewhat enriched uranium (compared to what we consider normal) would be much more common; functional, useful reactors can be made with percentages as low as 2% U-235, and in this setting most uranium would be closer to 3% U-235. Making nuclear reactors much easier means one has a much easier source of energy (in fact, on Earth, there's at least one documented case of such a reactor occurring naturally about 1.7 billion years ago). Similar remarks apply to nuclear rockets, which are one of the few plausible ways one can reasonably go about colonizing other planets.

So the two concerns are: a) how much more likely would it be for a civilization to actually wipe itself out in this sort of situation, and b) how much is this balanced out by the presence of a cheap energy source and an easier way to colonize other planets?
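As a rough sanity check on those numbers, here is a minimal sketch of the decay arithmetic, assuming the standard half-lives of about 704 million years for U-235 and 4,468 million years for U-238, and today's roughly 0.72% natural abundance of U-235:

```python
# Rough check of the "several times more U-235" figure, assuming
# standard half-lives (U-235 ~704 Myr, U-238 ~4468 Myr) and a
# present-day natural abundance of ~0.72% U-235.
U235_HALF_LIFE = 704.0    # million years
U238_HALF_LIFE = 4468.0   # million years
U235_TODAY = 0.0072       # fraction of natural uranium today

def u235_fraction(myr_ago):
    """Fraction of natural uranium that was U-235 `myr_ago` million years ago."""
    u235 = U235_TODAY * 2 ** (myr_ago / U235_HALF_LIFE)
    u238 = (1 - U235_TODAY) * 2 ** (myr_ago / U238_HALF_LIFE)
    return u235 / (u235 + u238)

print(u235_fraction(2000))   # ~0.037, i.e. roughly 3% vs 0.7% today
```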
7jhuffman12y
Perhaps it makes it a little more likely for a civilization to end itself, but it doesn't seem to have the potential to be a great filter. It doesn't seem that likely that even a large-scale war with fusion weapons would extinguish a species; and as you point out, there is still quite a barrier to the development of fusion weapons even with more plentiful U-235. So far in our history the proliferation of nuclear weapons seems to have discouraged wars of large scope between great powers. In fact, two great powers have not fought each other since Japan's surrender. Granted, this is a pretty small sample of time, but a race without the ability to rationally choose peace probably has little chance regardless of U-235 levels. So if there is a great filter here with species extinguishing themselves in war, more U-235 only makes it a little bit greater.
6gwern12y
What, exactly, would the increased uranium level do?

* It doesn't seem to me that it would speed up the development of an atomic bomb much, because you have to have the idea in the first place; and in our timeline, the atomic bomb followed the idea very quickly (what was it, 6 years?). The lower concentration no doubt slowed things by a few months or perhaps less than 5 years, but the histories I read didn't point to concentrating as a bottleneck but more to conceptual issues (how much do you need? how do the explosive lenses work? etc.). Nor do I see how it might speed up the general development of physics and the study of radioactivity; if Marie Curie was willing to go through tons of pitchblende to get a minute bit of radium, then uranium clearly was nowhere on her radar. Going from 0.6 to 3% won't suddenly make a Curie study uranium ore instead. The one such path would be discovering a natural uranium reactor, but how big a window is there where scientists could discover a reactor and speed up development of nuclear physics? I mean, if a scientist in the 1700s had discovered a uranium reactor, would he be able to do anything about it? Or would it just remain a curiosity, something like the Greeks and magnets?
* Nuclear proliferation is not constrained by the ability to refine ore, but more by politics; South Africa and South Korea and Libya and Iraq didn't abandon their nukes or programs because it was costing them 6x as much to refine uranium.
* Nukes wouldn't become much more effective; nukes are so colossally expensive that their yields are set according to function and accuracy of targeting. (The poorer your targeting, like Russia, the bigger your yields will be to compensate.)
0JoshuaZ12y
Well, one issue is that it becomes easier for countries to actually get nukes once the whole technology is known. One needs to start with less uranium and needs to refine it less. Regarding the Curies, while that is true, it might be that people would have noticed radioactivity earlier. And more U-235 around means more radium around also. But I agree that this probably wouldn't have had a substantial impact on when things were discovered. Given how long a gap there was between that initial discovery and the idea of an atomic bomb, even if it did speed things up it is unlikely to have impacted the development of nuclear weapons that much. Your points about proliferation and effectiveness both seem strong. Overall, this conversation makes me move my view in the other direction. That is, not only does this seem to be a weak filtration candidate, the increased ease of energy access argument seems to, if anything, push things in the other direction. Overall, this suggests that as far as the presence of U-235 is concerned, civilizations that arise on comparatively young planets should have less, not more, filtration. This is worrisome.
3gwern12y
Yes, but how much does this help? There are multiple methods available of varying sophistication/engineering complexity (thermal easy, laser hard); a factor of 6 surely helps, but any of the methods works if you're just willing to run the ore or gas through enough times.
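A back-of-the-envelope sketch of the "run it through enough times" point, using the textbook ideal-cascade relation (each stage multiplies the U-235/U-238 abundance ratio by a separation factor); the particular alpha below is an illustrative assumption, not any real machine's figure:

```python
# Stages needed ~ ln(R_product / R_feed) / ln(alpha), where R is the
# U-235/U-238 abundance ratio. A richer feed saves stages, but only a
# modest fraction of them.
import math

def stages_needed(feed_fraction, product_fraction, alpha):
    r_feed = feed_fraction / (1 - feed_fraction)
    r_prod = product_fraction / (1 - product_fraction)
    return math.log(r_prod / r_feed) / math.log(alpha)

for feed in (0.007, 0.03):   # today's ore vs the 3% scenario
    print(feed, round(stages_needed(feed, 0.90, alpha=1.3)))
    # roughly 27 stages vs 22: easier, but not qualitatively different
```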
-2JoshuaZ12y
That's a good point. So the only advantage comes from not needing as much uranium ore to start with, and since uranium ore is already easy to get, that's not a major issue.
1khafra12y
I think it fails as a filter because even a huge nuclear war wouldn't wipe out, e.g., cockroaches. Assuming "intelligent life evolves from multicellular life" is IID, with an early appearance it could happen a few times before the planet gets as old as ours. To wit: the only reason to think the dinosaurs' extinction event wasn't nuclear war is a lack of fossilized technological artifacts; and it doesn't seem to have filtered us yet.
6wedrifid12y
The only reason? The lack of creatures with appendages suitable for tool wielding or the evident brain capacity for the task doesn't come into it just a tiny bit?
2FAWS12y
Do we know that? Iguanodons, for example, have hands that don't look all that terribly far off from hands suitable for tool use; some related species that we haven't yet found in the fossil record evolving proper hands doesn't seem impossible to me.
-1wedrifid12y
I have very little idea. Last I heard the brontosaurus doesn't even exist and the triceratops is really just an immature torosaurus. That gives a ballpark for how much confidence I can have in my knowledge of the species in that era.
3JoshuaZ12y
This is incorrect. The name "brontosaurus" is incorrect. But the nomenclature correction to apatosaurus did not come with any change in our understanding of the species.
0wedrifid12y
While that which was labelled brontosaurus was later subsumed into the previously identified genus apatosaurus, the early reconstructed fossil which popularized our image of the brontosaurus was also discovered to include a head based on models of camarasaurus skulls. That, and it was supposedly forced to live in the water because it was too large to support itself on land. Basically, the 'brontosaurus' that I read about as a child is mostly bullshit. Even this much I didn't have anything but the vaguest knowledge of until I read through the Wikipedia page. As for possible tool-capable appendages or even traces of radioactive isotopes, I really have very little confidence in knowing about them. It just isn't my area of interest.
2David_Gerard11y
Wikipedia is a pretty up-to-date source on dinosaurs, with lots of avid and interested editors on the topic. (The artistic reconstructions come close to being original research, but a reconstruction tends not to be used until it's passed a gamut of severely critical and knowledgeable editors.) Remember that it's quite an active field, with new discoveries and extrapolations therefrom all the time. It surprises me slightly how much we know from what little evidence we have, and that we nevertheless do actually know quite a bit. (I have a dinosaur-mad small child who critiques the dinosaur books for kids from the library. Anything over a couple of years old is useless.)
0JoshuaZ12y
Camarasaurus is a close relative; the use of it as a model for reconstructing the skull was deliberate. (Moreover, modern data shows that it was in fact quite a good reconstruction.) The water thing did turn out to be just wrong, but that's not any different from the scale of change that has happened with a lot of dinosaurs (for example, the changing understanding of how T-Rex hunted). There have certainly been a lot of changes (although most of the brontosaurus stuff was known a very long time ago and just took a lot of time to filter through to popular culture), but none of it amounts to "brontosaurus" not existing.
0wedrifid12y
What? No it doesn't. It was found to be totally the wrong sauropod to pretend was a brontosaurus head. Did you read the line in Wikipedia backwards? (The wording could be a little more explicit; at a stretch there is ambiguity. The actual journal article is clearer.) Or did you just make that up as a plausible assumption? It should be based on the diplodocus.
0JoshuaZ12y
Hmm, now looking per your suggestion at the Wikipedia article: they emphasize the degree of difference more than I remember it turning out to be an issue. The source they are using is here (may be a paywall). I don't know enough paleontology to understand all the details of that paper. However, I suspect that to most laypeople a skull that resembles a diplodocus would be close to that of a camarasaurus, so the issue may be a function of what one means by a good reconstruction. (I suspect that many 10-year-olds could probably see the differences between a diplodocus skull and a torosaurus skull, but it would take more effort to point out the difference between diplodocus and camarasaurus.)
0wedrifid12y
I could totally tell the difference between a camarasaurus and a raptor. That's about my limit. And I know about raptors because they are cool. Also, they feature in fictional math tests. They wouldn't be able to describe the difference (or know either of those dinosaurs), but the difference when you look at a new apatosaurus compared to an old picture of a 'brontosaurus' is rather stark. I.e., the new one looks like a pussy.
2[anonymous]12y
I'm not exactly sure how much more or less common fossils are from various time periods, but I think it's fair to point out that we have very few skeletons of certain hominids that fit that description and were running around East Africa a few million years back. Which doesn't change that you are right that it is very, very unlikely that a tool-using or very clever undiscovered species (at least to the extent needed to make the argument work) existed then. But we should keep in mind just what a puny fraction of extinct species are known to us.
0Vladimir_Nesov12y
Is this really important? The crucial point is some means for accumulation of cultural knowledge, which could well be implemented via a tradition of scholarship without any support from external tools; and even failing that, ability (or just innate rationality) a couple of levels higher than human could do the trick. Given runaway evolution of intelligence, it seems like the ability to bear tools is irrelevant, and AFAIK the evolution of human intelligence wasn't caused by the faculty of tool-making (so the effect isn't strong in either direction).
7pedanterrific12y
I find this comment extremely puzzling. How do you suppose an intelligent species could go about building nuclear bombs without the ability to use tools?
1Vladimir_Nesov12y
The relevant kind of "ability to use tools" is whatever can be used, however inefficiently at the beginning, to start building stuff, if you apply the ingenuity of an international scientific community for 100000 years to the task; not appendages that a chimp-level chimp can use to sharpen sticks in an evening. You seem to underestimate the power of intelligence. This is directly analogous to AI boxing, with limitations of intelligent creatures' bodies playing the role of the box. I'd expect intelligent tortoises or horses should still be capable of bootstrapping technological civilization (if they get better than humans at rationality to sustain scientific progress in the initial absence of technological benefits, or just individually sufficiently more intelligent to get to the equivalent of the necessary culture's benefit in a lifetime).
-1JoshuaZ12y
There are a lot of species that are almost as smart as humans, and some even engage in tool use (e.g. many species of corvids). But their tool use is limited, and part of the limit appears to be their lack of useful appendages and comparatively small size. In at least some of these species, such as the New Caledonian crow, tool techniques can be passed on from one generation to the next. This sort of thing suggests that appendages matter a fair bit. (Obviously they aren't sufficient even when one is fairly smart. Elephants have an extremely flexible appendage, have culture, are pretty brainy, and don't seem to have developed any substantial tool use.)
0Vladimir_Nesov12y
Elephants or crows don't have scientific communities, so the analogy doesn't work and doesn't suggest anything about the hypothetical I discussed.
1JoshuaZ12y
Humans developed tool use well before we had anything resembling the scientific method or a scientific community. Humans had already become the dominant species on the planet 2000 years ago and had a substantial enough impact to make easily noticeable changes in the global environment. Whatever is necessary for this sort of thing, a scientific community doesn't seem to be on the list.
2Vladimir_Nesov12y
You are missing the point still. The question was whether the presence of appendages convenient for tool-making is an important factor in intelligent species' ability to build a technological civilization. In other words, whether creatures intelligent enough to build a technological civilization, but lacking an equivalent of hands, would still manage to build a technological civilization. Elephants or crows are irrelevant, as they are not smart enough. Human use of tools is irrelevant, as we do have hands. The relevant class of creatures are those that are smart and don't have hands (or similar), for example having bodies of tortoises (or worse).
0JoshuaZ12y
Hmm, I'm confused now about what you are trying to assert. You are, if I'm now parsing you correctly, asserting that a species with no tool appendage but with some version of the scientific method could reach a high tech level without tool use? If so, that doesn't seem unreasonable, but you seem to be conflating intelligence with having a scientific community. These are not at all the same thing.
1Vladimir_Nesov12y
In the situation where you have smart folks with no ability to build tools, scientific community is one useful technology they can still build, and that can dramatically improve their capability to solve the no-hands problem. For example, I wouldn't expect humans with no hands (and with hoofs, say) to develop technology if they don't get good enough at science first (and this might fail to happen at our level of rationality in the absence of technology, which would be the case in no-hands hypothetical). As an alternative, I listed sufficiently-greater individual intelligence that doesn't need augmentation by culture to solve the no-hands problem (which might have developed if no-hands humans evolved a bit more, failing to solve the no-hands problem).
0JoshuaZ12y
That sufficiently greater intelligence without hands could succeed is a supposition that seems questionable, unless one makes "sufficiently greater" so large that there is no plausible reason it would evolve. And a scientific community is very difficult to develop unless one already has certain technologies that seem to require some form of tools. A cheap and efficient method of storing information seems to be necessary. Humans accomplished that with writing. It is remotely plausible one could get such a result some other way, but it is tough to see how that could occur without the ability to use tools.
1Vladimir_Nesov12y
If creatures figure out selective breeding, one way to solve the no-hands problem would be for them to breed themselves for intelligence...
2pedanterrific12y
Would it be easier for greater-than-human intelligent nohanders to breed themselves for more intelligence or for, you know, hands?
0Vladimir_Nesov12y
(I didn't want to reply, but given the follow-up...) Since they are already intelligent, there's a road to incremental improvement. For hands, it's not even clearly possible, will take too long, and psychology will change anyway in the meantime, causing even greater value drift (which is already the greatest cost of breeding for intelligence).
0[anonymous]12y
The answer is yes.
-2JoshuaZ12y
It depends on the attitudes of the species. Non-standard appendages might be a sign of ill health. Humans are not the only species that uses a heuristic approximating "looks like a normal member of my species" as a proxy for health and general evolutionary fitness. So breeding hands might be tough, in that individuals bred that way wouldn't necessarily be able to breed easily with the other members of the population. On the other hand, breeding for intelligence doesn't have that problem. But all of this is highly speculative and to a large extent is a function of the details of what the species is like and what obvious phenotypical variation there is that can be easily traced to genetics.
0pedanterrific12y
My understanding is we're starting from the assumption that the species in question is on average far more rational (and probably more intelligent) than humanity. If creatures that can create a thriving scientific community in the total absence of technology have gotten to the point of saying "You know, things would be a lot easier if we had hands. Hey, how about selective breeding?" I don't imagine the fact that they'd likely find hands unsexy would be an issue.
1Vladimir_Nesov12y
Well, I expect educated humans could pull this off (that is, assuming development of science/rationality). An oral tradition of scholarship seems sufficient for all practical purposes, on this level of necessary detail, if reliable education is sustained, and there is a systematic process that increases quality of knowledge over time (i.e. science and/or sufficient rationality).
3JoshuaZ12y
On the whole, we have a pretty decent estimate for the intelligence levels produced by evolution. There are some potential observer bias issues (if there were another, more highly intelligent species, we'd probably be them), but even taking that into account the distribution seems clear. There's a tendency to underestimate how intelligent other species are compared to humans. This is a general problem that is even reflected in our language (look at the verbs "parrot" and "ape" compared to what controlled studies show those animals can do). While there are occasional errors of overestimation (e.g. Clever Hans), and we do have a tendency to overestimate the intelligence of pets, the general thrust in the last fifty years has been that animals are smarter than we give them credit for. So taking all this into account, we should shift our distribution of likely intelligence slightly towards the intelligent side. But even given that, it doesn't seem likely that a species would evolve to be intelligent enough to do the sort of thing you intend. Keep in mind that intelligence is really resource intensive.

At least in humans, oral traditions are not very reliable. There are only a handful of oral traditions in the world that seem to be remotely accurate. See, for example, the Cohanic Y chromosome, where to some extent an oral tradition was confirmed by genetic evidence. But even in that case there's a severe limit to the information that was conveyed (a few bits worth of data), and even that was conveyed imperfectly. An oral tradition would therefore likely need to have many more experiments repeated simply to verify that the claimed results were correct. Moreover, individuals who are not near each other would need to send messengers back and forth or would need to travel a lot. While it is possible (one could imagine messengers with Homeric memory levels keeping many scientific ideas and data sets in their heads), this doesn't seem very likely. Moreover, in order fo
2Sniffnoy12y
Link is broken, and some other text appears to have gotten folded into the URL.
0JoshuaZ12y
Thanks. Fixed.
-2wedrifid12y
Yes (as pedanterrific noted). Unless the dinosaurs were sufficiently badass that they could chew on uranium ore, enrich it internally, and launch the resultant cocktail via high-powered, targeted excretion. That is one impressive reptile. Kind of like what you would get if you upgraded a pistol shrimp to an analogous T-Rex variant. (Other alternatives include an intelligent species capable of synthesizing and excreting nano-factories from their pores.)
0Vladimir_Nesov12y
Replied to pedanterrific.
2wedrifid12y
In response to that reply I note that I gave two examples of mechanisms by which a species might launch nuclear weapons without any ability to use tools. I could come up with more if necessary, and a more intelligent (or merely different) mind could create further workarounds still. But that doesn't preclude acknowledging that the capability to use tools does give significant evidence about whether a species creates technology - particularly in what amount to our genetic kin. Lack of fossilized evidence of technological artifacts is not the only reason to believe that the extinction of the dinosaurs wasn't due to nuclear war. It is merely one of the stronger reasons.
2JoshuaZ12y
Most of your assessment seems reasonable to me. However, the claim that a lack of fossilized technological artifacts is the only reason to think the dinosaurs' extinction wasn't nuclear war seems wrong. I haven't crunched the numbers, but I suspect that a species-killing nuclear war would leave enough traces in the isotopic ratios around the planet that we'd be able to distinguish it from an asteroid impact. (The Oklo reactor mentioned earlier was discovered to a large extent due to tiny differences in expected versus observed isotope ratios.)
6gwern12y
This was actually covered in a book I read (I think it was The World Without Us). Summary: even our reactors leave clear traces that will be detectable about as long as the mass extinction event we're causing. So a civilization- and species-killing thermonuclear war would definitely be detectable by us.
5khafra12y
Fair enough. I should have said "if the dinosaurs had been intelligent, and their extinction was due to nuclear winter following a large thermonuclear exchange, the history of our own species could still look substantially similar." Although evolution might have proceeded a bit differently with higher background radiation.
2JoshuaZ12y
Phrased that way your point seems very strong. Indeed, dinosaurs died out only 65 million years ago, which isn't that long ago, especially in the context of this sort of filtration event.
0Luke_A_Somers11y
Fallout is a technological artifact.
0JoshuaZ11y
Yes, I'm not sure what your point is. Can you expand?
2Luke_A_Somers11y
What you then provided as a counterexample of other reasons to reject this theory fits within the scope of things that are missing.
0JoshuaZ11y
Ah ok. Yes, you're right, fallout should be covered for purposes of the original comment then.

I just finished reading Steven Pinker's new book, The Better Angels of Our Nature: Why Violence Has Declined. It's really good; maybe the best book I've read this year. Time and again, I was shocked to find it treating subjects of keen interest to LW, or reading like Pinker had taken some of my essays but done them way better (on terrorism, on the expanding circle, etc.); even so, I was surprised to learn new things (resource problems don't correlate well with violence?).

I initially thought I might excerpt some parts of it for a Discussion or Article, but as the quotes kept piling up, I realized that it was hopeless. Reading reviews or discussions of it is not enough; Pinker just covers too much and rebuts too many possible criticisms. It's very long, as a result, but absorbing.

When writing a comment on LessWrong, I often know exactly which criticisms people will give. I will have thought through those criticisms and checked that they're not valid, but I won't be able to answer them all in my post, because that would make my post so long that no one would read it. It seems like I've got to let people criticise me, and then shoot them down. This seems awfully inefficient; it's as if the purpose of having a discussion, rather than my simply writing a long post, is just to trick people into reading it.

9shminux12y
I suppose if you have an external blog, you can simply summarize the potential criticisms on your LW post and link to a further discussion of them elsewhere. Or you can structure your post such that it discusses them at the very end:

======
Optional reading:

In this way you get your point across first, while those interested can continue on to the detailed analysis.
4vi21maobk9vp12y
Briefly summarize expected objections and write whatever you want to write about them in a comment to your comment.
2JoshuaZ12y
One thing I do when trying to anticipate possible objections is to simply acknowledge them briefly in a parenthetical and then say something like "but these objections are weak" or "these objections have some validity but suffer from problems. Addressing them in detail would make this post too long."

There's a room open in one of the Berkeley rationalist houses: http://sfbay.craigslist.org/eby/sub/2678656916.html

If you are interested, reply via the ad for more details!

How to cryonics?

And please forgive me if this is a RTFM kind of thing.

I've been reading LW for a time, so I've been frequently exposed to the idea of cryonics. I usually push it to the back of my mind: I'm extremely pessimistic about the odds of being revived, and I'm still young, after all. But I realize this is probably me avoiding a terrible subject rather than an honest attempt to decide. So I've decided to at least figure out what getting frozen would entail.

Is there a practical primer on such an issue? For example: I'm only now entering grad school, ... (read more)

1Suryc1112y
I have essentially the same query. How exactly do I go about acquiring a cryonics insurance policy, especially when I am still in school (undergrad American university)? What if I live with my parents and they do not support cryonics? Actually, how does one go about acquiring any specific form of insurance policy?
0gwern12y
Have you tried to see what Alcor.org might say? Such a practical primer seems like the sort of thing a cryonics organization might write. (Crazy, I know...)
2quentin12y
Yeah, I didn't look hard enough. So I'll leave this here. Dear people from the future, here is what I have found so far: http://alcor.org/BecomeMember/scheduleA.html http://alcor.org/BecomeMember/sdfunding.htm Though, if anyone was in a similar position and would like to share, I'd still love to hear about it.

There was a recent LW discussion post about the phenomenon where people presented with evidence against their position end up believing their original position more strongly. The article described an experiment that found at least one way that might solve this problem, so that people presented with evidence against their position actually update correctly. Does somebody know which discussion post I'm talking about? I'm not finding it.

4Manfred12y
Was it this one?
0lukeprog12y
'Twas!
4VincentYu12y
I'm not sure about the LW discussion post, but the phenomenon that you describe closely resembles Nyhan and Reifler's 'backfire effect', which I think reached a popular audience when David McRaney wrote about it on You Are Not So Smart. ETA: Googling LW for "backfire effect" and nyhan doesn't turn up any recent post, so maybe this is not what you are looking for.
1dbaupp12y
I'm not in a position to Google easily, but "belief polarization" is another term for this, I think.
2lessdazed12y
Are you thinking of the one where people updated only to consider dangers less likely than their initial estimate? http://lesswrong.com/lw/814/interesting_article_about_optimism/
0lukeprog12y
That's not what I was thinking of, but interesting nonetheless.

For LifeHacking--instrumental rational skills--does anyone have experience getting lightweight professional advice? E.g., for clothing, hire a personal stylist to pick out some good-looking outfits for you to buy. No GQ fashion-victimhood, just some practical suggestions so that you can spend the time re-reading Pearl's Causality instead of Vogue.

The same approach--simple one-time professional advice, could apply to a variety of skills.

If anyone has tried this sort of thing, I'll be glad to learn your experience.

Is anyone writing a bot for this contest?

http://aichallenge.org/index.php

7[anonymous]12y
Sounds awesome, where did you first hear of this? Anyone interested in starting a team for this?

Gogo LessWrong team! The experience and the potential publicity will be excellent.

I'll chip in with a prize to the amount of ($1000 / team's rank in the final contest), donated to the party of your choice. Team must be identified as "LessWrong" or suchlike to be eligible.

3[anonymous]12y
This sounds like a wonderful opportunity for anyone interested to promote Lesswrong and themselves as well as give to a good cause like SIAI/other worthy charity! We should really bring this to people's attention. It also sounds like an excellent test of applied rationality.
5falenas10812y
It seems like there's a decent amount of interest. This should probably be made into a post of its own, and hopefully a promoted one, if we want an official LessWrong team. A lot of people don't check back on the open thread who would probably be interested in joining.
2[anonymous]12y
Yes, I think it should be; I think there would be some interest. Considering we have some very competent and experienced people on LessWrong and some very enthusiastic amateurs, several teams wouldn't be too bad an idea either if there were enough people. Some of the amateur LWers might be a bit intimidated by being part of the "Official LessWrong team", whereas "LessWrong Team #3" or "LessWrong amateur rationalist group" doesn't sound as bad.
1lavalamp12y
Done: http://lesswrong.com/r/discussion/lw/8ay/ai_challenge_ants/
4lavalamp12y
I'd consider joining a team thing. A LessWrong team would be cool if it, you know, wins... Currently there is not much tough competition, my bot is incredibly stupid (doesn't pay attention to the other players) and in the top 200. I know about it from the contest they held last year.
0[anonymous]12y
Is this an annual event?
0lavalamp12y
Seems to be. This is their third contest.
3Emile12y
I'd be interested in joining a team - I'm a video game programmer with an AI degree, so it's the kind of thing I should be good at (I don't have massive amounts of free time, though).
4malthrin12y
I put some thought into it, but I don't think I'll have time to. I wouldn't mind sharing my ideas with anyone who is actually doing it.
2lavalamp12y
I wouldn't mind listening to ideas...
3malthrin12y
My most important thought was to ensure that all CPU time is used. That means continuing to expand the search space in the time after your move has been submitted but before the next turn's state is received. Branches that are inconsistent with your opponent's move can be pruned once you know it.

Architecturally, several different levels of planning are necessary:

* Food harvesting and anticipating new food spawns.
* Pathfinding, with good route caching so you don't spend all your CPU here.
* Combat instances, evaluating a small region of the map with alpha/beta pruning and some pre-tuned heuristics.
* High-level strategy, allocating ants between food operations, harassment, and hive destruction.
* If you're really hardcore, a scheduling algorithm to dynamically prioritize the above calculations. I was just going to let the runtime handle that and hope for the best, though.
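A minimal sketch of the route-caching pathfinder mentioned in the list above; the map representation (a wraparound grid plus a set of water cells) is a simplifying assumption, not the contest's actual starter-kit API:

```python
# Grid pathfinding with a route cache, so repeated food/hill queries
# don't burn the per-turn CPU budget.
from collections import deque

class CachedPathfinder:
    def __init__(self, width, height, blocked):
        self.width, self.height = width, height
        self.blocked = blocked          # set of (row, col) water cells
        self.cache = {}                 # (start, goal) -> list of cells, or None

    def neighbors(self, cell):
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = ((r + dr) % self.height, (c + dc) % self.width)  # map wraps
            if nxt not in self.blocked:
                yield nxt

    def path(self, start, goal):
        """Breadth-first search; identical queries are answered from the cache."""
        key = (start, goal)
        if key in self.cache:
            return self.cache[key]
        parent = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                route = []
                while cell is not None:
                    route.append(cell)
                    cell = parent[cell]
                route.reverse()
                self.cache[key] = route
                return route
            for nxt in self.neighbors(cell):
                if nxt not in parent:
                    parent[nxt] = cell
                    queue.append(nxt)
        self.cache[key] = None          # unreachable
        return None
```

In a real bot the interesting bookkeeping is cache invalidation: entries that cross newly discovered water would have to be dropped as the fog of war lifts.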

I don't know much about machine learning, but wouldn't it be possible to use machine learning to get a machine to optimize your diet, exercise, sleep patterns, behaviour, etc.? Perhaps it generates a list of proposed daily routines, you follow one and report back some stats about yourself like weight, blood pressure, mood, digit span, etc.. It then takes these and uses them to figure out what parts of what daily routines do what. If it suspects eating cinnamon decreases your blood pressure, it makes you eat cinnamon so you can tell it whether it worked. Th... (read more)
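One way to make that loop concrete is to treat each candidate daily routine as an arm of a bandit, with a single self-reported score (mood, blood pressure improvement, etc.) as the reward; this is only a sketch, and the routine names and scoring are placeholders rather than a worked-out design:

```python
# Epsilon-greedy suggestion loop over candidate daily routines.
import random

class RoutineSuggester:
    def __init__(self, routines, explore_rate=0.2):
        self.routines = routines
        self.explore_rate = explore_rate
        self.totals = {r: 0.0 for r in routines}   # sum of reported scores
        self.counts = {r: 0 for r in routines}     # times each routine was tried

    def suggest(self):
        # Occasionally try something at random (the "make you eat cinnamon" step);
        # otherwise pick the routine with the best average report so far.
        untried = [r for r in self.routines if self.counts[r] == 0]
        if untried:
            return random.choice(untried)
        if random.random() < self.explore_rate:
            return random.choice(self.routines)
        return max(self.routines, key=lambda r: self.totals[r] / self.counts[r])

    def report(self, routine, score):
        """User follows the routine for a while, then reports a score."""
        self.totals[routine] += score
        self.counts[routine] += 1

# Usage sketch:
# s = RoutineSuggester(["cinnamon + 8h sleep", "no caffeine", "evening run"])
# today = s.suggest(); ...; s.report(today, measured_improvement)
```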

7Emile12y
Sounds like it could work, especially if it uses a database of all users, so that users most similar to you also give an indication of what might or might not work for you. "I am [demographic and psychological parameters] and would like to [specific goal - mood, weight, memory, knowledge] in the coming [time period]; what would work best?" Sounds like an interesting project, I'll have to think about it.
2listic12y
I think machine learning has potential in badly formalized fields. Granted, diet and exercise are not the worst-formalized fields, and it looks like there are certain working heuristics out there. What do you mean by "behaviour", by the way? To start thinking about applying machine learning to diet, exercise, sleep patterns and behaviour, you should answer the question: what do you want to optimize them for?
2curiousepic12y
It surprises and disappoints me that I haven't heard of some sort of massive expert program like this being used in healthcare yet. I hope it will come soon, perhaps in the form of a Watson derivative.

Anyone have anything to share in the way of good lifehacks? Even if it only works for you, I would very much like to hear about it. Here are two I've been using with much success lately:

  • Get an indoor cycle or a treadmill and exercise while working on a laptop. At first I just used to cycle while watching movies on TV, but lately I've stopped watching movies and just cycle while doing SRS reps or reading ebooks. Set up your laptop with its power cable and headphones on the cycle, and leave them there always. If you're too tired to cycle, just sit on the c

... (read more)

So, anime is recognized as one of the LW cultural characteristics (if only because of Eliezer) and has come up occasionally, e.g. http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/

Is this arbitrary? Or is there really something better for geeks about anime vs other forms of pop culture? I have an essay arguing that due to various factors anime has the dual advantages of being more complex and also more novel (from being foreign). I'd be interested in what other LWers have to say.

Neil deGrasse Tyson is answering questions at reddit:

What are your thoughts on cryogenic preservation and the idea of medically treating aging?

neiltyson 737 points 5 hours ago

A marvelous way to just convince people to give you money. Offer to freeze them for later. I'd have more confidence if we had previously managed to pull this off with other mammals. Until then I see it as a waste of money. I'd rather enjoy the money, and then be buried, offering my body back to the flora and fauna of which I have dined my whole life.

Does anyone else have a... (read more)

5Dorikka12y
I have never heard of this person before, but if they actually think "offering my body back to the flora and fauna of which I have dined my whole life" is worth mentioning, it sounds like they're a victim of naturalistic bias.
1MixedNuts12y
In this case it just marks Tyson as an undiscriminating skeptic. Eliezer has written on the general case of disagreement.
0lessdazed12y
What if, hypothetically, no one has made much money freezing people? What if, hypothetically, it cost $5 to freeze someone indefinitely? What's the cost at which it becomes worth it, even in absence of it working on a whole mammal?

I'm having trouble deciding how to weight the preferences of my experiencing self versus the preferences of my remembering self. What do you do?

1jhuffman12y
I forget.

I am currently in an undergrad American university. After lurking on LW for many months, I have been persuaded that the best way for me to contribute towards a positive Singularity is to utilize my comparative advantage (critical reading/writing) to pursue a high-paying career; a significant percentage of the money I earn from this undecided lucrative career will hopefully go towards SIAI or some other organization that is helping to advance the same goals.

The problem is finding the right career that is simultaneously well-paying and achievable, with hopef... (read more)

[This comment is no longer endorsed by its author]
5daenerys12y
I am going to say that academia (in the humanities) is not a good choice if you want to make money, or even be guaranteed a job. Professorial jobs are moving away from tenure-track positions and towards part-time positions. There are very few professorial jobs and very many people (who are all "the best") who want them.

Boring Data
Or, put in more understandable terms: 100 Reasons Not to go into Academia
Or, for amusement's sake: PhD Comics
0Suryc1112y
Thanks for the information! I was already leaning away from academia for those very reasons.

Reminder of classic comment from Will Newsome: Condensed Less Wrong Wisdom: Yudkowsky Edition, Part 1.

I've noticed that I have developed a habit of playing dumb.

Let me explain. When someone says something that sounds stupid to me, I tend to ask the obvious question or pretend to be baffled as if I'd never heard of the issue before, rather than giving a lecture. I do this even when it is ridiculously improbable that I don't already know about and simply disagree with said issue. I'm non-confrontational by nature, which probably had something to do with slipping into this habit, but I also pride myself on being straightforward, so...

What I'm wondering, is it... (read more)

3TheOtherDave12y
My $0.02: there's a gradient between listening charitably (e.g., assuming that your interlocutor probably meant something sensible, and therefore that the senseless thing you heard doesn't accurately reflect their meaning) on the one hand, prioritizing your time (e.g., disengaging from discussions that seem like a waste of time, either with silence or with cached politeness or whatever) on the other, and refusing to challenge error (e.g., pretending something is reasonable because pointing out the flaws with it feels rude) on a third. Only the third of those seems like a problem to me. Where you draw the threshold of too-much-in-that-third-bucket is really up to you. You're under no ethical obligation to prompt non-cached thoughts from everyone who talks to you.
0Cthulhoo12y
I have developed a similar habit over time. I am often the "smart guy" in my social environment (I'm not particularly brilliant, but neither is my usual environment), and I can often identify major flaws in other people's reasoning. Despite this, I very rarely point them out directly. Social conventions usually state that this behaviour is considered impolite, indirectly implying that the other person is dumb. It can be even worse if the other person is emotionally attached to the thought. So, unless I am discussing with a very close friend, I usually refrain from making meaningful comments.
0Oscar_Cunningham12y
This is connected with the first post in this thread. Conversation is easier when you take turns, setting up your partner to ask obvious questions.

Is there a strong reason to think that morality is improving? Contrast with science, in which better understanding of physics leads to building better airplanes, notwithstanding the highly persuasive critiques of science from Kuhn, et al. But morality has no objective test.

100 years ago, women were considered inherently inferior. 200 years ago, chattel slavery was widespread. 500 years ago, Europe practiced absolute monarchy. I certainly think today is an improvement. But proponents of those moralities disagree. Since the laws of the universe don't have a variable for justice, how can I say they are wrong?

9gwern12y
Funnily enough, I just wrote an essay on the related meta-ethics topic, Singer's Whiggish 'expanding circle' thesis: http://www.gwern.net/Notes#the-narrowing-circle
4taelor12y
It's tempting to give in to the Whig Theory of History and concede that the "good guys" always win eventually, because this does seem (at least superficially) to be the case; the Nazis and Soviets both lost out, slavery got abolished, feminism and the civil rights movement happened. The question is, though, did the good guys win out because they were "good", or are they seen as good because they won?
3Prismattic12y
It's not quite that simple. The descendants of the victors generally see the victors as good, but that doesn't mean the descendants of the vanquished see the defeated as evil. Nazism seems to be a case where the defeated society really has strongly repudiated its past, but there is plenty of Soviet nostalgia in Russia and Confederate nostalgia in Dixie.
4TheOtherDave12y
"Morality is improving" is a bit underspecified, as is "science is improving." But assuming "morality is improving" means something like "on average, people's moral beliefs are better than they used to be" (which seems to be what you mean), you're right of course that the question only makes sense if you have some way of identifying "better". But then, similar things are true of science and airplanes. A 2011 airplane isn't "objectively better" than a 1955 airplane. It's objectively different, certainly, but to assert that the differences are improvements is to imply a value system. If you're confident enough in your value system to judge airplanes based on it, what makes judging moral systems based on it any different?
[anonymous]12y120

Science is not airplanes, but the capability to produce airplanes. In 2011, we know how to make 1955 airplanes (as well as 2011 airplanes). In 1955, we only knew how to make 1955 airplanes. Science is advancing.

0TheOtherDave12y
Fair point.
1TimS12y
I don't think there is a dispute that the social purpose of an airplane is to move people a substantial distance in exchange for fuel. Modern airplanes move more people for less fuel than 1955 airplanes. Therefore, they are objectively better than older airplanes. And that doesn't even address speed. ---------------------------------------- I'm very confident that I am more moral than Louis XIV. I suspect he would disagree. How should we decide who is right?
3TheOtherDave12y
I could quibble about the lack of dispute -- I know plenty of people who object to the environmental impact of modern planes, for example, some of whom argue that the aviation situation is worse in 2011 than it was in 1955 precisely because they value low environmental impact more than they value moving more people (or, at least, they claim to) -- but that's really beside my point. My point is just that asserting that moving more people (faster, more comfortably, more cheaply, etc.) for less fuel is what makes airplanes better is asserting a value system. That it is ubiquitously agreed upon (supposing it were) makes it no less a value system. Regardless of how we should decide, or even if there is a way that we should decide, the way we will decide is that you will evaluate moral(TimS) - moral(Louis XIV) based on your value system, and I will evaluate it based on mine. (What Louis XIV's opinion on the matter would have been, had he ever considered it, doesn't matter much to me, and it certainly doesn't matter to Louis, who is dead. Does it matter to you?) Just like you evaluate good(2011 airplanes) - good(1955 airplanes) based on your value system, and I evaluate it based on mine. Why in the world would we do anything else?
0TimS12y
Today, everyone agrees that slavery is wrong. So wrong that attempting to implement slavery will cause you to be charged with all sorts of crimes. Yet our ancestors didn't think slavery was wrong. Were they just idiots? ---------------------------------------- I'm not going to argue that Science isn't a value system, but it succeeds on its own terms. Even if you think that The Structure of Scientific Revolutions is brilliant and insightful, Science shows that it succeeds at what it aims for. A similar critique of morality can be found in books like Nietzsche's On the Genealogy of Morals. What is morality's response?
0TheOtherDave12y
No. That's good to know, but I didn't claim that science was a value system. I claimed that "what makes an airplane better is carrying more people further with less fuel" is a value system. So is "what makes an airplane better is being painted bright colors". (As far as I know, nobody holds that one.) Science may be a value system, but it isn't one that tells us that carrying more passengers with less fuel is better than carrying fewer passengers with more fuel, nor that having bright colors is better than having non-bright colors. Science helps us find ways to carry more passengers with less fuel, it also helps us find ways to make colors brighter. I don't understand what this question is asking.
-1TimS12y
Then how did they fail to notice that slavery is wrong? ---------------------------------------- Science is the investigative part of humanity's attempt to control Nature. It is objectively the case that we control Nature better than we once did. I assert that there is evolutionary pressure on our attempts to control nature. Specifically, bad Science fails to control nature. ---------------------------------------- Very paraphrased Structure of Scientific Revolutions: Science makes progress via paradigm shifts. Very paraphrased Nietzsche: paradigm shifts have occurred in morality. If paradigm shifts don't seem like a radical claim about either Science or Morality, then perhaps I should write a discussion post about why the claim is extraordinary.
0TheOtherDave12y
I'm having a very hard time following your point, so if you can present it in a more systematic fashion in a discussion post, that might be best.
0atorm12y
I think I followed the point pretty well, although I don't know that I can explain it any better. It's worth its own post, TimS.
0TimS12y
I appreciate your feedback. I'm struggling with whether this idea is high enough quality to make a discussion post. And my experience is that I underestimate the problem of inferential distance.
1TheOtherDave12y
Most people underestimate inferential distance, so that's a pretty good theory. If it helps, I think the primary problem I'm having is that you have a habit of substituting discussion of one idea for discussion of another (e.g, "morality's response" to Nietzsche vs. the radical/extraordinary nature of paradigm shifts, the value system that sorts airplanes vs. the "investigative part of humanity's attempt to control Nature," etc.) without explicitly mapping the two. I assume it's entirely obvious to you, for example, how you would convert an opinion about paradigm shifts in morality into a statement about morality's response to Nietzsche and vice-versa, so from your perspective you're simply alternating synonyms to make your writing more interesting. But it's not obvious to me, so from my perspective each such transition is basically changing the subject completely, so each round of discussion seems only vaguely related to the round before. Eventually the conversation feels like trying to nail Jello to a tree. Again, I don't mean here to accuse you of changing the subject or of having incoherent ideas; for all I know your discussion has been perfectly consistent and coherent, I just lack your ability (and, evidently, atorm's) to map the various pieces of it to one another (let alone to my own comments). So, something that might help close the inferential distance is to start over and restate your thesis using consistent and clearly defined terms.
0[anonymous]12y
.
0TheOtherDave12y
Thank you for defining your terms. I agree that we have the same basic neural/behavioral architecture that our stone-age ancestors had, and that we arrange the world such that others suffer less harm (per capita) than our stone-age ancestors did, and that this is a good thing.
2jhuffman12y
There are a lot of people who would argue morality has been getting worse since their own youth. It doesn't matter when or where we are talking about, it is pretty much always true that a lot of people think this. The same is true of fashion.
1malthrin12y
What measurable quantity are you talking about here?
0TimS12y
Moral goodness is the quality I'm referencing, but measurable isn't an adjective easily applied to moral goodness.
3malthrin12y
If it's not directly measurable, it must be a hidden node. What are its children? What data would you anticipate seeing if moral goodness is increasing? I'm asking these basic questions to prompt you to clarify your thinking. If the concept that you label 'moral goodness' is not providing any predictions, you should ask yourself why you're worried about it at all.
0TimS12y
I don't understand, since I don't think your position is "morality does not exist for lack of ability to measure."
3malthrin12y
"Morality" is a useful word in that it labels a commonly used cluster of ideaspace. Points in that cluster, however, are not castable to an integer or floating point type. You seem to believe that they do implement comparison operators. How do those work, in your view?
0TimS12y
You are using some terminology that I don't recognize, so I'm uncertain if this is responsive, but here goes. We are faced with "choices" all the time. The things that motivate us to make a particular decision in a choice are called "values." As it happens, values can be roughly divided into categories like aesthetic values, moral values, etc. Values can conflict (i.e. support inconsistent decisions). Functionally, every person has a table listing all the values that the person finds persuasive. The values are ranked, so that a person faced with a decision where value A supports a different decision than value B knows that the decision to make is to follow the higher-ranked value. Thus, Socrates says that Aristotle made an immoral choice iff Aristotle was faced with a choice that Socrates would decide using moral values, and Aristotle made a different choice than Socrates would make.

----------------------------------------

Caveats:
* I'm describing a model, not asserting a theory about the territory (i.e. I'm no neurologist)
* My statements are attempting to provide a more rigorous definition of value. Hopefully, it and the other words I invoke rigorously (choice, moral, decision) correspond well to ordinary usage of those words.

Is this what you are asking?
-2malthrin12y
That's a good start. Let's take as given that "morality" refers to an ordered list of values. How do you compare two such lists? Is the greater morality:

* The longer list?
* The list that prohibits more actions?
* The list that prohibits fewer actions?
* The closest to alphabetical ordering?
* Something else?

Once you decide what actually makes one list better than another, then consider what observable evidence that difference would produce. With a prediction in hand, you can look at the world and gather evidence for or against the hypothesis that "morality" is increasing.
0TimS12y
People measure morality by comparing their agreement on moral choices. It's purely behavioral. As a corollary, a morality that does not tell a person how to make a choice is functionally defective, but it is not immoral. ---------------------------------------- There are lots of ways of resolving moral disputes (majority rule, check the oracle, might makes right). But the decision of which resolution method to pick is itself a moral choice. You can force me to make a particular choice, but you can't use force to make me think that choice was right.
1malthrin12y
Sorry, I don't know what morality is. I thought we were talking about "morality". Taboo your words.
0TimS12y
Ok, I like "ordered list of (abstract concepts people use to make decisions)." I reiterate my points above: When people say a decision is better, they mean the decision was more consistent with their list than alternative decisions. When people disagree about how to make a choice, the conflict resolution procedure each side prefers is also determined by their list.
0billswift12y
"Morality" seems to me to be a fuzzy algebraic sum of many different actions that we approve or disapprove of. So the first step might be to list the actions, then whether we approve or disapprove of it and how much. That should keep people busy for a good while. Just trying to decide how to "measure" how much we approve or disapprove of a specific action is likely to be a significant problem.
0Jayson_Virissimo12y
What is immoral about monarchy (relative to democracy)?
0TimS12y
Absolute monarchy vs. Limited Monarchy I confess I don't know much about the little European monarchies you highlighted, but I strongly suspect that they are not Absolute Monarchies.
0MixedNuts12y
You mean relative to republic. All of these are democracies.
0Richard_Kennaway12y
The same way you make any other moral judgement -- whatever way that is. That is, you are really asking "what, if anything, is morality?" If you had an answer to that question, answering the one you explicitly asked would just be a matter of historical research, and if you don't, there's no possibility of answering the one you asked.
-2TimS12y
Fair enough. I think the combination of historical evidence and the lack of a term for justice in physics equations is strong evidence that morality is not real. And that bothers me. Because it seems like society would have noticed, and society clearly thinks that morality is real.
4atorm12y
Society has failed to notice lots of things.
2Richard_Kennaway12y
Perhaps it is real, but is not the sort of thing you are assuming it must be, to be real. I can't point to the number 2, and some people, perplexed by this, have asserted that numbers are not real. I can point to a mountain, or to a river, but I can't point to what makes a mountain a mountain or a river a river. Some people, perplexed by this, conclude there are no such things as mountains and rivers. I can't point to my mind....and so on. Can I even point? What makes this hand a pointer, and how can anyone else be sure they know what I am pointing to? Stare at anything hard enough, and you can cultivate perplexity at its existence, and conclude that nothing exists at all. This is a failure mode of the mind, not an insight into reality. Have you seen the meta-ethics sequence? The meta-ethical position you are arguing is moral nihilism, the belief that there is no such thing as morality. There are plenty of others to consider before deciding for or against nihilism.
2lessdazed12y
How hard do you think it would be to summarize the content of the meta-ethics sequence that isn't implicit from the Human's Guide to Words? I never recommend anyone read the ethics sequence first.
-2TimS12y
It's funny that I push on the problem of moral nihilism just a little, and suddenly someone thinks I don't believe in reality. :) I've read the beginning and the end of the meta-ethics sequence, but not the middle. I agree with Eliezer that recursive questions are always possible, but you must stop asking them at some point or you miss more interesting issues. And I agree with his conclusion that the best formulation of modern ethics is consideration for the happiness of beings capable of recursive thought. ---------------------------------------- I'd like to write a discussion post (or a series of posts) on this issue, but I don't know where to start. Someone else responded to me [EDIT: with what seemed to me like] questioning the assertion that science is a one-way ratchet, always getting better, never getting worse. [EDIT: But we don't seem to have actually communicated at all, which isn't a success on my part.] ---------------------------------------- In case you want a connection to Artificial Intelligence: Eliezer talks about the importance of provably Friendly AI, and I agree with his point. If we create super-intelligence and it doesn't care about our desires, that would be very bad for us. But I think that the problem I'm highlighting says something about the possibility of proving that an AI is Friendly.
2TheOtherDave12y
It seems likely to me that I'm the person you're referring to. If so, I don't endorse your summary. More generally, I'm not sure either of us understood the other one clearly enough in that exchange to merit confident statements on either of our parts about what was actually said, short of literal quotes.
-4Nisan12y
Those of our ancestors who were slaves did not like being slaves, and those who were oppressed by monarchies did not like being oppressed. Now some of them may have supported slavery and monarchy in principle, but their morality was clearly broken because they were made deeply unhappy by institutions which they approved of behind a Rawlsian veil of ignorance. Women didn't like the particulars of gendered oppression, so we've clearly made progress by their standards. EDIT: Why the downvotes?
6MixedNuts12y
Actually, victims of oppressive systems often support them. Many girls get clitoridectomies because their mothers demand it, even against their fathers' wishes.
0Nisan12y
Yes, you can find ways in which victims are made to be complicit in their oppression. But it's not hard to find ways in which victims genuinely suffer, and that's all that's needed for an objective moral standard.
0[anonymous]12y
NMDV, but maybe it's for "their morality was clearly broken"?

What would you suggest someone read if you were trying to explain to them that souls don't exist and that a person is their brain? I vaguely remember reading something of Eliezer's on this topic, and someone said they would read some articles if I sent them over. Would it just be the Free Will sequence?

(NB -- posting this under the assumption that open threads have subsumed off-topic threads, of which I haven't seen any recently. If this is mistaken, I'll retract here and repost in an old off-topic thread)

I've seen numerous threads on Lesswrong discussing the appeal of the games of go, mafia, and diplomacy to rationalists. I thought I would offer some additional suggestions in case people would like to mix up their game-playing a bit, for meetups or just because. Most of these involve a little more randomness than the games listed above, but I don't r... (read more)

I've come to realize that I don't understand the argument that Artificial Intelligence will go foom as well as I'd like. That is, I'm not sure I understand why AI will inherently become massively more intelligent than humans. As I understand it, there are three points:

  • AI will be able to self-modify its structure.

    By assumption, AI has goals, so self-modification to improve its ability to achieve those goals will make AI more effective.

  • AI thinks faster than humans because it thinks with circuits, not with meat.

    The processing speed of a computer i

... (read more)

I've been thinking of this a bit recently, and haven't been able to come to any conclusion.

Apart from the fact that it discourages similar future behavior in others, is it good for people who do bad things to suffer? Why?

5dlthomas12y
My answer has long been an unequivocal "no", on the grounds that I don't see why it would be, and so "hurting people is bad" doesn't get any exceptions it doesn't need.
0ahartell12y
That's the conclusion I keep coming to, but I have trouble justifying this to others. It's just such an obvious built-in response that bad people deserve to be unhappy. I guess the inferential distance is too high. Follow Up: What is your opinion of prisons? How unpleasant should they be? Is the answer to the second question something like "the unpleasantness with the best [unpleasantness] to [efficacy in discouraging antisocial behavior] ratio, while favoring ratios with low unpleasantness and high discouragement"? (Feel free to tell me that the above sentence is unintelligible).
2dlthomas12y
I think there are a number of issues that go into prison design. The glib answer is, "whatever produces the best outcomes," but I understand that leaving it at that is profoundly unsatisfying. I don't have the background in the domain to give a detailed answer, but I have some thoughts about things worth considering. I generally take "unpleasant" to mean strongly "not liked" at the time. There is, however, a distinction between liking and wanting, in terms of how our brains deal with these things. For deterrence, we want the situation to be "not wanted" - how much people dislike being in jail while actually in jail is irrelevant. It is also worth noting that both perceived degree of punishment and perceived likelihood of punishment matter.
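(A toy way to make that last point concrete; this is just an illustration, not anything claimed above: if would-be offenders roughly multiply the perceived chance of being punished by the perceived severity, then a harsh but rarely applied punishment can deter less than a milder, near-certain one. All numbers below are invented.)

```python
# Toy expected-cost model of deterrence (illustration only).
# Assumes offenders weigh perceived probability of punishment against perceived severity.

def expected_cost(p_caught, perceived_severity):
    """Expected disutility of the offense, as the would-be offender perceives it."""
    return p_caught * perceived_severity

harsh_but_rare = expected_cost(p_caught=0.05, perceived_severity=100)   # 5.0
mild_but_certain = expected_cost(p_caught=0.90, perceived_severity=10)  # 9.0

print(harsh_but_rare < mild_but_certain)  # True: certainty can matter more than harshness
```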
1dlthomas12y
A consequence of this that just occurred to me (and obviously, I've not chewed on it long so I expect there are some holes): In some circumstances, we may make jail a stronger deterrent by making it more pleasant. Consider, for instance, if jail time is being used to signal toughness and thereby acquire status in a given peer group. Cop shows and the like occasionally portray this kind of thing (particularly with musicians wishing to establish credibility - I think Bones did this more than once). The more prisoners are seen as abused, the stronger the signal. If prisoners are seen as pampered, that doesn't work so well. I have no idea how much this hypothetical corresponds to reality in the first place, however, or under what circumstances this effect would dominate compared to countervailing pressures.
0wedrifid12y
Slightly more glib: "Whatever produces the best outcomes for the decision maker".
0ahartell12y
Thanks. That makes a ton of sense.
1Oscar_Cunningham12y
Semantic stop-sign alert!
4dlthomas12y
Applause lights?
0ahartell12y
I was using an applause light? Is there a better way to say that my opinions on this matter seem really weird to people who have never heard of consequentialism and don't spend much time thinking about the nature of morality (though neither do I, really)?
1dlthomas12y
I think that signaling, "See, I read the sequences!" was not 0% of your motivation in phrasing that way. I don't actually think it's a big problem. I don't think it was all that significant a portion of your motivation, or I would have commented directly. I actually think that the marking of it as a semantic stop-sign was incorrect; while the phrase, "the inferential distance is too high" could certainly be used that way, it was a tangential issue you (as I read it) were putting on hold, not washing your hands of. What would your response have been, if someone had responded with a request to look at ways to shrink the inferential distance? I therefore think Oscar's post is more of an applause light - he could have more usefully engaged, and instead chose to simply quote scripture at you. The fact that there was one comment which contained short snippets by two different posters that amounted to basically nothing but a reference into the sequences each seemed worth commenting on. And what better way than to make the situation worse?
4ahartell12y
That's probably fair. More than "See, I read the sequences!", it was probably something like "Look, I fit in with you guys because we know the same obscure terms! And since I consider LW posters who seem smart high status this makes me high status by association!". I didn't verbally think that, of course, but still.
2ahartell12y
I don't think it fits completely. I wasn't trying to completely write off my inability to defend this view with others (it probably also has to do with the fact that my ideas aren't fully formed) and I think the phrase does convey information. It means that the people I was referring to don't have the background knowledge (mainly consequentialism) to make my views seem reasonable. Hence, high inferential distance.
1wedrifid12y
False positive. (Does not appear to be a semantic stop sign.)
1Nornagest12y
Only insofar as it discourages similar future behavior in the same person, I'd say. If we're discounting future consequences entirely I'm not sure it makes sense to talk about punishment, or even about good and bad in the abstract. But I'm a consequentialist, and I think you'll find that the deontological or virtue-ethical answers to the same question are quite different.
0dlthomas12y
I'm not sure that I agree. It may be necessary to punish more to keep a precommitment to punish credible. That precommitment may be preventing others from doing harm.
1Nornagest12y
Fair enough. I'd lumped the effects of that sort of precommitment under "discouraging others from acting similarly", and accordingly discarded it.
0dlthomas12y
Ah, I read it as a contrast. My bad.
0ahartell12y
Thanks, could you respond to my reply to dlthomas, as well?
2Nornagest12y
Yeah, I saw the comment. I wasn't going to reply to it, but I might as well unpack my reasons why: the ethics of imprisonment are fairly complicated, and depend not only on deterrent effects and the suffering of prisoners but also on a number of secondary effects with their own positive or negative consequences. Resource use, employability effects, social effects on non-prisoners, products of prison labor, et cetera. I don't feel qualified to evaluate all that without quite a lot of research that I currently have little reason to pursue, so I'm going to reserve judgment on the question for now.
0ahartell12y
Sorry, and thank you.

Recent results suggest that red dwarf stars may have habitable planets after all. Summary article in New Scientist. These stars are much more common than G-type stars like the sun, and moreover, previous attempts at searching for life (such as looking for radio waves or looking for planets that show signs of oxygen) have focused on G-type stars. The basic idea of this new result is that water ice will more effectively absorb radiation from red dwarfs (due to the infrared wavelengths that much of their output occurs in) allowing planets which are farth... (read more)

I recall a study showing that eating lower-GI breakfast cereals helps schoolchildren focus. Perhaps this is related to blood glucose's relation to willpower?

Up until recently my diet was around 50% fruit and fruit-juice, but lately I've tried cutting fruit out and replacing it with carbs and fat and protein. I'm not sure whether this has strongly affected my willpower. My willpower /has/ improved, but I started exercising more and went onto cortisone around the same time, so I'm not sure what's doing it. However, the first few days without sugary food, esp... (read more)

Video: Eliezer Yudkowsky - Heuristics and Biases

Yudkowsky on fallacies, occam, witches, precision and biases.

Video: How Should Rationalists Approach Death?

Skepticon 4 Panel featuring James Croft, Greta Christina, Julia Galef and Eliezer Yudkowsky.

Maybe some of you have already seen my Best of Rationality Quotes post. I plan to do it again this December. That one spanned 21 months of Rationality Quotes. Would you prefer to see a Best of 2011 or a Best of So Far?

0gwern12y
I'd like to see annual editions. If you were up to it, it'd be nifty to have 'Best of 2009/2010/2011' and then an overall ranking, 'Best of LW'.
0[anonymous]12y

(The practical ethics of posting on the internet are sometimes complicated. Ideally, all posts should be interesting, well-reasoned, and germane to the concerns of the community. But not everyone has such pure motives all of the time. For example, one can imagine some disturbed and unhealthy person being tempted to post an incoherent howl of despair, frustration, and self-loathing in a childish cry for attention that will ultimately be regretted several hours later. For the sake of their own reputation and good community standing (to say nothing of keeping... (read more)

[This comment is no longer endorsed by its author]

Would it be possible for someone to help me understand uploading? I can understand easily why "identity" would be maintained through a gradual process that replaces neurons with non-biological counterparts, but I have trouble understanding the cases that leave a "meat-brain" and a "cloud-brain" operating at the same time. Please don't just tell me to read the quantum physics sequence.

1TheOtherDave12y
Can you clarify what it is you don't understand? If you're looking for an explanation of how to implement uploading, I can't be much help. If you're looking for a way of having the idea seem more plausible: try reversing it. If I can replace neurons with hardware at all, why should it only be possible gradually, and why should it require destroying the original?
0ahartell12y
I can't wrap my head around what it would be like to exist as the "meat-version" and "cloud-version" at the same time if both of the versions maintain my identity. The reversal thing sort of makes me want to accept it more, but I wouldn't want to support the idea just because I personally don't know what would make those things limiting factors. About gradualness: I can almost imagine that being important. Like, if you took baby!me and wrote over my mind with adult!me, maybe identity wouldn't be preserved... I guess that doesn't really make sense. But the gradualness of the change between baby!me and adult!me seems vaguely related to identity. Really the problem I have is that I don't get what it would be like to be existing in two ways at once and be experiencing both. If I were only to experience one, I would experience the meat-version after I was scanned, and the meat-version's death/destruction wouldn't change that. Sorry if I'm being dense.
1TheOtherDave12y
Gotcha. OK, try this then: At time T1, I begin replacing my meat with cloud. At T2, I complete that process. At T3, I make a copy of my cloud-self. Is it your intuition that that third step ought to fail? If so, can you unpack that intuition? If you think that third step can succeed, do you have the same problem? That is, if I can have two copies of my cloud-self running simultaneously, do you not get what that would be like? My answer to what that would be like is it would be just like this. That is, if you make a cloud-copy of me while I'm sleeping, I wouldn't know it, and the existence of that cloud-copy wouldn't in any way impinge on my experience of the world. Also, I would wake up in the cloud, and the existence of my meat body would not in any way impinge on my experience of the world. There's just two entities, both of which are me.
1ahartell12y
I guess I have similar problems with the third step. I'm really sorry if it seems like I'm just refusing to update, and thanks a bunch; that last part really did help. But consider the following: Isn't that still like dying? I know that to the world it's the same, but from the inside, it's death, right? Have you read HPMoR? Fred and George are basically alternate copies of the same brain. If you were Fred, wouldn't you rather not die, even though you would still have George!you alive and well?
0TheOtherDave12y
It's not a problem; this idea is genuinely counterintuitive when first encountered. The reason it's counterintuitive is that you're accustomed to associating "ahartell" with a single sequence of connected observer-moments. Which makes sense: in the real world, it's always been like that. But in this hypothetical world there are two such sequences, unrelated to one another, and they are both "ahartell." That's completely unlike anything in your real experience, and the consequences of it are legitimately counterintuitive; if you want to understand them you have to be willing to set those intuitions aside. One consequence is that you can both live and die simultaneously. That is, if there are two ahartells (call them A and 1) and A dies, then you die; it's a real death, just as real as any other death. If 1 survives, then you survive; it's a real survival, just as real as any other survival. The fact that both of these things happen at once is counterintuitive, because it doesn't ever come up in the real world, but it is a natural consequence of that hypothetical scenario. Similarly, another consequence is that you can die twice. That is, if A and 1 both die, those are two independent deaths, each as real as any other death. And another consequence is that you can live twice. That is, if A and 1 both survive, they are two independent lives; A is not aware of 1, 1 is not aware of A. A and 1 are different people, but they are both you. Again, weird and counterintuitive, but a natural consequence of a weird and counterintuitive situation.
0ahartell12y
Ok, three more questions/scenarios.

1) You are Fred (of HPMOR's Fred & George, who for this we'll assume are perfect copies). Voldemort comes up to you and George and says he will kill one of you. If he kills George, you live and nothing else happens. If he kills you, George lives and gets a dollar. Would you choose to allow you!Fred to die? And not just as the sacrifice you know it's reasonable to make in terms of total final utility but as the obvious correct choice from your perspective. (If the names are a problem, assume somebody makes a copy of you and immediately asks you this question.)

2) If all else is equal, would you rather have N*X copies than X copies for all positive values of X and all positive and greater than 1 values of N? (I don't know why I worded that like that. Would you rather have more copies than less for all values of more and less?)

3) You go to make copies of yourself for the first time. You have $100, with which you can pay for either 1 copy or 100 copies (with a small caveat). If you choose 100 copies, each copy's life is 10% less good, and the life of original/biological!you will be 20% less good (the copy maker is a magical wizard that can do things like this and likes to make people think). Do you choose the 100 copies? And do you think that it is obviously better and one would be stupid to choose otherwise?

Thanks.
0TheOtherDave12y
Re: #1... there are all kinds of emotional considerations here, of course; I really don't know what I would do, much as I don't know what I would do given a similar deal involving my real-life brother or husband. But if I leave all of that aside, and I also ignore total expected utility calculations, then I prefer to continue living and let my copy die.

Re: #2... within some range of Ns where there aren't significant knock-on effects unrelated to what I think you're getting at (e.g., creating too much competition for the things I want, losing the benefits of cooperation among agents with different comparative advantages, etc.), I prefer that N+1 copies of me exist rather than N copies. More generally, I prefer the company of people similar to me, and I prefer that there be more agents trying to achieve the things I want more of in the world.

Re: #3... I'm not sure. My instinct is to make just one copy rather than accept the 20% penalty in quality of life, but it's not an easy choice; I acknowledge that I ought to value the hundred copies more.
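(For what it's worth, the arithmetic behind that last sentence, under the contestable assumption that quality of life simply adds across copies and that the original counts the same as a copy:)

```python
# Back-of-the-envelope for scenario 3, assuming quality of life simply adds
# across copies (which is exactly the contestable part).

baseline = 1.0                                            # an un-penalized life

one_copy = baseline + 1 * baseline                        # original + 1 copy = 2.0
hundred_copies = 0.8 * baseline + 100 * (0.9 * baseline)  # original at -20%, copies at -10% = 90.8

print(hundred_copies > one_copy)  # True: the additive total favors the 100 copies
```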
0ahartell12y
I'm not trying to back you into a corner, but it seems like your responses to #1 and #3 indicate that you value the original more than the others, which seems to imply that the copies would be less you. From your answer to #2, I came up with another question. Would you value uploading and copying just as much if somehow the copies were P-zombies? It seems like your answers to #1-3 would be the same in that case. Thanks for being so accommodating, really.
0TheOtherDave12y
I don't value the original over the others, but I do value me over not-me (even in situations where I can't really justify that choice beyond pure provincialism). A hypothetical copy of me created in the future is just as much me (hypothetically) as the actual future me is, but a hypothetical already-created copy of a past me is not me. The situation is perfectly symmetrical; if someone makes a copy of me and asks the copy the question in #1, I give the same answer as when they ask the original. I have trouble answering the P-zombie question, since I consider P-zombies an incoherent idea. I mean, if I can't tell the difference between P-zombies and genuine people, then I react to my copies just the same as if they were genuine people... how could I do anything else?
0ahartell12y
Thanks. It makes sense (ish) and you've either convinced me or convinced me that you've convinced me ;).

Double counting of evidence in sports: is it justifiable to list prominently, as one of the few pieces of information about them, the number of shutouts (i.e. "clean sheets", when no points are surrendered over the course of a game) by baseball pitchers and goalies? Assume the number of games played and total points allowed are mentioned, so the information isn't misleading.

Are positive and negative utility fungible? What facts might we learn about the brain that would be evidence either way?

2gwern12y
In what sense is utility fungible? Remember utility is fungible by definition - if 1 positive utilon doesn't cancel out 1 negative utilon, then at least one of them was not actually 1.
0lessdazed12y
I confused a perceived pattern in humans for a pattern in the world. Assuming (and it may be so) humans are much more dutch-bookable along -loss/gain and -gain/loss lines than loss/loss or gain/gain lines, and we can project our utility function to remove muddle such that at the end two self-consistent value categories (loss and gain) can't be made consistent with each other, that's our problem. This is unlikely, as even if humans are most muddled along this axis, there is no difference in kind between unmuddling between gains and losses and within gains or losses. Maybe unmuddling just can't be done, but there's little reason to believe that it can be partially done with the result being exactly two categories. I object to holding that tightly to the definition. "Atoms" are divisible...assuming we figured out how to trade all utilities against each other, but only in two inconsistently related categories, "utility" would still be apt if reality was found to lack only that. We would then speak of "positive utility" and "negative utility" using complex numbers or something, able to say 10+5i is "more" (in some sense) than 6+i but not 11+i or 6+11i.
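(One way to make that last sentence concrete; this is just an illustrative reading of the example numbers, not a formalism anyone above endorsed: treat the two categories as separate components and call one bundle "more" only when it dominates on both, so some pairs simply don't compare.)

```python
# Two-component "utility" with only a partial order, sketching the complex-number idea above.

def compare(a, b):
    """Return 'more'/'less' under componentwise dominance, or None if incomparable."""
    if a == b:
        return "equal"
    if a[0] >= b[0] and a[1] >= b[1]:
        return "more"
    if a[0] <= b[0] and a[1] <= b[1]:
        return "less"
    return None  # neither dominates: incomparable

print(compare((10, 5), (6, 1)))    # 'more' -- matches "10+5i is 'more' than 6+i"
print(compare((10, 5), (11, 1)))   # None   -- "but not 11+i"
print(compare((10, 5), (6, 11)))   # None   -- "or 6+11i"
```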
0gwern12y
Inconsistency or Dutch-booking is bad regardless of fungibility, because they let you be pumped for arbitrary amounts. If they don't, then they may simply reflect extreme preferences.
1wedrifid12y
People work for money.
0lessdazed12y
Is there a specific model of human utility you endorse? People prefer A to B, B to C, C to A etc.
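(The standard illustration of why such a cycle is a problem, for anyone who hasn't seen it: an agent who prefers A to B, B to C, and C to A will pay a small fee for each trade "up" around the loop and end up holding what it started with, only poorer. A minimal sketch:)

```python
# Classic money pump against cyclic preferences (illustration only).
# The agent pays a small fee every time it trades up to something it prefers.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # cyclic: A > B > C > A
fee = 1.0

holding, money = "C", 0.0
for offered in ["B", "A", "C", "B", "A", "C"]:   # go around the loop twice
    if (offered, holding) in prefers:            # the agent prefers the offered item
        holding, money = offered, money - fee    # so it trades and pays the fee

print(holding, money)  # back to 'C', but 6.0 poorer -- pumpable for arbitrary amounts
```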

I read this term once, but I can't remember it, and every few months I remember that I can't remember the term and it bothers me. I've tried googling but with no success, and I think someone here may know.

The term refers to a category of products that are considered more valuable because of their high price. That is, more people buy the product when it is high-priced than would if it were low-priced, because the high price makes it seem high value and because the high price makes owning the product high status. The wikipedia page for the term mentioned Rolls Royce cars as an example and said that Apple computers fit the term in the past but now do so less. Does this sound familiar to anyone? Thanks.

7Unnamed12y
Veblen good
1ahartell12y
Wow! Incredible. This has honestly been on my mind for years. I almost said something about it starting with a "V" but I wasn't confident enough and didn't want to discourage a correct answer that started with a different letter. Thanks a ton.
1lessdazed12y
Some products actually are directly more valuable at higher prices. Placebos!
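(For anyone who wants the defining feature in one picture: a Veblen good is one whose demand rises with price over some range. A toy sketch, with invented numbers:)

```python
# Toy Veblen demand curve; all numbers are invented purely for illustration.
# Ordinary buyers drop off as price rises; status-motivated buyers are attracted
# by a higher price because it makes ownership a stronger signal.

def buyers(price):
    ordinary = max(0.0, 60 - price)     # usual downward-sloping component
    status_seekers = 0.4 * price        # component that grows with the posted price
    return ordinary + status_seekers

for p in (20, 40, 80, 120):
    print(p, buyers(p))
# 20 -> 48.0, 40 -> 36.0, 80 -> 32.0, 120 -> 48.0: demand falls at first,
# then rises with price once the status term dominates (the Veblen range).
```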

I didn't know where to put this. Maybe someone can help. I am trying to further understand evolution.

PLEASE correct my assumptions if they are inaccurate/wrong:

1) Organisms act instinctively in order to pass alleles on.
2) Human biology is similar, but we have some sort of more developed intelligence (more developed or a distinct one?) that allows us to weigh options and make decisions.

Correct me if I am wrong, but it seems that we can act in contradiction to assumption #1 (ex: taking birth control). Is this because of the 2nd assumption? Do other animals act similarly (or is there some consciousness we have that they don't)? Or do they choose not to act in contradiction to assumption #1?

We are adaptation executers, not fitness maximizers.

That's equally the case for other animals.

3saturn12y
Evolution doesn't plan ahead. It's possible that humans will acquire an instinctive aversion to birth control, but not before that trait arises by chance and then the individuals who have it out-reproduce the rest of the species.
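(A minimal simulation of that point, with invented parameters: nothing pushes the population toward the trait in advance; it has to appear by mutation, and it spreads only if its carriers happen to out-reproduce everyone else.)

```python
import random

# Minimal illustration that evolution doesn't plan ahead: a trait (say, an
# aversion) exists only once random mutation produces it, and spreads only if
# carriers out-reproduce non-carriers. All parameters are invented.

random.seed(0)
POP, MUTATION_RATE, GENERATIONS = 1000, 1e-3, 500
FITNESS = {True: 1.10, False: 1.00}    # carriers leave 10% more offspring on average

population = [False] * POP             # nobody starts with the trait
for _ in range(GENERATIONS):
    weights = [FITNESS[trait] for trait in population]
    parents = random.choices(population, weights=weights, k=POP)
    population = [p or (random.random() < MUTATION_RATE) for p in parents]

print(sum(population) / POP)           # fraction of carriers at the end
```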
2pedanterrific12y
Catholicism?

Would people post interesting things to an "Alternate Universe 'Quotes' Thread"?

'Quotes' would include things like:

"They fuck you up, count be wrong" - Kid in The Wire, Pragmatist Alternative Universe when asked how he could keep count of how many vials of crack were left in the stash but couldn't solve the word problem in his math homework.

Teenage Mugger: [Dundee and Sue are approached by a black youth stepping out from the shadows, followed by some others] You got a light, buddy? Michael J. "Crocodile" Dundee: Yeah, sure k... (read more)

This probably wouldn't work, but has anyone tried to create strong AI by just running a really long evolution simulation? You could make it faster than our own evolution by increasing the evolutionary pressure for intelligence. Perhaps run this until you get something pretty smart, then stop the sim and try to use that 'pretty smart' thing's code, together with a friendly utility function, to make FAI? The population you evolve could be a group of programs that take a utility function as /input/, then try to maximize it. The programs which suck at maximizi... (read more)

5dlthomas12y
That's an awfully large search space, with highly nonlinear dynamics, a small target, and might still not be enough to encode what we need to encode. I don't see that approach as very likely to work.
5MixedNuts12y
It's unlikely we'd ever generate something smart enough to be worth keeping yet dumb enough not to kill us. Also, where do you get your friendly utility function from?
2gwern12y
There's no way that is going to work; think of how many possible 100k-character Brainfuck programs there are. Brainfuck does have the nice characteristic that each program is syntactically valid, but then you have the problem of running them, which is very resource-intensive (you would expect AI to be slow, so you need very large time-outs, which means you test very few programs every time-interval). Speaking of Brainfuck: http://www.vetta.org/2011/11/aiq/
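(To put rough numbers on that, using nothing but counting: Brainfuck has 8 commands, so there are 8^100000 strings of length 100k, and even an absurdly generous evaluation budget barely scratches that.)

```python
import math

# Counting the search space: 8 Brainfuck commands, so 8**100000 length-100k strings
# (before even asking which of them do anything useful).
log10_programs = 100_000 * math.log10(8)
print(log10_programs)                  # ~90309, i.e. roughly 10**90309 programs

# For scale (made-up but generous budget): 10**9 evaluations per second for the
# age of the universe (~4.35e17 s) covers only about 10**26.6 candidates.
log10_evaluable = math.log10(1e9) + math.log10(4.35e17)
print(log10_evaluable)                 # ~26.6
```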