Discuss things here if they don't deserve a post in Main or Discussion.
If a topic is worthy and receives much discussion, make a new thread for it.
I'm thinking maybe we should try to pool all LW's practical advice somewhere. Perhaps a new topic in Discussion, where you post a top-level comment like "Will n-back training make me significantly smarter?", and people reply with 50% confidence intervals. Then we combine all the answers to get the LW hivemind's opinion on various topics. Thoughts?
PS. Sorry for taking up the 'Recent Comments' sidebar, I don't have internet on my own computer so I have to type my comments up elsewhere and post them all at once.
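For concreteness, here's a minimal sketch of what the combining step could look like (the aggregation rule, function name, and example numbers are all illustrative assumptions, not a settled method): take the median of everyone's lower bounds and the median of everyone's upper bounds.

    from statistics import median

    def pool_intervals(intervals):
        """Pool 50% confidence intervals, given as a list of (low, high) pairs."""
        lows = [low for low, high in intervals]
        highs = [high for low, high in intervals]
        # Median-of-bounds is one simple, outlier-resistant aggregation rule.
        return (median(lows), median(highs))

    # Example: three commenters' 50% intervals for "IQ points gained from n-back"
    print(pool_intervals([(0, 5), (1, 3), (-2, 2)]))  # -> (0, 3)

Fancier poolings exist (e.g. weighting by commenters' track records), but even something this crude would give a quick read on the hivemind.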
In one of the subthreads concerned with existential risk and the Great Filter, I proposed one possible filtration issue: intelligent species that evolved comparatively early in their planet's lifetime, or on planets that formed soon after their heavy elements did, would have a lot more fissionable material (especially uranium-235, which decays with a half-life of roughly 700 million years, so younger planets contain far more of it), and this might make it much easier for them to wipe themselves out with nuclear wars. So we may have escaped the Great Filter in part by evolving late. Thinking about this more, I'm uncertain how important this sort of filtration is. I'm curious whether a) people think this could be a substantial filter and b) anyone is aware of discussion of this filter in the literature.
If we had had more fissionable material over the last 100 years, how would that have made nuclear war more likely?
I just finished reading Steven Pinker's new book, The Better Angels of Our Nature: Why Violence Has Declined. It's really good, as in, maybe the best book I've read this year. Time and again, I was shocked to find it treating subjects of keen interest to LW, or passages that read like Pinker had taken some of my essays but done them way better (on terrorism, on the expanding circle, etc.); even so, I was surprised to learn new things (resource problems don't correlate well with violence?).
I initially thought I might excerpt some parts of it for a Discussion or Article, but as the quotes kept piling up, I realized that it was hopeless. Reading reviews or discussions of it is not enough; Pinker just covers too much and rebuts too many possible criticisms. It's very long, as a result, but absorbing.
When writing a comment on LessWrong, I often know exactly which criticisms people will give. I will have thought through those criticisms and checked that they're not valid, but I won't be able to answer them all in my post, because that would make my post so long that no-one would read it. It seems like I've got to let people criticise me, and then shoot them down. This seems awfully inefficient; it's as if the purpose of having a discussion, rather than simply writing a long post, is just to trick people into reading it.
There's a room open in one of the Berkeley rationalist houses: http://sfbay.craigslist.org/eby/sub/2678656916.html
If you are interested, reply via the ad for more details!
How to cryonics?
And please forgive me if this is a RTFM kind of thing.
I've been reading LW for a time, so I've been frequently exposed to the idea of cryonics. I usually push it to the back of my mind: I'm extremely pessimistic about the odds of being revived, and I'm still young, after all. But I realize this is probably me avoiding a terrible subject rather than an honest attempt to decide. So I've decided to at least figure out what getting frozen would entail.
Is there a practical primer on such an issue? For example: I'm only now entering grad school, ...
There was a recent LW discussion post about the phenomenon where people presented with evidence against their position end up believing their original position more strongly. The article described an experiment that found at least one way that might solve this problem, so that people presented with evidence against their position actually update correctly. Does somebody know which discussion post I'm talking about? I'm not finding it.
For LifeHacking--instrumental-rationality skills--does anyone have experience getting lightweight professional advice? E.g., for clothing, hire a personal stylist to pick out some good-looking outfits for you to buy. No GQ fashion-victimhood, just some practical suggestions so that you can spend the time re-reading Pearl's Causality instead of Vogue.
The same approach--simple, one-time professional advice--could apply to a variety of skills.
If anyone has tried this sort of thing, I'd be glad to hear about your experience.
Gogo LessWrong team! The experience and the potential publicity will be excellent.
I'll chip in with a prize in the amount of ($1000 / team's rank in the final contest), donated to the party of your choice. The team must be identified as "LessWrong" or suchlike to be eligible.
I don't know much about machine learning, but wouldn't it be possible to use it to optimize your diet, exercise, sleep patterns, behaviour, etc.? Perhaps it generates a list of proposed daily routines; you follow one and report back some stats about yourself like weight, blood pressure, mood, digit span, etc. It then uses these to figure out what parts of which daily routines do what. If it suspects eating cinnamon decreases your blood pressure, it makes you eat cinnamon so you can tell it whether that worked. Th...
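In case it helps make the idea concrete, here is a toy sketch of the feedback loop being described, framed as an epsilon-greedy bandit (the interventions, the scoring, and all names are hypothetical placeholders, and real effects would be slower and messier than this assumes):

    import random

    # Candidate daily interventions to test (purely made-up examples).
    INTERVENTIONS = ["cinnamon", "early_sleep", "morning_walk"]

    estimates = {a: 0.0 for a in INTERVENTIONS}  # running average outcome per intervention
    counts = {a: 0 for a in INTERVENTIONS}

    def propose(epsilon=0.2):
        """Usually exploit the best-looking intervention; sometimes explore."""
        if random.random() < epsilon:
            return random.choice(INTERVENTIONS)
        return max(INTERVENTIONS, key=lambda a: estimates[a])

    def report(intervention, score):
        """score: any self-measured outcome, e.g. a mood rating or -change in blood pressure."""
        counts[intervention] += 1
        estimates[intervention] += (score - estimates[intervention]) / counts[intervention]

    today = propose()   # the machine tells you what to try today
    # ... follow the routine, measure yourself ...
    report(today, 1.3)  # made-up score

A real system would need to handle slow-acting and interacting effects, which is exactly where this one-intervention-at-a-time sketch would break down.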
Anyone have anything to share in the way of good lifehacks? Even if it only works for you, I would very much like to hear about it. Here are two I've been using with much success lately:
Get an indoor cycle or a treadmill and exercise while working on a laptop. At first I just used to cycle while watching movies on TV, but lately I've stopped watching movies and just cycle while doing SRS reps or reading ebooks. Set up your laptop with its power cable and headphones on the cycle, and leave them there always. If you're too tired to cycle, just sit on the c
So, anime is recognized as one of the LW cultural characteristics (if only because of Eliezer) and has come up occasionally, e.g. http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/
Is this arbitrary? Or is there really something better for geeks about anime vs other forms of pop culture? I have an essay arguing that due to various factors anime has the dual advantages of being more complex and also more novel (from being foreign). I'd be interested in what other LWers have to say.
Neil deGrasse Tyson is answering questions at reddit:
What are your thoughts on cryogenic preservation and the idea of medically treating aging?
neiltyson 737 points 5 hours ago
A marvelous way to just convince people to give you money. Offer to freeze them for later. I'd have more confidence if we had previously managed to pull this off with other mammals. Until then I see it as a waste of money. I'd rather enjoy the money, and then be buried, offering my body back to the flora and fauna of which I have dined my whole life.
Does anyone else have a...
I'm having trouble deciding how to weight the preferences of my experiencing self versus the preferences of my remembering self. What do you do?
I am currently an undergrad at an American university. After lurking on LW for many months, I have been persuaded that the best way for me to contribute towards a positive Singularity is to utilize my comparative advantage (critical reading/writing) to pursue a high-paying career; a significant percentage of the money I earn from this as-yet-undecided lucrative career will hopefully go towards SIAI or some other organization that is helping to advance the same goals.
The problem is finding the right career that is simultaneously well-paying and achievable, with hopef...
Reminder of classic comment from Will Newsome: Condensed Less Wrong Wisdom: Yudkowsky Edition, Part 1.
I've noticed that I have developed a habit of playing dumb.
Let me explain. When someone says something that sounds stupid to me, I tend to ask the obvious question or pretend to be baffled, as if I'd never heard of the issue before, rather than giving a lecture. I do this even when it is ridiculously improbable that I don't already know about, and simply disagree with, said issue. I'm non-confrontational by nature, which probably had something to do with my slipping into this habit, but I also pride myself on being straightforward, so...
What I'm wondering, is it...
Is there a strong reason to think that morality is improving? Contrast with science, in which better understanding of physics leads to building better airplanes, notwithstanding the highly persuasive critiques of science from Kuhn, et al. But morality has no objective test.
100 years ago, women were considered inherently inferior. 200 years ago, chattel slavery was widespread. 500 years ago, Europe practiced absolute monarchy. I certainly think today is an improvement. But proponents of those moralities disagree. Since the laws of the universe don't have a variable for justice, how can I say they are wrong?
Science is not airplanes, but the capability to produce airplanes. In 2011, we know how to make 1955 airplanes (as well as 2011 airplanes). In 1955, we only knew how to make 1955 airplanes. Science is advancing.
What would you suggest someone read if you were trying to explain to them that souls don't exist and that a person is their brain? I vaguely remember reading something of Eliezer's on this topic, and someone said they would read some articles if I sent them their way. Would it just be the Free Will sequence?
(NB -- posting this under the assumption that open threads have subsumed off-topic threads, of which I haven't seen any recently. If this is mistaken, I'll retract here and repost in an old off-topic thread)
I've seen numerous threads on LessWrong discussing the appeal of the games of go, mafia, and diplomacy to rationalists. I thought I would offer some additional suggestions in case people would like to mix up their game-playing a bit, for meetups or just because. Most of these involve a little more randomness than the games listed above, but I don't r...
I've come to realize that I don't understand the argument that Artificial Intelligence will go foom as well as I'd like. That is, I'm not sure I understand why AI will inherently become massively more intelligent than humans. As I understand it, there are three points:
1. AI will be able to self-modify its structure.
2. By assumption, AI has goals, so self-modification to improve its ability to achieve those goals will make AI more effective.
3. AI thinks faster than humans because it thinks with circuits, not with meat.
The processing speed of a computer i
I've been thinking of this a bit recently, and haven't been able to come to any conclusion.
Apart from the fact that it discourages similar future behavior in others, is it good for people who do bad things to suffer? Why?
I recall a study showing that eating lower-GI breakfast cereals helps schoolchildren focus. Perhaps this is connected to the relationship between blood glucose and willpower?
Up until recently my diet was around 50% fruit and fruit-juice, but lately I've tried cutting fruit out and replacing it with carbs and fat and protein. I'm not sure whether this has strongly affected my willpower. My willpower /has/ improved, but I started exercising more and went onto cortisone around the same time, so I'm not sure what's doing it. However, the first few days without sugary food, esp...
Video: Eliezer Yudkowsky - Heuristics and Biases
Yudkowsky on fallacies, Occam's razor, witches, precision, and biases.
Video: How Should Rationalists Approach Death?
Skepticon 4 Panel featuring James Croft, Greta Christina, Julia Galef and Eliezer Yudkowsky.
Maybe some of you have already seen my Best of Rationality Quotes post. I plan to do it again this December. That one spanned 21 months of Rationality Quotes. Would you prefer to see a Best of 2011 or a Best of So Far?
(The practical ethics of posting on the internet are sometimes complicated. Ideally, all posts should be interesting, well-reasoned, and germane to the concerns of the community. But not everyone has such pure motives all of the time. For example, one can imagine some disturbed and unhealthy person being tempted to post an incoherent howl of despair, frustration, and self-loathing in a childish cry for attention that will ultimately be regretted several hours later. For the sake of their own reputation and good community standing (to say nothing of keeping...
Would it be possible for someone to help me understand uploading? I can understand easily why "identity" would be maintained through a gradual process that replaces neurons with non-biological counterparts, but I have trouble understanding the cases that leave a "meat-brain" and a "cloud-brain" operating at the same time. Please don't just tell me to read the quantum physics sequence.
Double counting of evidence in sports: is it justifiable to prominently list, as one of the few pieces of information about them, the number of shutouts (i.e. "clean sheets", games in which no points are surrendered) recorded by baseball pitchers and goalies? Assume the number of games played and total points allowed are also mentioned, so the information isn't misleading.
Are positive and negative utility fungible? What facts might we learn about the brain that would be evidence either way?
I read this term once, but I can't remember it, and every few months I remember that I can't remember the term and it bothers me. I've tried googling but with no success, and I think someone here may know.
The term refers to a category of products that are considered more valuable because of their high price. That is, more people buy such a product at a high price than would at a low price, because the high price makes it seem high-value and makes owning the product high-status. The Wikipedia page for the term mentioned Rolls-Royce cars as an example and said that Apple computers fit the term in the past but now do so less. Does this sound familiar to anyone? Thanks.
I didn't know where to put this. Maybe someone can help. I am trying to further understand evolution.
PLEASE correct my assumptions if they are inaccurate/wrong: 1) Organisms act instinctively in order to pass their alleles on. 2) Human biology is similar, but we have some sort of more developed intelligence (more developed, or a distinct one?) that allows us to weigh options and make decisions. Correct me if I am wrong, but it seems that we can act in contradiction to assumption #1 (e.g., taking birth control); is this because of the 2nd assumption? Do other animals act similarly (or is there some consciousness we have that they don't)? Or do they choose not to act in contradiction to assumption #1?
We are adaptation executers, not fitness maximizers.
That's equally the case for other animals.
Would people post interesting things to an "Alternate Universe 'Quotes' Thread"?
'Quotes' would include things like:
"They fuck you up, count be wrong" - Kid in The Wire, Pragmatist Alternative Universe when asked how he could keep count of how many vials of crack were left in the stash but couldn't solve the word problem in his math homework.
Teenage Mugger: [Dundee and Sue are approached by a black youth stepping out from the shadows, followed by some others] You got a light, buddy?
Michael J. "Crocodile" Dundee: Yeah, sure k...
This probably wouldn't work, but has anyone tried to create strong AI by just running a really long evolution simulation? You could make it faster than our own evolution by increasing the evolutionary pressure for intelligence. Perhaps run this until you get something pretty smart, then stop the sim and try to use that 'pretty smart' thing's code, together with a friendly utility function, to make FAI? The population you evolve could be a group of programs that take a utility function as /input/, then try to maximize it. The programs which suck at maximizi...
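For what it's worth, here is a toy sketch of the loop being proposed; everything in it is a stand-in (the "genomes" are just vectors of numbers rather than programs, and the fitness, mutation, and selection rules are arbitrary illustrative choices):

    import random

    def evolve(utility, genome_len=20, pop_size=50, generations=200):
        """Evolve candidate genomes against a supplied utility function."""
        pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: keep the half that scores best on the utility function.
            pop.sort(key=utility, reverse=True)
            survivors = pop[: pop_size // 2]
            # Reproduction with mutation: perturb copies of random survivors.
            children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children
        return max(pop, key=utility)

    # Example utility function: prefer genomes whose entries are all near 0.5.
    best = evolve(lambda genome: -sum((g - 0.5) ** 2 for g in genome))

The gap between this and evolving actual intelligence is, of course, exactly where the "probably wouldn't work" lives: the search space of programs is astronomically larger, and each fitness evaluation far more expensive.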
Recent results suggest that red dwarf stars may have habitable planets after all. Summary article in New Scientist. These stars are much more common than G-type stars like the sun, and moreover, previous attempts at searching for life (such as looking for radio waves or looking for planets that show signs of oxygen) have focused on G-type stars. The basic idea of this new result is that water ice will more effectively absorb radiation from red dwarfs (due to the infrared wavelengths that much of their output occurs in), allowing planets which are farther from the red dwarf to have higher temperatures.
The main relevance to LW is that this increases the set of star systems with a potential for life by a large factor. There are around 5 times as many red dwarfs as there are stars like our sun, but the direct increase isn't by a factor of five, since red dwarfs were already known to have a habitable zone; it was just considered to be small and close to the star. We've already discussed recent results which suggest that a large fraction of G-type stars have planets in their habitable zones, but this potentially swamps even that effect. In that thread, many people suggested that they already assumed habitable planets were common, but this seems to suggest that they are even more common than anyone was thinking.
This may force an update toward putting more of the Great Filter ahead of us.