If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
(I plan to make these threads from now on. Downvote if you disapprove. If I miss one, feel free to do it yourself.)
An outside view of LessWrong:
I've had a passing interest in LW, but about 95% of all discussions seem to revolve around a few pet issues (AI, fine-tuning ephemeral utilitarian approaches, etc.) rather than any serious application to real life in policy positions or practical morality. So I was happy to see a few threads about animal rights and the like. I am still surprised, though, that there isn't a greater attempt to bring the LW approach to bear on problems that are relevant in a more quotidian fashion than the looming technological singularity.
As far as I can tell, the reason for this is that in practical matters, "politics is the mind killer" is the mind killer.
Is there an argument behind "quotidian" besides "I have a short mental time horizon and don't like to think weird thoughts"?
Why would LessWrong be able to come to a consensus on political subjects? Who would care about such a consensus if it came about?
I sometimes run into a situation where I see a comment I'm ambivalent about, one I would normally not vote on. However, this comment also has an extreme vote total, either very high or very low. I would prefer the total to be closer to 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have. What do you do in this situation?
I would prefer this comment to be more like 0, but I'm not sure it's acceptable to vote according to what I want the total to be, as opposed to what I think about the post, because it gives me more voting power than I would otherwise have.
You get to modify the karma rating by one in either direction. Do so in whatever manner seems most desirable to you.
You have too much voting power if you create a sock puppet and vote twice.
You're assuming that biasing karma scores towards zero (relative to what they would be before) is bad. Sure, it could be, but I don't see any particular reason why.
Some thinking is easier in privacy.
In a fascinating study known as the Coding War Games, consultants Tom DeMarco and Timothy Lister compared the work of more than 600 computer programmers at 92 companies. They found that people from the same companies performed at roughly the same level — but that there was an enormous performance gap between organizations. What distinguished programmers at the top-performing companies wasn’t greater experience or better pay. It was how much privacy, personal workspace and freedom from interruption they enjoyed. Sixty-two percent of the best performers said their workspace was sufficiently private compared with only 19 percent of the worst performers. Seventy-six percent of the worst programmers but only 38 percent of the best said that they were often interrupted needlessly.
These are interesting results, but the research dates from 1985 ("Programmer Performance and the Effects of the Workplace," Proceedings of the 8th International Conference on Software Engineering, August 1985). It seems unlikely that things have changed, but I don't know whether the results have been replicated.
The biggest risk of "existential risk mitigation" is that it will be used by the "precautionary principle" zealots to shut down scientific research. There is some evidence that this has been attempted already; see the fear-mongering associated with the startup of the new collider at CERN.
A slowdown, much less an actual halt, in new science is the one thing I am certain will increase future risks, since it will undercut our ability to deal with any disasters that actually do occur.
As part of my work for Luke, I looked into price projections for whole genome sequencing (as opposed to SNP genotyping, which I expect to pass the $100 mark by 2014). The summary is that I am confident whole-genome sequencing will be <$1000 by 2020, and slightly skeptical of <$100 by 2020.
Starting point: $4k in bulk right now, from Illumina http://investor.illumina.com/phoenix.zhtml?c=121127&p=irol-newsArticle_print&ID=1561106 (I ran into a ref saying knomeBASE did <$5k sequencing - http://hmg.oxfordjournals.org/content/20/R2/R132.full#xref-ref-...
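To put rough numbers on why I land there, here is a minimal sketch of the required price-decline rates (the $4k starting point is the Illumina figure above; the comparison to historical decline rates is my own assumption, based on sequencing costs having recently fallen much faster than Moore's law):

```python
import math

# Required halving rate to get from ~$4,000 per genome (early 2012)
# to each 2020 price target.
start_price, years_left = 4000, 8  # 2012 -> 2020

for target in (1000, 100):
    halvings = math.log2(start_price / target)
    print(f"<${target} by 2020 requires a halving every "
          f"{years_left / halvings:.1f} years")

# <$1000 by 2020 requires a halving every 4.0 years
# <$100 by 2020 requires a halving every 1.5 years
#
# Sequencing costs have lately halved considerably faster than every
# 4 years, so <$1000 looks safe; <$100 needs that unusually fast pace
# to hold for the whole decade, hence my slight skepticism.
```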
I'm reading Moldbug's Patchwork and considering it as a replacement for democracy. I expected it to be a dystopia, but it actually sounds like a neat place to live; it is, however, a scary Eutopia.
Has anyone else read this recently?
At LW, religion is often used as a textbook example of irrationality. To some extent, this is correct. Belief in the untestable supernatural is a textbook example of belief in belief and of privileging the hypothesis.
However, religion is not only about belief in the supernatural. A mainstream church that survives for centuries must have a lot of instrumental rationality. It must provide solutions for everyday life. There are centuries of accumulated knowledge in these solutions. Mixed with a lot of irrationality, sure. Many religious people were pretty smart, for example...
When it comes to accepting evolution, gut feelings trump fact
...“What we found is that intuitive cognition has a significant impact on what people end up accepting, no matter how much they know,” said Haury. The results show that even students with greater knowledge of evolutionary facts weren’t likelier to accept the theory, unless they also had a strong “gut” feeling about those facts...
In particular, the research shows that it may not be accurate to portray religion and science education as competing factors in determining beliefs about evolution. For th
A current thought experiment I'm pondering:
Scientists discover evidence that a popularly discriminated-against group really does have all the claimed negative traits. The evidence is so convincing that everyone who hears it instantly agrees this is the case.
If you want to picture a group, I suggest the discovery that Less Wrong readers are evil megalomaniacs who want to turn you into paperclips.
How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?
I've heard...
I'm puzzled that you describe this as a hypothetical.
For example, the culture I live in is pretty confident that five-year-olds are so much less capable than adults of acting in their own best interests that the expected value to the five-year-olds of having their adult guardians make important decisions on their behalf (and impose those decisions against their will) is extremely positive.
Consequently we are willing to justify subjecting five-year-olds to profound inequalities.
This affects my ideas of equality quite a bit, and always has. It is indeed OK to discriminate "against" them, and to treat them differently legally, and to not invite them to dinner, and always has been.
How, if at all, does this affect your ideas of equality? Is it now okay to discriminate against them? Treat them differently legally? Not invite them to dinner?
As a society, we are actually OK with discriminating against the vast majority of possible social groups. If this were not the case, life as we know it would simply become impossible, because we would have to treat everyone equally. That would be a completely crazy civilization to live in, especially if it considered the personal to be political.
You couldn't like Alice because she is smart, since that would be cognitivist. You couldn't hang out with Alice because she has a positive outlook on life, because that would discriminate against the mentally ill (those currently experiencing depression, for starters). You couldn't invite Alice out for lunch because you think she's cute, because that would be lookist. Etc., etc.
Without the ability to discriminate, free of a bad conscience, between people who have traits we find desirable or useful and those who don't, most people would be pretty miserable and perpetually repressed. Indeed, considering that humans are social creatures, I'd say the repression and psychological damage would dwarf anything ever caused by even the most puritanical sexual norms.
"Discrimination" usually just means "applying statistical knowledge about the group to individuals in the group" and is a no-no in our society. If you examine it too closely, it stops making sense, but it is useful in a society where the "statistical knowledge" is easily faked or misinterpreted.
What are some efficient ways to signal intelligence? Earning an advanced degree from a selective university seems rather cost intensive.
I figured someone would have said this by now, and it seems obvious to me, but I'm going to keep in mind the general principle that what seems obvious to me may not be obvious to others.
You said efficient ways to signal intelligence. Any signaling worth its salt is going to have costs, and the magnitude of these costs may matter less than their direction. So one way to signal intelligence is to act awkwardly, make obscure references, etc.; in other words, look nerdy. You optimize for seeming smart at the cost of signaling poor social skills.
Some less costly ones, which vary intensely by region, situation, the personality of those around you, and lots of other things, with the intended signal in parentheses:
In a Dark-Arts-y way, glasses?
(A brief search indicates there are several studies suggesting that wearing glasses increases perceived intelligence (e.g. this and this (paywall)), but there are also some suggesting that it has no effect (e.g. this (abstract only)))
Here are a few suggestions, some sillier than others, in no particular order:
Much depends on the audience one is signalling to.
Join organizations like Mensa
To stupid or average people, this is a signal of intelligence. To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".
Associate yourself with games and activities that are usually clustered with intelligence, e.g. chess, Go, etc.
Again this works as a signal to people who are at a remove from these activities, because the average player is smarter than the average human. People who themselves actually play, however, will have encountered many people who happen to be good at certain specific things that lend themselves to abstract strategy games, but are otherwise rather dim.
Speak eloquently, use non-standard cached thoughts where appropriate; be contrarian (but not too much)
Agree with this one. It's especially useful because it has the opposite sorting effect of the previous two. Other intelligent people will pick up on it as a sign of intelligence. Conspicuously unintelligent people will fail to get it.
...Learn other languages--doing so not on
To other intelligent people, my impression is that Mensa membership mostly distinguishes the subset of "intelligent and pompous about it" from the larger set of "intelligent people".
My experience seems to support this. The desire to signal intelligence is often so strong that it eliminates much of the benefit gained from high intelligence. It is almost impossible to have a serious discussion about anything, because people habitually disagree just to signal higher intelligence, and immediately jump to topics that are better for signalling. Rationality and mathematics are boring, conspiracy theories are welcome. And of course, Einstein was wrong; an extraordinarily intelligent person can see obvious flaws in the theory of relativity, even without knowing anything about physics.
Mensa membership will not impress people who want to become stronger and have some experience with Mensa. Many interesting people take the Mensa entry test, come to the first Mensa meeting... and then run away.
The best ways to signal intelligence are to write, say, or do something impressive. The details depend on the target audience. If you're trying to impress employers, do something hard and worthwhile, or write something good and get it published. If you're a techie and trying to impress techies, writing neat software (or adding useful features to existing software) is a way to go.
If you are asking about signalling intelligence in social situations, I suggest reading interesting books and thinking about them. Often, people use "does this person read serious books and think about them?" as a filter for smarts.
True, but free tuition or not, it's plenty costly in terms of opportunity cost.
(This is true to an almost hilarious extent if you're a humanities scholar like me: I'm not getting those ten (!!!!!!!) years of my life back.)
Well, I wrote a bit about what musicologists do here. In terms of research areas, I myself am the score-analyzing type of musicologist, so I spend my days analyzing music and writing about my findings. I'm an academic, so teaching is ordinarily a large part of what I do, although this year I have a fellowship that lets me do research full-time. Pseudonymity prevents me from saying more in public about what I research, although I could go into it by PM if you are really interested.
I am (well, was -- I don't play much any more) what I once described as a "low professional-level [classical] pianist." That is, I play classical piano really well by most standards, but would never have gotten famous. At a much lower level, I can also play jazz piano and Baroque harpsichord. I never learned to play organ, and never learned any non-keyboard instruments. Among professional musicologists, I'm pretty much average for both number of instruments I can play and level of skill.
As to pieces about Jupiter, I can only offer you my personal opinion -- being a musicologist doesn't make my musical preferences more valid than yours. Both pieces are great, and I had a special fondness for the H...
In Marcus Hutter's list of open problems relating to AIXI at hutter1.net/ai/aixiopen.pdf (this is not a link because markdown is behaving strangely), problems 4g and 5i ask what Solomonoff induction and AIXI would do when their environment contains random noise and whether they could still make correct predictions/decisions.
What is this asking that isn't already known? Why doesn't the theorem on the bottom of page 24 of this AIXI paper constitute a solution?
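For reference, here (reconstructed from memory, so check the paper for the exact statement) is the kind of bound I mean. For a binary sequence drawn from any computable measure $\mu$, Solomonoff's predictor $M$ satisfies

$$\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ \big( M(x_t = 1 \mid x_{<t}) - \mu(x_t = 1 \mid x_{<t}) \big)^2 \right] \;\le\; \frac{\ln 2}{2}\, K(\mu),$$

so the predictions converge to the true conditional probabilities with $\mu$-probability one even in noisy environments, which is why I'm asking what more 4g and 5i want.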
I've been incubating some thoughts for a while and can't seem to straighten them out enough to make a solid discussion post, much less a front-page article. I'll try to put them down here as succinctly as possible. I suspect that I have some biases and blind spots, and I invite constructive criticism. In other cases, I think my priors are simply different from the LW average because of my life experiences.
Probably because of how I was raised, I've always held the opinion that the path to world-saving should follow these general steps: 1) Obtain ...
Might it not be even more effective to convince others to become ultra-rich and fund the organizations you want to fund? (Actually, this doesn't seem too far off the mark from what SIAI is doing).
I humbly suggest that perhaps you haven't thought long enough about how easy it might actually be to become ultra-rich if you set out with that goal in mind.
Any arguments that legitimately push you towards that conclusion should be easily convertible into actual advice about how to become ultra-rich. I think you're underestimating the difficulty of turning vague good-sounding ideas into effective action.
Stephen Law on his new book, Believing Bullshit:
Intellectual black holes are belief systems that draw people in and hold them captive so they become willing slaves of claptrap. Belief in homeopathy, psychic powers, alien abductions - these are examples of intellectual black holes. As you approach them, you need to be on your guard because if you get sucked in, it can be extremely difficult to think your way clear again.
Something has been bothering me about Newcomb's problem, and I recently figured out what it is.
It seems to simultaneously postulate that backwards causality is impossible and that you have repeatedly observed backwards causality. If we allow your present decision to affect the past, the problem disappears, and you pick the million dollar box.
In real life, we have a strong expectation that the future can't affect the past, but in the Newcomb problem we have pretty good evidence that it can.
How did Less Wrong get its name?
I have two guesses, which are not mutually exclusive but do not depend on each other:
I don't know if either of these is true, or both, or whatever. I want to know the real answer.
Searching this site and Google has been useless so far.
An unusual answer to Newcomb's problem:
I asked a friend recently what he would do if he encountered Newcomb's problem. Instead of giving either of the standard answers, he immediately attempted to create a paradoxical outcome and, as far as I can tell, succeeded. He claims that he would look inside the possibly-a-million-dollars box and do the following: if the box contains a million dollars, take both boxes; if the box contains nothing, take only that box (the empty one).
What would Omega do if he predicted this behavior, or is this somehow not allowed in the problem setup?
Not allowed. You get to look into the second box only after you have chosen. And even if both boxes were transparent, the paradox is easily fixed. Omega shouldn't predict what you will do (because that assumes you will ignore the content of the second box, and Omega isn't stupid like that) but what you would do if box B contained a million dollars. Then it would correctly predict that your friend would two-box in that situation, so it wouldn't put the million dollars into the second box, and your friend would take only the empty box, according to his strategy. So yeah.
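A toy model of that fix, in case it helps (my own sketch, nothing canonical): Omega predicts the agent's choice conditional on box B being full, and the friend's strategy then deterministically earns nothing.

```python
def omega_fills_box_b(policy):
    # Omega fills box B iff it predicts the agent would one-box
    # *conditional on B being full*.
    return policy(box_b_full=True) == "one-box"

def friends_policy(box_b_full):
    # "If the box contains a million dollars, take both boxes;
    #  if it contains nothing, take only that box."
    return "two-box" if box_b_full else "one-box"

filled = omega_fills_box_b(friends_policy)   # False: Omega leaves B empty
choice = friends_policy(box_b_full=filled)   # "one-box" on an empty box
payoff = (1_000_000 if filled else 0) + (1_000 if choice == "two-box" else 0)
print(filled, choice, payoff)                # False one-box 0
```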
So I was reading a book in the Ender's Game series, and at one point it talks about the idea of sacrificing a human colony for the sake of another species. It got me thinking about the following question: is it rational to protect 20 "piggies" (which are morally equivalent to humans) and sacrifice 100 humans, if the 20 piggies constitute 100% of their species' population and the humans represent a very, very small fraction of the human race? At first it seemed obvious that it's right to save the piggies, but now I'm not so sure. Ha...
Does anyone know how one would go about suggesting a new feature for predictionbook.com? I think it would be better if you could tag predictions, so that you could see separate ratings for predictions in different domains. Like, "Oh look, my predictions of 100% certainty about HPMOR are correct 90% of the time, but my predictions of 100% certainty about politics are right 70% of the time." Also, you could look at recent predictions for only a specific topic, or see how well calibrated another user is in a specific area.
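For concreteness, here is a minimal sketch of the per-tag calibration report I have in mind (the data shape is invented; this is not PredictionBook's actual format):

```python
from collections import defaultdict

# Invented records: (tag, stated probability, whether it came true).
predictions = [
    ("hpmor",    1.0, True), ("hpmor",    1.0, True),
    ("hpmor",    1.0, True), ("hpmor",    1.0, False),
    ("politics", 1.0, True), ("politics", 1.0, False),
]

by_tag_and_prob = defaultdict(list)
for tag, prob, came_true in predictions:
    by_tag_and_prob[(tag, prob)].append(came_true)

for (tag, prob), outcomes in sorted(by_tag_and_prob.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{tag}: stated {prob:.0%}, actual {hit_rate:.0%} "
          f"({len(outcomes)} predictions)")
# hpmor: stated 100%, actual 75% (4 predictions)
# politics: stated 100%, actual 50% (2 predictions)
```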
Moore's Law Won't Fade for Business Reasons
Some writers have claimed that excess computing power will reduce the effort put into designing new and more powerful chips. But even when most users can't make use of the additional power, fear of losing out to the competition will keep designers pushing. Eventually it will become too expensive to keep developing the new technology, but we are still a long way from those limits.
Depressing article opposing life extension research is depressing. Brief summary: In the least convenient possible world, human research trials would be unethically exploitative. And this is presented as an argument against attempting to end aging.
ZOMG, vaccines are part of the transhumanist agenda!! And are therefore unnatural and evil.
Spotted on Respectful Insolence.
I've found a video that would be really cool if it were true, but I don't know how to judge its truth, and it sounds ridiculous. This talk by Rob Bryanton deals with higher spatial dimensions, and suggests that different Everett branches are separated in the fifth dimension, universes with different physical laws are separated in the sixth dimension, etc. I can't find much information about the creator online, but one site accuses him of being a crank. Can somebody who knows something about physics tell me whether there is any grain of truth to this possibility?
I wish it to be known that the next person to sign on as a beta for my fiction is entitled to the designation "pi".
"My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,"
"I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,"
Question regarding the quantum physics sequence:
This article tells me that the amplitudes for a photon leaving a half-silvered mirror in the two directions are 1 and i (for going straight and for turning, respectively), given an amplitude of 1 for the photon reaching the mirror. This must be a simplification; otherwise, two half-mirrors in a line would give an amplitude of i for the photon turning at the first mirror, an amplitude of i for the photon turning at the second mirror, and an amplitude of 1 for the photon passing through both. This means that the squared-modulus ratio is 1...
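To make the arithmetic explicit, a quick sketch of the bookkeeping under the article's simplified rule (multiply by 1 for straight, by i for a turn; the missing 1/sqrt(2) normalization factor is exactly the simplification at issue):

```python
STRAIGHT, TURN = 1, 1j   # simplified half-mirror rule from the article

amp_in = 1
turn_at_first  = amp_in * TURN                 # i
turn_at_second = amp_in * STRAIGHT * TURN      # i
pass_both      = amp_in * STRAIGHT * STRAIGHT  # 1

# Each outcome has squared modulus 1, so the three probabilities come
# out in a 1:1:1 ratio and sum to 3, not 1. With the proper 1/sqrt(2)
# factor at each mirror they would sum to 1.
for amp in (turn_at_first, turn_at_second, pass_both):
    print(amp, abs(amp) ** 2)
```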
Utility functions do a terrible job of modeling our conscious wants and desires. Our conscious minds are too discontinuous to be modeled effectively. But our total minds are far more continuous: radical changes are rare, which is why "character" and "personality" are recognizable over time, often despite our conscious desires, even quite strong ones.
What is the rational case for having children?
One can tell a story about how evolution made us not simply enjoy the act that causes children but actively want children. But that's not a reason; that's a description of the desire.
One could tell a story about having children as a source of future support or cost-controlled labor (e.g., farmhands). But I think the evidence is pretty strong that children are not wealth-maximizing in the modern era.
And if there is no case for having children, shouldn't that bother us on "our morality should add up to normal, ceteris paribus" grounds?