Open Thread: May 2010
You know what to do.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Comments (543)
I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?
Upvoted your comment for asking in the first place.
If your post was a novel explanation of some aspect of rationality, and wasn't just about landing punches on libertarianism, I'd want to see it. If it was pretty much just about criticizing libertarianism, I wouldn't.
I say this as someone very unsympathetic to libertarianism (or at least what contemporary Americans usually mean by 'libertarianism') - I'm motivated by a feeling that LW ought to be about rationality and things that touch on it directly, and I set the bar high for mind-killy topics, though I know others disagree with me about that, and that's OK. So, though I personally would want to downvote a top-level post only about libertarianism, I likely wouldn't, unless it were obnoxiously bare-faced libertarian baiting.
I agree on most counts.
However, I'd also enjoy reading it if it were just a critique of libertarianism but done in an exceptionally rational way, such that if it is flawed, it will be very clear why. At minimum, I'd want it to explicitly state what terminal values or top-level goals it is assuming we want a political system to maximize, consider only the least convenient possible interpretation of libertarianism, avoid talking about libertarians too much (i.e. avoid speculating on their motives and their psychology; focus as much as possible on the policies themselves), separate it from discussion of alternatives (except insofar as is necessary to demonstrate that there is at least one system from which we can expect better outcomes than libertarianism), not appear one-sided, avoid considering it as a package deal whenever possible, etc.
I will vote it down unless you say something that I have not seen before. I think that it was a good idea to not make LW a site for rehearsing political arguments, but if you have thought of something that hasn't been said before and if you can explain how you came up with it then it might be a good reasoning lesson.
I will only vote it up if there's something I haven't seen before, but will only vote it down if I think it's dreadful.
We may not be ready for it yet, but at some point we need to be able to pass the big test of addressing hard topics.
I'd love to read it, though I may well disagree with a lot of it. I'd prefer it if it were kept more abstract and philosophical, as opposed to discussing current political parties and laws and so forth: I think that would increase the light-to-heat ratio.
Not enough information to answer. I will upvote your post if I find it novel and convincing by rationalist lights. Try sending draft versions to other contributors that you trust and incorporate their advice before going public. I can offer my help, if being outside of American politics doesn't disqualify me from that.
I'm interested.
Has anyone read "Games and Decisions: Introduction and Critical Survey" by R. Duncan Luce and Howard Raiffa? Any thoughts on its quality?
I have a cognitive problem and I figured someone might be able to help with it.
I think I might have trouble filtering stimuli, or something similar. A dog barking, an ear ache, loud people, or a really long day can break me down. I start to have difficulty focusing. I can't hold complex concepts in my head. I'll often start a task, and quit in the middle because it feels too difficult and try to switch to something else, ultimately getting nothing done. I'll have difficulty deciding what to work on. I'll start to panic or get intimidated. It's really an issue.
I've found two things that help:
Music is good at filtering out noise and helping me focus. However, sometimes I can't listen to it, or it isn't enough.
The other thing is to make an extremely granular task list and then follow it without question. The tasks have to be really small and seem manageable.
Anyone have any suggestions? I'm not neurotypical in the broader sense, but I don't believe I fall on the autism spectrum.
I have similar sensory issues on occasion and believe them to be a component of my autism, but if you don't have other features of an ASD then this could just be a sensory integration disorder. When it's an auditory processing issue, I find that listening to loud techno or other music with a strong beat helps more than other types of music, and ear-covering headphones help filter out other noise. I'm more often assaulted by textures, which I have to deal with by avoiding contact with the item(s).
As for the long day, that sounds like a matter of running out of (metaphorical) spoons. Paying attention to what activities drain or replenish said spoons, and choosing spoon-neutral or spoon-positive activities whenever they're viable options, is the way to manage this.
This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists
Given that this now opens the door to artificially designed and deployed harmful viruses, perhaps unfriendly AI falls a few notches on the existential risk ladder.
I remember hearing a few anecdotes about abstaining from food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered interest in this topic again, but a quick Google search did not return much on the topic (fasting is drowned in religious references).
I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?
Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.
I agree that AI deterrence will necessarily fail if:
All AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and
any deterrence simulation counts as a threat.
Why do you believe that both or either of these statements are true? Do you have some concrete definition of 'threat' in mind?
In another comment I coined (though not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not turn them into druggie losers, wholesale killers, or other sorts of paperclipper. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying goes: if not now, when, and if not this, what?
I don't have children and don't intend to. I have two nephews and a niece, but have not had much to do with their lives, beyond sending them improving books for birthdays and Christmas. I wonder if LessWrongers, with or without children, have anything to say on how to raise children to be rational non-paperclippers?
I think that question is a conversation stopper because those who do not have children do not feel qualified, and those who do have children know what a complex and tricky question it is. Personally I don't think there is a method that fits all children and all relationships with them. But... You might try activities rather than presents. 'Oh cool, uncle is going to make a video with us and we're going to do it at the zoo.' If you get the right activity (it depends on the child), they will remember it and what you did and said for years. I had an uncle that I only saw a few times, but he showed me how to make and throw a boomerang. He explained why it returned. I have thanked him for that day for 60 years.
Impossible motion: magnet-like slopes
http://illusioncontest.neuralcorrelate.com/2010/impossible-motion-magnet-like-slopes/
http://www.nature.com/news/2010/100511/full/news.2010.233.html
Criminal profiling, good and bad
Article discusses the shift from impressive-looking guesswork to use of statistics. Also has an egregious example of the guesswork approach privileging the hypothesis.
Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.
What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.
"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"
"Senator, I've made a thorough investigation, and I'm afraid there has been sin in the ranks of British Petroleum. BP has a deep need to re-learn penance, to accept purification, to learn on one hand forgiveness but also the need for justice."
"Thank you, Mr. Hayward. I'm glad you're on top of the situation."
I wonder if I can use this at work.
That sounds like the kind of remark that goes out of its way to offend several categories of people at once. :)
But in that category the gold standard remains Evelyn Waugh's “now that they no longer defrock priests for sexual perversities, one can no longer get any decent proofreading.”
"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"
Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.
http://www3.interscience.wiley.com/journal/123213582/abstract?CRETRY=1&SRETRY=0
Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth who had had one psychotic episode or a similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears to be way more effective in reality than I would have expected.
http://archpsyc.ama-assn.org/cgi/content/short/67/2/146
Take your fish oil, people.
Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.
Edit: They are tagged as "siai interviews".
Rationality comix!
Hover over the red button at the bottom (to the left of the RSS button and social bookmarking links) for a bonus panel.
Edit: "Whoever did the duplication" would be a better answer than "The guy who came first", admittedly. The duplicate and original would both believe themselves to be the original, or, if they are rationalists, would probably withhold judgment.
Speaking as an engineer, I'd think he wasn't talking about subjective aspects: "The guy who came first" is the one which was copied (perfectly) to make the clone, and therefore existed before the clone existed.
There is an article in this month's Nature examining the statistical evidence for universal common descent. This is the first time someone has taken the massive amounts of genetic data and applied a Bayesian analysis to determine whether the existence of a universal common ancestor is the best model. Most of what we generally think of as evidence for evolution and shared ancestry is evidence for shared ancestry of large collections, such as mammals or birds, or of smaller groups. Some of the evidence is for common ancestry of a phylum. There is prior evidence for shared ancestry based on primitive fossils and on the shared genetic code and extreme similarity of genomes across very different species. This is the first paper to make that last argument mathematically rigorous. The paper more or less concludes that a Bayesian analysis using just the known genetic and phylogenetic data puts the universal common ancestor model as overwhelmingly more likely than other models. (The article is behind a paywall, so until I get back to the university tomorrow I won't be able to comment on this in any substantial detail, but this looks pretty cool and is a good example of how careful Bayesianism can help make something more precise.)
Ok. Reading the paper now. Some aspects are a bit technical, so I don't follow all of the arguments or genetic claims other than at a broad level. However, the money quote is "Therefore, UCA is at least 10^2,860 times more probable than the closest competing hypothesis." (I've replaced the superscript with a ^ because I don't know how to format superscripts.) 10^2860 is a very big number.
What were they using for prior probabilities for the various candidate hypotheses? Uniform? Some form of complexity weighting? Other?
They have hypotheses concerning whether Eukarya, Archaea, and Bacteria share a common ancestor or not, or possibly in pairs. All hypotheses were given equal prior likelihood.
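With uniform priors, the posterior odds reduce to the likelihood ratio, so a Bayes factor like the quoted 10^2,860 can be read directly off the models' log-likelihoods. A minimal sketch of the arithmetic (the log-likelihood values here are invented for illustration, not taken from the paper):

```python
# Hypothetical base-10 log-likelihoods of the genetic data under two models.
# These numbers are made up; the paper's actual values differ.
log10_lik = {
    "UCA": -1000.0,               # universal common ancestry
    "separate_ancestry": -3860.0  # closest competing hypothesis
}

# With equal priors, posterior odds = likelihood ratio (the Bayes factor).
log10_bf = log10_lik["UCA"] - log10_lik["separate_ancestry"]
print(f"UCA is 10^{log10_bf:.0f} times more probable than the alternative")
```

The point of working in log space is that odds this large overflow ordinary floating point; the ratio itself is never computed directly.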
I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to amazon, going around in LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.
I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.
Does this sound viable?
Yes, under two conditions:
It is announced in advance and properly implemented.
It does not delete other affiliate codes if links are posted with affiliate codes.
Breaking both these rules is one of the many things which Livejournal has done wrong in the last few years, which is why I mention them.
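A minimal sketch of what such a link rewrite might look like, respecting the condition about not clobbering existing affiliate codes (the tag value `lesswrong-20` is a made-up placeholder, and real Amazon affiliate links have more URL variants than this handles):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

AFFILIATE_TAG = "lesswrong-20"  # hypothetical placeholder tag

def add_affiliate_tag(url: str) -> str:
    """Add an affiliate tag to an Amazon link unless one is already present."""
    parts = urlsplit(url)
    if "amazon." not in parts.netloc:
        return url  # leave non-Amazon links untouched
    query = dict(parse_qsl(parts.query))
    if "tag" in query:
        return url  # respect an existing affiliate code
    query["tag"] = AFFILIATE_TAG
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For example, `add_affiliate_tag("http://www.amazon.com/dp/0143038410")` would append `?tag=lesswrong-20`, while a link that already carries a `tag` parameter passes through unchanged.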
Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.
I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.
Is there any reason to think the common intuition is right?
TLDR: “War” is the inter-group version of “duel” (ie, lawful conflict). “Assassination” is the inter-group version of “murder” (ie, unlawful conflict).
My first “intuition about the intuition” is that it’s a historical consequence: during most of history, things like freedom, and power and responsibility for enforcement of rules when conflicts (freedom vs. freedom) occur, were stratified. Conflicts between individuals in a family are resolved by the family (e.g. by the head thereof), conflicts between families (or individuals in different families) by tribal leaders or the like. During feudalism the “scale” was formalized, but even before that we had a long series: family → group → tribe → city → barony → kingdom → empire.
The key about this system is that attempts to “cross the borders” in this system, for instance punishing someone from a different group directly rather than invoking punishment from that group’s leadership is seen as an intrusion in that group’s affairs.
So assassination becomes seen as the between-group version of murder: going around the established rules of society. That’s something that is selected against in social environments (and has been discussed elsewhere).
By contrast, war is the “normal” result when there is no higher authority to appeal to in a conflict of groups. Note that, analogously, for much of history duels were considered correct methods of conflict resolution between some individuals, as long as they respected some rules. So as long as, at least in theory, there are laws of war, war is considered a direct extension of that instinct. Assassination is seen as breaking rules, so it’s seen differently.
A few other points:
What an excellent analysis. I voted up. The only thing I can think of that could be added is that making a martyr can backfire.
Who thinks assassination is worse than war?
I could make an argument for it, though: If countries engaged regularly in assassination, it would never come to a conclusion, and would not reduce (and might increase) the incidence of war. Phrasing it as "which is worse" makes it sound like we can choose one or the other. This assumes that an assassination can prevent a war (and doesn't count the cases where it starts a war).
I've always assumed that the norm against assassination, causally speaking, exists mostly due to historical promotion by leaders who wanted to maintain a low-assassination equilibrium, now maintained largely by inertia. (Of course, it could be normatively supported by other considerations.)
It makes sense to me that people would oversimplify the effect of assassination in basically the way you describe, overestimating the indispensability of leaders. I know I've seen a study on the effects of assassination on terrorist groups, but can't find a link or remember the conclusions.
Cool paper: When Did Bayesian Inference Become “Bayesian”?
http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf
You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.
Awwwww :D
PS: If this endorsement of house food quality encourages anyone to apply for an SIAI fellowship, note your inspiration in the e-mail! We receive referral rewards!
Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...
No-name terrorists now CIA drone targets
http://www.cnn.com/2010/TECH/05/07/wired.terrorist.drone.strikes/index.html?hpt=C1
Related, Obama authorizes assassination of US citizen. I'm amazed how little anybody seems to care.
I care, and approve, provided that Al-Awlaki can forestall it if he chooses by coming to the US to face charges.
I don't believe in treating everything with the slippery-slope argument. That way lies the madness I saw at the patent office, where every decision had to be made following precedent and procedure with syntactic regularity, without any contaminating element of human judgement.
Something problematic: if you're a cosmopolitan, as I assume most people here are, can you consistently object to assassinations of citizens if you don't object to assassinations of non-citizens?
Probably not, though you might be able to make a case that if a particular non-citizen is a significant perceived threat but there is no legal mechanism for prosecuting them then different rules apply. Most people are not cosmopolitan however and so I am more surprised at the lack of outrage over ordering the assassination of a US citizen than by the lack of outrage over the assassination of non-US citizens.
The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling. The fact that it is happening in Pakistan honestly seems more problematic to me in terms of the badness that comes with not having "clearly defined parties who can verifiably negotiate". Did the US declare war on Pakistan without me noticing? Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country? Are there even functioning Westphalian nation states in this area? (These are honest questions - I generally don't watch push media, preferring instead to formulate hypotheses and then search for news or blogs that can answer the hypothesis.)
The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US. Moreover, it seems like something I should take responsibility for doing something about because it is happening entirely within my own country.
Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.
I don't know if they have responded to this specific issue, but the ACLU is working against the breakdown of rule of law in the name of national defense.
Thanks for the link. I have sent them an email asking for advice as to whether this situation is as bad as it seems to be, and if so, what I can do to make things less bad. I have also added something to my tickler file so that on May 21 I will be reminded to respond here with a followup even if there is no response from the ACLU's National Security Project.
I think I have done my good deed for the day :-)
ETA: One thing to point out is that before sending the email I tried googling "Presidential Assassination Program" in Google News, and the subject seems to have had little coverage recently. This was the best followup I could find in the last few days, and it spoke of general apathy on the subject. This led me to conclude that "not enough people had noticed" yet, so I followed through with my email.
Following up for the sake of reference...
I did not get a reply from the ACLU on this subject and just today sent a second email asking for a response. If the ACLU continues to blow me off by June 1st, I may try forwarding my unanswered emails to several people at the ACLU (to see whether the blowoff was simply due to incompetence on the part of the one person monitoring the email).
If that doesn't work then I expect I'll try Amnesty International as suggested by Kevin. There will be at least one more comment with an update here, whatever happens, and possibly two or three :-)
This will be my final update on this subject. I received an email from a representative of the ACLU. He apologized for the delayed response and directed me to a series of links that I'm passing on here for the sake of completeness.
First, there is an April 7th ACLU press release about extra-judicial killings of US citizens, that press release notes that an FOIA request had already been filed which appears to ask for the details of the program to see specifically how it works in order to find out if it really violates any laws or not, preparatory to potential legal action.
Second, on April 19th the Washington Post published a letter for the ACLU's Executive Director on the subject. This confirms that the issue is getting institutional attention, recognition in the press, and will probably not "slip through the cracks".
Third, on April 28th the ACLU sent an open letter to President Barack Obama about extrajudicial killings which is the same date that the ACLU's update page for "targeted killings" was last updated. So it seems clear that steps have been taken to open negotiations with an individual human being who has the personal authority to cancel the program.
This appears to provide a good summary of the institutional processes that have already been put in motion to fix the problems raised in the parent posts. The only thing left to consider appears to be (1) whether violations of the constitution will be adequately prevented and (2) to be sure that we are not free riding on the public service of other people too egregiously.
In this vein, the ACLU has a letter writing campaign organized so that people can send messages to elected officials asking that they respect the rule of law and the text of treaties the US has signed, in case the extra-judicial killings of US citizens really are being planned and carried out by the executive branch without trial or oversight by the courts.
Sending letters like these may help solve the problem a little bit, is very unlikely to hurt anything, and may patch guilt over free riding :-)
In the meantime I think "joining the ACLU as a dues paying member" just bumped up my todo list a bit.
Pakistan does not have anything close to a force monopoly in the region we're attacking. They've as much as admitted that, I believe. I actually think I'm okay with the attacks as far as international law goes.
I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.
It is a further dehumanization of the process of killing and so tends to undermine any inbuilt human moral repugnance produced by violence. To the extent that you think that killing humans is a bad thing I suggest that is something that should be of concern. It is one more level of emotional detachment for the drone operators beyond what can be observed in the Apache pilots in the recent Wikileaks collateral murder video.
ETA: This Dylan Rattigan clip discusses some of the concerns raised by the Wikileaks video. The same concerns apply to drone attacks, only more so.
Is there a consensus on whether or not it's OK to discuss not-specifically-rationality-related politics on LW?
Doesn't bother me. I think the consensus is that we should probably try and stay at a meta-political level, looking at a much broader picture than that which is discussed on the nightly news. The community is now mature enough that anything political is not automatically taboo.
I posted this not to be political, but because people here are generally interested in killer robots and their escalation of use.
This looks like a very expensive way to kill terrorists: something like $100k per militant, not counting sunk costs such as the $4.5 million price tag per drone, and not trying to estimate the cost of civilian deaths.
If we get forums, I'd like a projects section. A person could create a project, which is a form centered around a problem to work on with other people over an extended period of time.
This seems like the sort of activity Google Wave is (was?) meant for.
Self-forgiveness limits procrastination
No idea about the time lag-- my posts show up quickly-- but my intuition says that a fair coin has a 1/2 probability of being heads, and nothing about the experiment changes that.
Nope, new posts should show up immediately (or maybe with a half hour delay or so; I seem to recall that the sidebars are cached, but for far less than two days). Did it appear to post successfully, just not showing up? The only thing I can think of is that you might not have switched the "Post to" menu from "Drafts for neq1" to "LessWrong".
Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax free ("'cause it's to provide for the spouse and kids"), and to withdraw from your premiums and borrow against yourself (and pay yourself back).
Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?
Neanderthal genome reveals interbreeding with humans:
http://www.newscientist.com/article/dn18869-neanderthal-genome-reveals-interbreeding-with-humans.
Whoooohooo! Awesomest thing in the last ten years of genetics news for me! YAAY! WHOO HOO!!! /does a little dance/ I want to munch on that delicious data!
Ahem.
Sorry about that.
But people, 1 to 4% admixture! This is big! This gets an emotional response from me! That survived more than a thousand generations of selection; the bulk of it is probably neutral, but think about how many perfectly useful and working alleles we may have today (since the Neanderthals were close to us to start with). After 600,000 or so years of separation, these guys evolved apart from us for nearly as long as the fictional vampires in Blindsight.
It seems some of us have in our genes a bit our ancestors picked up from another species! Could this have anything to do with the behavioural modernity that started off at about the same time the populations crossbred in the Middle East ~100,000 years ago? Which adaptations did we pick up? Think of the possibilities!
Ok, I'll stop the torrent of downvote-magnet words and get back to reading about this. And then everything else my grubby little paws can get hold of on Neanderthals; I need to brush up!
Edit: I just realized part of the reason I got so excited is that it shows I may have a bit of exotic ancestry. Considering how much people, all else being equal, like to play up their "foreign" or "unusual" semimythical ancestors or roots (in conversation, national myths, or on the census) instead of the ethnicity of the majority of their ancestors, this may be a more general bias. I could of course quickly justify it with an evo-psych "just so" story, but I'll refrain from that and search for what studies have to say about it.
I definitely think this is top-level post material, but I didn't have enough to say to avoid annoying the people who think all top-level posts need to be at least 500 words long.
I think this is very interesting but I'm not sure it should be a top-level post. Not due to the length but simply because it isn't terribly relevant to LW. Something can be very interesting and still not the focus here.
The Cognitive Bias song:
http://www.youtube.com/watch?v=3RsbmjNLQkc
Not very good, but, you know, it's a song about cognitive bias, how cool is that?
Don't know if anyone else was watching the stock market meltdown in realtime today, but as the indices were plunging down the face of what looked a bit like an upside-down exponential curve, driven by HFT algorithms gone wild, and the financial news sites started going down under the traffic, I couldn't help thinking that this is probably what the singularity would look like to a human. Being invested in VXX made it particularly compelling viewing.
To save everyone the googling: VXX is an exchange traded fund (basically a stock) whose value tracks the level of the VIX index. The VIX index is a measure of the volatility of the markets, with higher values indicating higher volatility (volatility here generally implying lost market value). VIX stands at about 33 now, and was around 80 during the '08 crisis.
The unrecognized death of speech recognition
Interesting thoughts about the limits encountered in the quest for better speech recognition, the implications for probabilistic approaches to AI, and "mispredictions of the future".
What do y'all think?
Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.
Curiously, what happens when I refresh LW (or navigate to a particular LW page like the comments page) and I get the "error encountered" page with those little witticisms? Is the site 'busy' or being modified or something else ...? Also, does everyone experience the same thing at the same moment or is it a local phenomenon?
Thanks ... this will help me develop my 'reddit-page' worldview.
I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.
Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.
A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.
A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.
I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"
Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.
Continuing on the "last responsible moment" comment from one of the other responders - would it not be helpful to consider putting off a task until the last moment as an attempt to gather the largest amount of information pursuant to the task without incurring any penalty?
Having poor focus and attention span, I use an online to-do list for work and home life where I list every task as soon as I think of it, whether it is to be done within the next hour or the next year. The list soon mounts up, occasionally causing me anxiety, and I regularly have cause to carry a task over to the next day for weeks at a time - but what I have found is that a large number of tasks get removed because a change makes them no longer necessary, and a small proportion get notes added to them while they stay on the list, so that by the time a task gets actioned it has been enhanced by the extra information.
By having everything captured I can be sure no task will be lost, but by procrastinating I can ensure the highest level of efficiency in the tasks that I do eventually perform.
Thoughts?
the most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.
I've also read that people with bipolar disorder are more likely to commit suicide as their depression lifts.
But antidepressant effects can be very complicated. I know someone who says one med made her really really want to sleep with her feet where her head normally went. I once reacted to an antidepressant by spending three days cycling through the thoughts, "I should cut off a finger" (I explained to myself why that was a bad idea) "I should cut off a toe" (ditto) "I should cut all the flesh from my ribs" (explain myself out of it again), then back to the start.
The akrasia-lifting explanation certainly seems plausible to me (although "mood" may not be the other relevant variable--it may be worldview and plans; I've never attempted suicide, but certainly when I've self-harmed or sabotaged my own life it's often been on "autopilot", carrying out something I've been thinking about a lot, not directly related to mood--mood and beliefs are related, but I've noticed a lag between one changing and the other changing to catch up to it; someone might no longer be severely depressed but still believe that killing themself is a good course of action). Still, I would also believe an explanation that certain meds cause suicidal impulses in some people, just as they can cause other weird impulses.
Interesting. Are you sure that is going on when antidepressants have paradoxical effects?
Not absolutely certain. It's an impression I've picked up from mass media accounts, and it seems reasonable to me.
It would be good to have both more science and more personal accounts.
Thanks for asking.
I suspect it’s just a figure of speech, but can you elaborate on what you meant by “evil” above?
Good observations.
Sometimes I procrastinate for weeks about doing something, generally non-urgent, only to have something happen that would have made the doing of it unnecessary. (For instance, I procrastinate about getting train tickets for a short trip to visit a client, and the day before the visit is due the client rings me to call it off.)
The useful notion here is that it generally pays to defer action or decision until "the last responsible moment"; it is the consequence of applying the theory of options valuation, specifically real options, to everyday decisions.
A top-level post about this would probably be relevant to the LW readership, as real options are a non-trivial instance of a procedure for decision under uncertainty. I'm not entirely sure I'm qualified to write it, but if no one else steps up I'll volunteer to do the research and write it up.
I work in finance (trading) and go through my daily life quantifying everything in terms of EV.
I would just caution in saying that, yes procrastinating provides you with some real option value as you mentioned but you need to weigh this against the probability of you exercising that option value as well as the other obvious costs of delaying the task.
Certain tasks are inherently valuable to delay as long as possible and can be identified as such beforehand. As an example, work-related emails that require me to make a decision or choice I put off as long as is politely possible, in case new information comes in that would influence my decision.
On the other hand, certain tasks can be identified as possessing little or no option value when weighted with the appropriate probabilities. What is the probability that delaying the payment of your cable bill will have value to you? Perhaps if you experience an emergency cash crunch. Or the off chance that your cable stops working and you decide to try to withhold payment (not that this will necessarily do you any good).
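The weighing described above can be made concrete with a toy expected-value sketch. All probabilities and dollar figures below are invented purely for illustration:

```python
# Toy expected-value comparison: act now vs. delay for option value.
# All probabilities and dollar figures are invented for illustration.

def ev_of_delaying(p_new_info, value_of_info, p_penalty, penalty_cost):
    """Expected value of delaying a task, relative to doing it immediately."""
    option_value = p_new_info * value_of_info   # chance the delay pays off
    expected_cost = p_penalty * penalty_cost    # chance the delay hurts
    return option_value - expected_cost

# A work email: new information arrives fairly often, delay penalty is mild.
email = ev_of_delaying(p_new_info=0.30, value_of_info=50.0,
                       p_penalty=0.05, penalty_cost=20.0)

# The cable bill: new information almost never matters, but a late fee looms.
bill = ev_of_delaying(p_new_info=0.01, value_of_info=50.0,
                      p_penalty=0.40, penalty_cost=25.0)

print(f"EV of delaying the email: {email:+.2f}")  # positive: worth delaying
print(f"EV of delaying the bill:  {bill:+.2f}")   # negative: just pay it
```

The point of the sketch is just that the option value term and the expected-cost term come apart: the same delay habit is positive-EV for one class of tasks and negative-EV for another.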
Is it possible to change the time zone in which LW displays dates/times?
I have a (short) essay, 'Drug heuristics' in which I take a crack at combining Bostrom's evolutionary heuristics and nootropics - both topics I consider to be quite LW-germane but underdiscussed.
I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.
Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...
Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.
After Yudkowsky gives three different definitions of "the singularity" at the beginning, they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that they discuss whether simulated intelligence is actually intelligence. Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating intelligence" but is actually intelligent. Pigliucci, however, seemed to want to distinguish between those and say "well, it may then just be simulating intelligence, but maybe it is not actually having it." (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)
There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.
Pigliucci's point seemed to be that for the only intelligence we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. About consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we do not know whether, e.g., a computer simulating the brain on a different substrate would also be conscious. Yudkowsky seemed to think this very likely, while Pigliucci seemed to think it very unlikely. But what I lacked in that discussion is: what do we know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that for the only intelligence we know of so far (the human brain), intelligence and consciousness come together. But to me (not knowing much about this subject matter) that seems a weak argument for discussing them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about the connection (or lack of one) between intelligence and consciousness? To a naive non-expert like me, intelligence seems (rather) easy to test: just test how good the system is at solving general problems. Whereas to test whether anything has consciousness, I would guess a working theory of consciousness would have to be developed before a test could be designed.
This was the second recent BHTV dialogue where Pigliucci discussed singularity/transhumanism-related questions. The previous one I mentioned here. As mentioned there, it seems to have started with a blog post of Pigliucci's where he criticized transhumanism. I think it interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishing of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I am also pleased to have noticed that recently Pigliucci's blog has now and then linked to LessWrong/Eliezer Yudkowsky (mostly Julia Galef, if I remember correctly; too lazy to locate the exact links right now). I would very much like to see this continue (e.g. Yudkowsky discussing with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, or Victor Stenger, realizing of course that they are probably too busy for it to happen).
Previous BHTV dialogues with Eliezer Yudkowsky have been noticed here on LessWrong, but not this one (I hope it is not that I have just missed the post). Therefore I posted it here; I did not find a perfect place for it, and this was the least bad I noticed. Although my post is only partly about "Is Eliezer alive and well" (he surely looked so on BHTV), I hope it is not considered too much off-topic.
I found this diavlog entertaining, but not particularly enlightening - the two of them seemed to mostly just be talking past each other. Pigliucci kept on conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness, but not intelligence, and Eliezer would respond by explaining why that doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been more strict about forcing him to explicitly differentiate between intelligence/consciousness. Frustrating.... but worth watching regardless.
Note that I'm not saying I agree with Pigliucci's photosynthesis analogy, even when applied to consciousness, just that it seems at least to be coherent in that context, unlike in the context of intelligence, where it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident in asserting that it isn't, just because I don't really know what consciousness is, so it seems more arrogant to make any definitive pronouncement about it.
That diavlog was a total shocker!
Pigliucci is not a nobody: he is a university professor, authored several books, holds 3 PhD's.
Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially when it comes to hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(
I'm going to have to remember to use the word cishumanism more often.
Welcome back.
SIAI may have built an automaton to keep donors from panicking
You can tell he's alive and well because he's posted several chapters in his Harry Potter fanfiction in that time; his author's notes lead me to believe that, as he stated long ago, he's letting LW drift so he has time to write his book.
Anyway, he can't be hurt; "Somebody would have noticed."
He's writing his book.
Geocities Less Wrong
Recycling an email I wrote in a Existential Risk Reduction Career Network discussion. The topic looked at various career options, specifically with an eye towards accumulating wealth - the two major fields recognized being finance and software development.
Frank Adamek enquired as to my (flippant) vanilla latte comments, which revealed a personal blind-spot. Namely, that my default assumption for people with an interest in accumulating wealth is that they're motivated by an interest in improving the quality of their own life (e.g., expensive gadgets, etc.).
I should know -- especially in X-Risk Network context -- that wealth accumulation is not necessarily predominantly selfish, and that instead wealth can be an effective multiplier to benefit positive futures. Thanks for mentioning this Frank.
The motivation for copying this email here is two-fold.
One, what else can further rational critique of my own rants teach me?
Two, I've lurked in this community for a long time, but can't muster the gusto to contribute. The quality bar for top-level posts is well beyond my thinking and writing skills. Which is great, because it means I get to learn and grow. But there's a flip side, which is the gap between Less Wrong discourse and that of my day-to-day interaction with friends, family, and coworkers. I don't have a solution to this, but perhaps an increase in open-thread comment mediocrity helps close the gap.
Ugh, probably not. Alas, here goes - posted as a reply to myself, because of comment-length limits.
In a thread called Acturial vs. Software Engineering - what pays best?, somebody wrote:
My response...
I encourage most people to pursue a math or science degree rather than comp.sci., even if their long-term goals are in the field of software engineering. My opinion is based on personal hindsight (having majored in computer science, I often wish my ability to absorb and apply fundamental math or hard physics were stronger) and on eleven years of industry experience (where I've noticed an inverse correlation between the amount of formal comp.sci. training a person has had and his or her strength as a software engineer).
In regard to my personal hindsight: it could well be that had I studied math or physics, I'd feel my comp.sci. expertise needed brushing up. That's probably true to some extent, but there's another factor; namely, that many comp.sci. programs are a less-than-ideal blend of theoretical math (better obtained through a dedicated program[1]) and practical engineering (most definitely useful[2], but because of its nature easily accessible in your spare time). That last point is critical; anybody who can afford a university education has access to a computer and a compiler. So why not tinker at home - you're passionate, right? Compare with programs like mechanical engineering, chemistry, and most hard physics programs - you probably don't have access to a particle accelerator or DNA extraction lab at home.
Not yet anyway... :-)
That brings me to my observation from industry experience, namely that the best programmers I've worked with often hadn't majored in comp.sci. The point of course is not that a comp.sci. education makes for worse programmers. Rather, that people with the audacity and discipline to pursue hard physics or math who also have a passion for programming have a leg up on those who are only passionate about programming.
I'm sure there's the occasional failed particle physicist applying for a hundred programming gigs without success, but that person would've been just as unskilled as a programmer had he or she majored in comp.sci.
Having shared my view on comp.sci. education, I do wish to throw in a recommendation for pursuing a career in software development (beyond the years of formal education). Specifically in contrast to one alternative discussed earlier in this thread, namely a career in finance.
Full disclaimer: my perspective on "jobs that involve working with money" stems mostly from how the mainstream portrays them and is likely to be extremely naive. Despite what I'm about to say, I actually have a great deal of respect for money-savvy people. Considering my personal financial situation is a constant source of akrasia, I'm often envious of people who are able to wield money itself as a tool to generate more of it.
I'm realistic enough to admit that income potential is a valid factor in deciding what kind of career to pursue - like most of us, I enjoy food, shelter, and expensive gadgets. Meanwhile, I also believe nobody treats money as the only factor in choosing a career - we would all rather work in fields we're passionate about.
So really, we have a realistic assessment of various career options - all of which promise at least a decent living. Even granting the comments made earlier, that programming is prole and finance has a higher likelihood of fast-tracking prestige (and as a programmer, I must admit there's some truth to this sentiment), my gut says that your passion and interest far outweigh these observations. I mean, we're not talking about whether you'll become a high-school janitor versus United States president. If you like money and have a knack for using it for growth and your benefit, go to Wall Street. If you like computers and have a knack for using them for innovation, go to Silicon Valley. In both cases you'll be able to afford a grande sugar-free vanilla low-fat soy latte every morning - if that's your cup of tea.
Now all of this is fairly generic advice, nothing you weren't told already by your parents. My reason for chiming in on this discussion has (obviously) to do with how the above is affected by accelerating change. That's something most parents or advisors haven't really clued into yet, and I felt it worth pointing out.
The question is, assuming the kind of consequences from accelerating change that are commonly accepted in singularity circles; what type of careers promise the most leverage in the future? In other words, what skill set guarantees you can maintain or expand the amount of control you have over the reality that surrounds and affects you?
Presumably there won't be much contention over why leverage is an important metric. Now imagine the world one, two, or three decades from now, and ask yourself: what can I offer that is of value? Value comes in many forms; we can roughly categorize these as money, ideas (and secrets), goods, and labor (and skill). Of these, money and ideas are the ones with the most long-term potential. The value of manual labor will disappear rapidly, even skilled labor (biological enhancement notwithstanding). The value of goods will diminish as life moves from its reliance on matter to information, and as our ability to transform and distribute matter improves. The value of secrets is likely to exist for eternity, but those who consider this a worthy pursuit should read Snow Crash, not this email.
It's my belief the only types of leverage with future potential are money and ideas, some conditions apply.
In the case of money, the assumption is that there'll exist a legal system to assure the continuous promise of value in tender. Considering the alternative is impractical barter - or worse - all-out chaos, I believe money will stick around for a long time. In the case of ideas, the assumption is that you can turn them into reality. An idea stuck in your head is useless, so you'll need money, skill, or both to make things happen.
But wait, didn't I just say that skilled labor is a dead-end path? Yes, when speaking of the mechanical kind (i.e., the things you can do by moving your limbs around, such as playing the piano). But when it comes to ideas (and the direction our society is heading) - the kind of skill I'm referring to is of the information-theoretic kind. Future creativity will occur primarily in a universe of bits and bytes, and the more adept you are at wielding these bits and bytes, the more leverage your ideas will have.
There is one more assumption in this, namely that creative information-based skill is of a different nature than biological mechanical skill. It may be that strong AI will leapfrog well past our human ability to merge and enhance, in which case both creative skill and mechanical skill will be displaced. If that's the case, I don't expect money will be much value to humans very long either, and we'll be on a short-lived dead end path.
I'm hoping for a more optimistic future, where intellectual enhancement permits us to remain competitively creative.
So unless you have money, and use it to make more money (e.g., pursue a financial career - a valid option), I recommend people become creative experts in a digital universe. That is, study theoretical computer science (through formal math education, in your spare time, or through a career), familiarize yourself breadth-first with the entire hardware and software stack that makes up the digital universe (from primitive electronics to silicon to computer architectures to machine language to assembly to compilers to higher-level languages to creative tools for both art and process improvement), and pick two or three comp.sci. specialties in which you become a depth-first expert. Ideally, you do this alongside a grounding in a hard physical science, to keep you in touch with the universe you're currently embedded in (it'll be around for a while to come).
That's what you'll need to escape from the consumer end of information and become a creative source of information - which in turn is your future leverage and source of income. Those with the ability to command, influence, and transform the growing stream of bits and bytes will have the most value to offer (and be able to afford two sugar-free vanilla soy lattes).
On a bit of a tangential note, this is why I advocate the introduction of a mandatory comp.sci. component from kindergarten all the way up to university - on par with traditional components like math or phys-ed. To verbalize this as: "...our society relies increasingly on computers" is to state the obvious, and the point is not that everybody should become a software developer. The critical point is to raise a generation that understands the notion of algorithmic computation well enough to believe they can (in principle) be in control of a computing device, rather than it controlling them. Computers are not magic, and one day present-day humans won't be either.
Then again, even basic schooling in math and physics fails to teach many people they can (in principle) be in control of their own life. But alas, I digress - lest this become political... :-)
Long post, little value - time to return to my computer and become a better programmer. Gotta make a living...
Two cents,
Jaap Suter - http://jaapsuter.com
[1] To be clear, I love the fundamentals of computer science. It's a great passion of mine. But I believe its place in education is by and large a sub-field of math. I suspect that'll change over time, but I'm not yet sure in which direction (math absorbing computer science, or theoretical computer science growing enough meat to justify recognition as being a field on its own.)
[2] With the additional remark that the fundamental habits of good engineering are timeless and emerge from developing your expertise in the humanities (both in your ability to interact and cooperate with other people to achieve your goals, and in the study of interactions between man, his environment, and the fruits of his labor). The tools we use along the way are fleeting - software and hardware are commonly outdated by the time you've become an expert - better to recognize the underlying patterns.
By the way: getting crashes on the comments page again. Prior to 1yp8 works and subsequent to 1yp8 works; I haven't found the thread with the broken comment.
Edit: It's not any of the posts after 23andme genome analysis - $99 today only in Recent Posts, I believe.
Edit 2: Recent Comments still broken for me, but ?before=t1_1yp8 is no longer showing the most recent comments to me - ?before=t1_1yqo continues where the other is leaving off.
Edit 3: Recent Comments has now recovered for me.
I'm going to be giving a lecture soon on rationality. I'm probably going to focus on human cognitive bias. Any thoughts on what I should absolutely not miss including?
I recall "Knowing About Biases Can Hurt People":
Thanks; that sort of thing is exactly why I asked.
Continuing discussion with Daniel Varga:
It's difficult to discuss the behavioral dispositions of these imagined cosmic civilizations guided by a single utility function, without making a lot of assumptions about cosmology, physics, and their cosmic abundance. For example, the accelerating expansion of the universe implies that the universe will eventually separate into gravitationally bound systems (galactic superclusters, say) which will be causally isolated from each other; everything else will move beyond the cosmological horizon. The strategic behavior of such a civilization may be very different if it expects a rival or it expects no rivals to have formed independently in its supercluster. It's the difference between expecting a future of unimpeded expansion and expecting to have to negotiate or fight.
Regarding my multipronged flame attack on your supposed intellectual sins :-) ... OK, you're not a platonist. "Tegmark's multiverse" is just an inventory of possible formal structures for you, and you want to know which ones could describe a universe that contains time. That's still a hard question. ata says, quite reasonably, that there had better be some form of sequential structure, so you can have temporal succession and temporal dynamics. But we also regard relativity as providing a model of time, only then you don't have a universal time. Technically, you don't have a total order on the set of all events, only a partial order. So will we say that any partially ordered set can provide a model of time? Then I wonder about generalizations of relativity in which you have more than one timelike direction. Is that a formal generalization which exceeds the possibility of an interpretation in terms of time? I think that phenomenon - formal overgeneralization - exists and is hardly talked about, again because of our dereliction of integrated ontology in favor of our combination of rigorous formalism and fuzzy thinking about how the formalism relates to reality. You can see this in logic, I believe. Classical logic is formalized, and then the formal system is generalized, and the new formalism is treated as if it describes "a logic", but one may reasonably ask if it has become simply a set of rules of symbolic manipulation that no longer corresponds to any valid form of reasoning. I would not want to say that non-Euclidean geometry is not a geometry, so some forms of formal generalization will retain a connection to their alleged meaning, but the whole issue is hardly addressed.
As for "emergent time". You say you agree with Barbour, but you think time is real. Well, I don't know what you mean by time then. To me, time is about change. Becoming, not just being. Things aren't just sitting there inertly in static eternity; change is real. And I do not at all see how change can be "emergent". There may be parts of reality that don't change, and parts of reality that do, and maybe there's a definable boundary. But talking of emergent time makes it sound like you're trying to have it both ways at once: you don't have time and you do have time. You have a universe without change, and yet if you look at it differently it is changing.
I don't buy that. Either change is real or it isn't. You may have a static description of a changing reality. That is, it may be possible to talk about the set of all physical events in the history of the universe, and say things about their relationships and the patterns they form, without referring to time or change; but that doesn't mean you can start by postulating that reality consists of a set of physical conditions in timeless stasis, and then somehow get time back by magic. It's a matter of interpretation of the formalism. Either it refers to time or it doesn't.
Feel free to refute me by explaining what the emergence of time could possibly mean (even better, the emergence of time from memory).
Time is just a dimension in which there is determinism, so that the slice of the universe at position t is a function of the slice at position t-1.
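That one-line definition can be illustrated with a toy discrete "universe" in which one dimension is time in exactly this sense: each slice is a deterministic function of the slice before it. The choice of update rule here (elementary cellular automaton Rule 110) and the grid size are arbitrary:

```python
# A toy "universe" where each slice at t is a deterministic function of the
# slice at t-1: elementary cellular automaton Rule 110 (an arbitrary choice).
RULE = 110

def step(slice_t):
    """Slice at t+1 as a function of the slice at t (periodic boundary)."""
    n = len(slice_t)
    return [
        (RULE >> (slice_t[(i - 1) % n] * 4
                  + slice_t[i] * 2
                  + slice_t[(i + 1) % n])) & 1
        for i in range(n)
    ]

universe = [[0] * 31]
universe[0][15] = 1                      # a single live cell
for _ in range(15):
    universe.append(step(universe[-1]))

for row in universe[:5]:                 # a glimpse of the history
    print("".join(".#"[c] for c in row))
```

Here the "time" axis is distinguished by construction; nothing analogous lets you compute column i of the history from column i-1, which is the asymmetry the definition above relies on.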
Uh, wow. In our universe, that makes all dimensions time.
No. The state of affairs along the slice of space passing through Earth's equator, for example, does not uniquely determine the state of affairs at 1° north latitude. But the state of affairs now, does determine the state of affairs one second in the future. (Relativistic motion can tilt the axes somewhat, but not enough to interchange space and time.)
All our physical models are described by local partial differential equations. Given the data on an (n-1) dimensional slice (including derivatives, of course), we can propagate that to cover the whole space. (there are complications once GR is in the picture making the notion of global slices questionable, but the same result holds "locally".)
If the data at the slice doesn't include derivatives, you can't propagate in time either.
In that generality, this is false. Not all differential equations are causal in all directions. I doubt that it's true of most physical examples. In particular, everyone I've ever heard talk about reconstruction in GR mentioned space-like hypersurfaces.
UPDATE: Actually, it's true. Until I redefine causal.
I don't doubt that pathological examples exist. I don't suppose you have any handy? I really would be interested. I do doubt that physical examples happen (except perhaps along null vectors).
The prototypical 4-d wave equation is f_tt - f_xx - f_yy - f_zz = 0. I don't see how rearranging that to f_tt = f_xx + f_yy + f_zz provides any more predictive power in the t direction than f_xx = f_tt - f_yy - f_zz provides in the x direction. (There are numerical stability issues, it's true.)
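The symmetry being claimed can be checked numerically in the 1+1-dimensional case. This is only a sketch: the grid size, the Gaussian initial data, and the dt = dx choice are all arbitrary, and it sidesteps the numerical-stability issues acknowledged above by working at exactly dt = dx.

```python
import math

# 1+1-dimensional wave equation f_tt = f_xx, discretized with the standard
# second-order stencil at dt = dx:
#     f[t+1][x] + f[t-1][x] = f[t][x+1] + f[t][x-1]
# The same stencil can be solved for the t-neighbor (march in time) or the
# x-neighbor (march in space).
N = 40
f = [[0.0] * N for _ in range(N)]  # f[t][x]
for x in range(N):                 # Gaussian initial profile, zero velocity
    f[0][x] = math.exp(-0.5 * ((x - N // 2) / 3.0) ** 2)
    f[1][x] = f[0][x]

# March forward in t.
for t in range(1, N - 1):
    for x in range(1, N - 1):
        f[t + 1][x] = f[t][x + 1] + f[t][x - 1] - f[t - 1][x]

# March sideways in x: given two columns of "initial data" and the two
# boundary rows in t, the rearranged stencil reproduces the same solution.
g = [[0.0] * N for _ in range(N)]
for t in range(N):
    g[t][0], g[t][1] = f[t][0], f[t][1]          # initial data in x
for x in range(N):
    g[0][x], g[N - 1][x] = f[0][x], f[N - 1][x]  # boundary data in t
for x in range(1, N - 1):
    for t in range(1, N - 1):
        g[t][x + 1] = g[t + 1][x] + g[t - 1][x] - g[t][x - 1]

err = max(abs(f[t][x] - g[t][x]) for t in range(N) for x in range(N))
print("max |f - g| =", err)  # roundoff only
```

The x-march recovers the t-marched solution to within floating-point roundoff, which is the sense in which neither rearrangement of the wave equation is privileged.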
Well, that's partially an artifact of that being the sort of question we tend to be interested in: given this starting condition (e.g. two orbiting black holes), what happens? But this is only a partial answer. In GR we can only extend in space so long as we know the mass densities at those locations as well. Extending QFT solutions should do that. The problem is that we don't know how to combine QFT and GR, so we use classical mechanics, which is indeed only causal in the time direction. But for source-free (no mass density) solutions to GR, we really can extend a 2+1 dimensional slice in the remaining spatial direction.
How does that square with e.g. the fact that the gravity of a spherically symmetric object from the outside is the same as that of the same mass compressed into a point at the same center of gravity?
The short answer is that there's more there besides the gravitational field[1] (in the approximation that we can think of it as that). There are the various elementary particle fields. These will have their own values[2] and derivatives, which are part of a giant system of PDEs intertwining them. Two different spherically symmetric objects with the same gravitational field will have different particle fields.
For this entire discussion, I'm missing the part where the theories/models being discussed lead us to anticipate certain experiences.
I am not sure this is a reasonable request. It is impossible to talk about experiences (not to mention anticipation :) ) without accessing higher levels of some reductionist hierarchy. I am interested in the emergence of the thermodynamic arrow of time from more basic notions. I leave to other reductionists the task of reducing the notion of experience to the more basic notion of information processing in a space-time continuum. People like Dennett and Minsky have had spectacular successes at that task.
Would it be reasonable to request a LW open thread digest to accompany these posts? A simple bullet list of most of the topics covered would be nice.
I have a request. My training is in science & engineering, but I am totally ignorant of basic economics. I have come to see this as a huge blind spot. I feel my views on social issues are fairly well-reasoned, but when it comes to anything fiscal, it's all very touchy-feely at present.
Can anyone recommend intro material on economics (books, tutorials)? I ask on LW because I have no idea where to start and who to trust. If you offer a recommendation of a book pushing some particular economic "school of thought," that's fine, but I'd like to know what that school is.
Thanks!
The book I used in my college Econ 101 class was this one.
Economics in One Lesson by Henry Hazlitt is a good slim introduction to the economic mindset. For a different approach focused on the application of economic thinking to everyday life, The Logic of Life by Tim Harford is worth a look. Neither book covers much of the math of economics, but I think that is a good thing, since the math-heavy parts of economics are the least useful and relevant.
ETA: Economics In One Lesson is a heavily free market / free trade 'classical' economic slant.
MIT OpenCourseWare has a lot of material. I also like Bryan Caplan's lecture notes (these sometimes have a libertarian slant).
I recalled the strangest thing an AI could tell you thread, and I came up with another one in a dream. Tell me how plausible you think this one is:
Claim: "Many intelligent mammals (e.g. dogs, cats, elephants, cetaceans, and apes) act just as intelligently as feral humans, and would be capable of human-level intelligence with the right enculturation."
That is, if we did to pet mammals something analogous to what we do to feral humans when discovered, we could assimilate them; their deficiencies are the result of a) not knowing what assimilation regimen is necessary for pets/zoo mammals; and b) mammals in the wild being currently at a lower level of cultural development, but which humans at one time passed through.
Thoughts?
I don't know that we've ever successfully assimilated a feral human either.
Some people have curious ideas about what LW is; from http://www.fanfiction.net/r/5782108/18/1/ :
I'm not sure I even know how to parse "wikipedia blog on rationality". But at least in some sense, we apparently are Wikipedia. Congrats.
Ask A Rationalist--choosing a cryonics provider:
I'm sold on the concept. We live in a world beyond the reach of god; if I want to experience anything beyond my allotted threescore and ten, I need a friendly singularity before my metabolic processes cease; or information-theoretic preservation from that cessation onward.
But when one gets down to brass tacks, the situation becomes murkier. Alcor whole-body suspension is nowhere near as cheap as the numbers that get thrown around in discussions on cryonics--if you want to be prepared for senescence as well as accidents, a 20-year payoff on whole life insurance plus Alcor dues runs near $200/month; painful but not impossible for me.
The other primary option, Cryonics Institute, is 1/5th the price; but the future availability--even at additional cost--of timely suspension is called into question by their own site.
Alcor shares case reports, but no numbers for average time between death and deep freeze, which seems to stymie any easy comparison on effectiveness. I have little experience reading balance sheets, but both companies seem reasonably stable. What's a prospective immortal on a budget to do?
Why not save some money and lose what's below the neck?
So, I'm somewhat new to this whole rationality/Bayesianism/(nice label that would describe what we do here on LessWrong). Are there any podcasts or good audiobooks that you'd recommend on the subjects of LessWrong? I have a large amount of time at work that I can listen to audio, but I'm not able to read during this time. Does anyone have any suggestions for essential listening/reading on subjects similar to the ones covered here?
I know you said you don't have a ton of time to read but Gary Drescher's Good and Real has been called Less Wrong in book form on occasion. If nothing else, I found it an enjoyable read that gives a good start to getting into the mindset people have in this community.
I recently heard a physics lecture claim that the luminiferous aether didn't really get kicked out of physics. We still have a mathematical structure, which we just call "the vacuum", through which electromagnetic waves propagate. So all we ever did was kill the aether's velocity-structure, right?
That reminds me of this discussion.
Of course if you define "luminiferous aether" as generally as "whatever mechanism results in the propagation of electromagnetic waves", then it exists, because electromagnetic waves do propagate. But when it was under serious scientific consideration, the luminiferous aether theory made testable predictions, and they failed. Just saying "they're different concepts" is easier than saying "it's the same basic concept except it has a different name and the structure of the theory is totally different".
I could sympathize with trying to revive the name "luminiferous aether" (or even better, "luminiferous æther"), though. It's a pretty awesome name. (I go by "Luminiferous Æther Bunny" on a few other forums.)
Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people supporting cryonics is why do they assume that the future will be a good place to live in? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?
A dystopian society is unlikely to thaw out and revive people in cryostasis. Cryostasis revival makes sense for societies that are benevolent and have a lot of free resources. Also, be careful not to generalize from fictional examples. They are not evidence. That's all the more the case here because science fiction is in general a highly reactionary genre that, even as it uses advanced technology, either warns about its perils or uses it as an excuse to hearken back to a more romantic era. For example, look how many science fiction stories and universes have feudal systems of government.
This is a little too broad for me to be comfortable with. There are certainly subgenres and authors who are reactionary, but then there are those that are quite the opposite. Military SF and space opera (which, frankly, is just fantasy with lasers) are usually quite reactionary. Cyberpunk is cautionary, but not so much about technology as about capitalism. Post-apocalyptic SF is sometimes about technology getting too great for us to handle, but the jewel of the genre, A Canticle for Leibowitz, is about the tragedy of a nationwide book burning. Post-cyberpunk is characterized by its relative optimism. Hard SF varies in its political sensibilities (there seem to be a lot of libertarians), but it's almost always pro-tech for obvious reasons.
I'm having a hard time coming up with authors that fit the reactionary bill, but that might be because I read the wrong subgenres. And the libertarians are hard to classify. Michael Crichton is the obvious one that occurs to me. Larry Niven, I suppose. Card and Heinlein could be put there, though both are more complicated than that. Herbert. In the other camp: Brin, Kim Stanley Robinson, Le Guin, Dick, Neal Stephenson, Gibson, Vonnegut, Orwell, Doctorow, Bradbury. Asimov and Clarke probably fall in the second camp...
Am I just missing the reactionary stuff?
I think it would be fair to say that the more famous authors in general are less reactionary. But if I had to list reactionaries I'd list Herbert, Crichton, Pournelle, Weber, Anderson, McCaffrey (arguable, but there are definite aspects in Pern), Koontz, Shelley, Lovecraft, and to some extent Niven and Card.
Also, there seems to be a lot more of a general reactionary bent in the less successful scifi. The major authors seem to have less of that (possibly because their views are so unique that they override anything as simple as being reactionary or not).
The example you give of A Canticle for Leibowitz is more complicated: while book burning and such is portrayed as bad, it's still a response to a nuclear apocalypse. Indeed, in that regard, almost any post-nuclear-war science fiction has a reactionary aspect.
If we move outside literature directly, say into movies and TV, the general pattern is pretty clear. While people often think of Star Trek as optimistic about technology, even in TOS many episodes dealt with the threat of new technologies (androids and intelligent computers both came up). The Outer Limits, in both its original form and its reincarnation, was generally anti-technology. It was a safe bet in any episode of the reincarnation that any new knowledge or new technology would fail or cause horribly disturbing side effects, summarized with a moralistic voice-over at the end that would make Leon Kass proud. Similarly, Doctor Who has had multiple incarnations of the Doctor lecture about how bad trying to be immortal is. Movies have a similar track record (The Terminator, Hollow Man, and The Sixth Day, to name just a few; many more examples could be given).
I agree that overall this was likely a hasty generalization. Science fiction has reactionary elements but it is by no means an intrinsically reactionary genre.
Shelley and Lovecraft are good calls; I had forgotten to think about the early stuff. We can put Verne in the progressive camp, I think.
There is sort of an interesting division among the "cautionary tales". There's the Crichton/Shelley/Romero-zombie tradition of humans trying to play God and getting their asses kicked as punishment, unless traditional values/folkways come to the rescue. And then there's the more leftist tradition: new technology has implications capitalism or statism isn't equipped to deal with; here we include H.G. Wells, Brave New World and other dystopias, cyberpunk, Gattaca, a lot of post-nuke-war stuff, etc.
Are both groups reactionary under your definition or just the first?
I totally agree about Hollywood. There is also the whole alien-invasion subgenre, which originally was really about Cold War anxiety. Cloverfield probably counts as a modern-day equivalent.
The original The War of the Worlds by H.G. Wells has many similarities to the era's "invasion stories" in which a hostile foreign power (usually Germany or France) launches a very successful surprise invasion of Great Britain. Wells just replaced Germany with Martians.
For anyone who hasn't already seen it — Caveman Science Fiction!
How do you classify Egan? Pretty pro-tech in his novels, iirc, but a pretty high proportion of his short stories are effectively horror about new tech.
There certainly is a large chunk of science fiction that could be accurately described as medieval fantasy moved to a superficially futuristic setting.
There is also the legitimate question of how fragile our liberal norms and economy are -- do they depend on population density? on the ratio between the reach of weapons and the reach of communications? on the dominance of a particular set of subcultures that attained to industrial hegemony through what amounts to chance and might not be repeated?
If egalitarianism is not robust to changes in the sociological environment, then there might simply be many more possible futures with feudal regimes than with capitalist or democratic regimes.
Yes, but how often do they bother to explain this rise other than in some very vague way? And it isn't just feudalism. Look for example at Dune where not only is there a feudal system but the technology conveniently makes sword fighting once again a reasonable melee tactic. Additional evidence for the romantic nature is that almost invariably the stories are about people who happen to be nobles. So there's less thinking and focusing on how unpleasant feudalism is for the lower classes.
The only individual I've ever seen give a plausible set of explanations for the presence of feudal cultures is Bujold in her Vorkosigan books. But it is important to note that there are many different governmental systems there, including dictatorships and anarcho-capitalist worlds and lots of other things. And she's very aware that feudalism absolutely sucks for the serfs.
I don't think that most of these writers are arriving at their societies by probabilistic extrapolation. Rather, they are just writing what they want their societies to have. (Incidentally, I suspect that many of these cultural and political norms are much more fragile than we like to think. There are likely large swaths of the space of political systems that we haven't even thought about. There might well be very stable systems that we haven't conceived of yet. Or there might be Markov chains of what systems are likely to transfer to other systems).
Those aren't the only possibilities - much more likely is the Rule of Cool. Wielding a sword is cooler than wielding a gun, and swordfights are more interesting than gunfights.
Granted. Some are, though. Two more counter-examples, besides Bujold:
Asimov's Foundation, e.g. the planet of Anacreon. Feudalism is portrayed as the result of a security dilemma and the stagnation of science, as reducing the access of ordinary people to effective medicine and nuclear power, and as producing a variety of sham nobles who deserve mockery.
Brave New World. Feudalism is portrayed as a logical outgrowth of an endless drive toward bureaucratic/administrative efficiency in a world where personal freedom has been subordinated to personal pleasure. Regionally-based bureaucrat-lords with concentrically overlapping territories 'earn' their authority not by protecting ordinary serfs from the danger of death but from the danger of momentary boredom or discomfort. Huxley doesn't seem overly fond of this feudalism; the question of whether a romantic would prefer this sort of system is, at worst, left as an exercise for the reader.
Huh. I had not really thought Brave New World as using a feudal system but that really is what it is. It might be more accurate to then make the point that the vast majority of the other cases have systems that aren't just feudal but are ones in which the positions are inherited.
I agree that some of these writers are extrapolating. Since Asimov is explicitly writing in a world where the running theme is the ability to reliably predict social changes it shouldn't be that surprising that he'd actually try to do so. (Note also that Asimov also avoids here the standard trap of having protagonists who are nobles).
Now that's a reasonable argument: benevolent, resource rich societies are more likely to thaw people. Thanks.
And yes, that's true, science fiction does often look at what could go really wrong.
Science fiction has a bias towards things going wrong.
In the particular case of cryonics, if there's a dystopian future where the majority of people have few or no rights, it's a disaster all around, but as ata says, you can presumably commit suicide. There's a chance that even that will be unfeasible-- for example if brains are used, while conscious, for their processing power. This doesn't seem likely, but I don't know how to evaluate it in detail.
The other case-- people in general have rights, but thawed people, or thawed people from before a certain point in time, do not-- requires that thawed people do not have a constituency. This doesn't seem terribly likely, though as I recall, Niven has it that it takes a very long time for thawing to be developed.
Normally, I would expect for there to be commercial and legal pressures for thawed people to be treated decently. (I've never seen an sf story in which thawed people are a political football, but it's an interesting premise.)
I think the trend is towards better futures (including richer, with less reason to enslave people), but there's no guarantee. I think it's much more likely that frozen people won't be revived than that they'll be revived into a bad situation.
All fiction has a bias towards things going wrong. Need some kind of conflict.
(Reality also has a bias towards things going wrong, but if Fun Theory is correct, then unlike with fiction, we can change that condition without reducing the demand for reality.)
Science fiction has a stronger bias towards things going wrong on a grand scale than most fiction does.
Otherwise, the advanced technology would just make everything great. They need extra conflict to cancel that out.
Can't speak for any other cryonics advocates, but I find that to be likely. I see AI either destroying or saving the world once it's invented, if we haven't destroyed ourselves some other way first, and one of those could easily happen before the world has a chance to turn dystopian. But in any case, if you wake up and find yourself in a world that you couldn't possibly bear to live in, you can just kill yourself and be no worse off than if you hadn't tried cryonics in the first place.
Here's my question to everyone:
What do you think are the benefits of reading fiction (all kinds, not just science fiction) apart from the entertainment value? Whatever you're learning about the real world from fiction, wouldn't it be more effective to read a textbook instead or something? Is fiction mostly about entertainment rather than learning and improvement? Any thoughts?
We are wired for individual rather than general insights. Stories are much more effective at communicating certain things than treatises are. I would never have believed, in theory, that a man who enjoyed killing could be worthy of respect; only a story could convince me. To use Robin Hanson's terminology, narrative can bring near mode and far mode together.
Why not true stories? I think there you get into Aristotle and why verisimilitude can be more effective than mere reality. True stories are good too, but life is disorderly and not necessarily narrative. It's a truism of writing workshops and creative writing classes that whenever you see a particularly unrealistic event in a story, the author will protest "But that really happened!" It doesn't matter; it's still unrealistic. Narrative is, I think, a particular kind of brain function that humans are good at, and it's a painting, not a photograph. To tap into our ability to understand each other through narrative, we usually need to fictionalize the world, apply some masks and filters.
It was not until I read Three Worlds Collide that I began to embrace moral consequentialism. I would not have found an essay or real-life case study nearly as convincing.
ETA: I didn't change my mind just because I liked the story. The story made me realize that in a particular situation, I would be a moral consequentialist.
My take on works of fiction, especially written fiction, is that they're thought experiments for your emotional intelligence. The best ones are the ones written for that purpose, since I think they tend to better optimize the net value of entertainment and personal growth.
Morality in particular usually stems from some sort of emotional intelligence, like empathy, so it makes sense to me that written fiction could help especially with that.
A possible benefit of fiction is that it leads you to experience emotions vicariously that it would be much more expensive to experience for real, yet the vicarious experience is realistic enough that it serves as useful practice, a way of "taming" the emotions. Textbooks don't convey emotions.
I seem to recall this argument from a review of Cloverfield, or possibly the director's commentary. Broadcast images such as from the 9/11 aftermath generated lots of anxiety, and seeing similar images - the amateurish, jerky camcorder type - reframed in a fictional setting which is "obviously" over the top helps you, the audience, come to terms with the reality.
Fiction is good for teasing out possibilities and counterfactuals, experimenting with different attitudes toward the world (as opposed to learning facts about the world), and learning to be cool.
On the other hand (and I speak as a person who really likes fiction), it's possible that you learn more about the human range by reading letters and diaries-- whatever is true in fiction may be distorted to make good stories.
Is anyone else here disturbed by the recent Harvard incident, in which Stephanie Grace's perfectly reasonable email, which merely expresses agnosticism about the possibility that the well-documented IQ differences between groups are partially genetic, was deemed worthy of harsh and inaccurate condemnation by the Harvard Law School dean?
I feel sorry for the girl, since she trusted the wrong people (the email was allegedly leaked by one of her girlfriends, who got into a dispute with her over a man). We need to be extra careful to self-censor any rationalist discussions about cows "everyone" agrees are holy. These are things I don't feel comfortable even discussing here, since they have ruined many careers and lives through relentless persecution. Even recanting doesn't help at the end of the day, since you are a Google search away, and people who may not even understand the argument will hate you intensely. Scary.
I mean, surely everyone here agrees that the only way to discover truth is to let all the hypotheses stand on their own, without giving a privileged few the power to suppress the competition. Why is our society so insane that this regularly happens, even concerning views that many relevant academics hold in private (or that even the majority hold in certain fields, if the polling is anonymous)?
PS: Also, why does the Dean equate intelligence with genetic superiority, and implicitly even with worth as a person? This is a disturbing view, since half of everyone will by definition always be below average. And we're all going to be terribly stupid compared to AIs in the near future; such implicit values are dangerous in the context of the time we may be living in.
See Michael Vassar's discussion of this phenomenon. Also, I think that people discussing statements they see as dangerous often implicitly (and unconsciously) adopt the frames that make those statements dangerous, which they (correctly) believe many people unreflectively hold and can't easily be talked out of, and treat those frames as simple reality, in order to more simply and credibly call the statement and the person who made it dangerous and Bad.
I think there's something to be said for not posting opinions such that 1) LW is likely to agree with the opinion, and 2) sites perceived as agreeing with the opinion are likely to be the target of hate campaigns.
Perhaps there should be a "secret underground members only" section where we can discuss these things?
Logic would suggest that such a section would be secret, if it existed. It would be simple enough to send private messages to trusted members alerting them to the existence of a private invitation-only forum on another website where such discussions could be held.
Naturally, I would say none of this if I knew of such a forum, or had any intention of creating such. And I would not appreciate any messages informing me of the existence of such a forum - if for no other reason than that I am the worst keeper of secrets I have ever known.
The first rule of rationality club is: you do not talk about rationality club.
There could still be a lower level of 'secrecy' where it won't show up on Google and you can't actually read it unless you have the minimum karma, but its existence is acknowledged.
It's not where you'd plan to take over the world, but I'd hope it'd be sufficient for talking about race/intelligence issues.
This is the best exposition I have seen so far of why I believe strongly that you are very wrong.
Please read the whole thing and remember that this is where the road inevitably leads.
I'm sympathetic to this as a general principle, but it's not clear to me that LW doesn't have specific battles to fight that are more important than the general principle.
Yes, self-censorship is Prisoner's Dilemma defection, but unilaterally cooperating has costs (in terms of LW's nominal purpose) which may outweigh that (and which may in turn be outweighed by considerations having nothing to do with this particular PD).
Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".
I share your concern. Literal hate campaigns seem unlikely to me, but such opinions probably do repulse some people, and make it considerably easier for us to lose credibility in some circles, that we might (or might not) care about. On the other hand, we pretty strongly want rationalists to be able to discuss, and if necessary slay, sacred cows, for which purpose leading by example might be really valuable.
I'm a bit upset.
In my world, that's dinner-table conversation. If it's wrong, you argue with it. If it upsets you, you are more praiseworthy the more you control your anger. If your anti-racism is so fragile that it'll crumble if you don't shut students up -- if you think that is the best use of your efforts to help people, or to help the cause of equality -- then something has gone a little screwy in your mind.
The idea that students -- students! -- are at risk if they write about ideas in emails is damn frightening to me. I spent my childhood in a university town. This means that political correctness -- that is, not being rude on the basis of race or ethnicity -- is as deep in my bones as "please" and "thank you." I generally think it's a good thing to treat everyone with respect. But the other thing I got from my "university values" is that freedom to look for the truth is sacrosanct. And if it's tempting to shut someone up, take a few deep cleansing breaths and remember your Voltaire.
My own beef with those studies is that you cannot (to my knowledge) isolate the genetics of race from the experience of race. Every single black subject whose IQ is tested has also lived his whole life as black. And we have a history and culture that makes race matter. You can control for income and education level, because there are a variety of incomes and education levels among all races. You can control for home environment with adoption and twin studies, I guess. But you can't control for what it's like to live as a black person in a society where race matters, because all black people do. So I can't see how such a study can really ever isolate genetics alone. (But correct me if I'm missing something.)
Since a mixed racial background should make a difference in genes but makes only a small difference in the way our culture treats a person, if the IQ gap is the result of genetics we should find that those with mixed-race backgrounds have higher IQs than those of mostly or exclusively African descent. This has been approximated with skin tone studies in the past; my recollection is that one study showed a slight correlation between lighter skin tone and IQ and the other showed no correlation. There just hasn't been much research done, and I doubt there will ever be much research (which is fine by me).
I'm still not confident because we're not, as Nancy mentioned, completely binary about race even in the US.
What you'd really need to do is a comparative study between the US and somewhere like Brazil or Cuba, which had a different history regarding mixed race. (The US worked by the one-drop-of-blood rule; Spanish and Portuguese colonies had an elaborate caste system where more white blood meant more legal rights.) If it's mainly a cultural distinction, we ought to see a major difference between the two countries -- the light/dark gap should be larger in the former Spanish colony than it is in the US. If culture doesn't matter much, and the gap is purely genetic, it should be the same all around the world.
The other thing I would add, which is easy to lose track of, is that this is not research that should be done exclusively by whites, and especially not exclusively by whites who have an axe to grind about race. Bias can go in that direction as well, and a subject like this demands extraordinary care in controlling for it. Coming out with a bad, politically motivated IQ study could be extremely harmful.
The Minnesota Trans-Racial Adoption Study suggests that a lot of the difference is cultural and/or that white parents are better able to protect their children from the effects of prejudice.
I also have no idea what the practical difference of 4 IQ points might be.
I don't know where you'd find people who were interested enough in racial differences in intelligence to do major studies on it, but who didn't have preconceived ideas.
Afaik, skin tone, hair texture, and facial features make a large difference in how African Americans treat each other.
White people, in my experience, are apt to think of race in binary terms, but this might imply that skin tone affects how African Americans actually get treated.
The Harvard incident is business as usual: http://timtyler.org/political_correctness/
Here is the leaked email by Stephanie Grace if anyone is interested.
A few minor fallacies, but overall a quite respectable and even stimulating conversation; nothing any reasonable person would consider grounds for ostracism. Note the reference to "discussed over dinner". She was betrayed by someone she socialised with.
And yes, I am violating my own advice by bolding that one sentence. ;) I just wanted to drive home how close she may be to a well-meaning, if perhaps a bit untactful, poster on Less Wrong. Again, we need to be careful. What society considers taboo changes over time as well, so one must get a feel for where on the scale of the forbidden a subject sits at any given time, and which way the winds of change are blowing, before deciding whether to discuss it online. Something innocuous could cost you your job a decade or so in the future.
Edit: For anyone wondering what a "Larry Summers" is.
Here's a bit more on the "privileging the hypothesis" bit, taken from here:
My "wrong-headed thinking" radar is picking up more bleeps from this than from the incriminating email:
Paul Graham's "What You Can't Say"
One of the people criticizing the letter accused the letter writer of privileging the hypothesis - that it's only because of historical contingency (i.e. racism) that someone would decide to carve reality between "African-Americans" and "whites" instead of, say, "people with brown eyes" and "people with blue eyes". (She didn't use that exact phrase, but it's what she meant.)
Isn't nearly everything a social construct though? We can divide people based into two groups, those with university degrees and those without. People with them may tend to live longer or die earlier, they may earn more money or earn less, ect. We may also divide people into groups based on self identification, do blondes really have more fun than brunettes or do hipsters really feel superior to nonhipsters or do religious people have lower IQs than self-identified atheists ect Concepts like species, subspecies and family are also constructs that are just about as arbitrary as race.
It doesn't really matter in the end. Regardless of how we carve up reality, we can then proceed to ask questions and get answers. Suppose that in 1900 we ran a global test to see whether blue-eyed or brown-eyed people have higher IQs. Lo and behold, we see that brown-eyed people have higher IQs. But in 2050 the reverse is true. What happened? The brown-eyed population was heterogeneous and its demographics changed! However, if we looked at skin cancer rates, we would still see that people with blue eyes have higher rates of skin cancer in both periods.
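The composition effect above is just weighted-average arithmetic. A toy sketch (all numbers invented for illustration; the subpopulations and their means are purely hypothetical):

```python
# A heterogeneous group's mean can shift purely because its mix of
# subpopulations changes, with no change in any subpopulation itself.

def group_mean(sub_means, weights):
    """Weighted mean of subpopulation means (weights sum to 1)."""
    return sum(m * w for m, w in zip(sub_means, weights))

sub_means = [95, 110]  # hypothetical means of two subpopulations

mean_1900 = group_mean(sub_means, [0.8, 0.2])  # mostly subpopulation A
mean_2050 = group_mean(sub_means, [0.2, 0.8])  # mostly subpopulation B

print(mean_1900)  # roughly 98
print(mean_2050)  # roughly 107
```

A causally linked trait like skin-cancer susceptibility, by contrast, travels with each individual, so it stays stable under the same demographic shift.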
So why should we bother carving up reality on this racial metric and asking questions about it? For the same reason we bother to carve up reality on the family or gender metric: we base policy on it. If society were colour-blind, there would be no need for this. But I hope everyone here can see that society isn't colour-blind.
For example, the ethical status of affirmative action (which is currently framed as a necessary adjustment against biases, not as reparations for past wrongs) depends on what the data has to say about group differences.
If the data shows that people with blue eyes in our country have lower mean IQs when controlling for socioeconomic status and the like, we shouldn't blame racism for their higher college dropout rates if those rates are what we'd expect after controlling for IQ. To keep the policy would mean discriminating against competent brown-eyed people. But if there is no difference, then the policy is justified, unless it turns out there is another reason behind the gap that has nothing to do with discrimination.
I hope you agree, however, that (regardless of what the truth of this particular matter is) someone should not be vilified for asking questions or proposing hypotheses regarding social constructs that we have in place, regularly operate with, and even make quantifiable claims about.
Undiscriminating skepticism strikes again: here's the thread on the very topic of genetic IQ differences.
Oh good. Make it convenient for the guys running background searches.
Today, while I was attending an honors banquet, a girl in my class and her boyfriend were arguing over whether or not black was a color. When she had somewhat convinced him that it wasn't (I say somewhat because the argument was more or less over and he didn't have a rebuttal), I asked, "Wait, are you saying I can't paint with black paint?" She conceded that, of course, black paint can be used to paint with, but that black wasn't technically a color. At which point I explained that we were likely using two different definitions of color, and that we should spell out what we meant. I gave two definitions: 1] the various shades which the human eye sees and the brain processes; 2] the specific wavelengths of light that the human eye can pick up. The boyfriend and I were using definition 1, whereas she was using definition 2. And with that cleared up, the debate ended.
Note: Neither definition is word for word, but they're reasonably close. I was simply making the distinction between the wavelength itself and the process of seeing something and placing it in a certain color category.
By her definition, the yellow color you see on a computer screen is not a color at all, since it's made up of two wavelengths of light which happen to stimulate the red and green cone cells in your retina in approximately the same way that yellow light would.
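The screen-yellow point is just additive mixing of the display's primaries. A minimal sketch (using the usual 8-bit RGB convention; the `mix` helper is mine, not from any library): a display's "yellow" is the red and green primaries lit together, with no light near yellow's ~580 nm wavelength emitted at all.

```python
# Additive color mixing: a screen produces "yellow" by lighting its red
# and green subpixels together, not by emitting yellow-wavelength light.

def mix(*colors):
    """Additively mix RGB triples, clipping each channel at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

red   = (255, 0, 0)
green = (0, 255, 0)

print(mix(red, green))  # (255, 255, 0), which the eye reads as yellow
```

Two physically different spectra producing the same perceived color is what vision science calls metamerism, and it is exactly why definition 2 misfires here.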
This will replace Eliezer's tree falling in a forest sound as my go-to example of how an algorithm feels on the inside about wrong questions.
One could argue that definition 2 is Just Wrong, because it implies that purple isn't a color (purple doesn't have a wavelength, it is non-spectral).
Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.
<30 seconds later>
After a brief 10 word discussion on #lesswrong, I've made a lesswrong team :p
Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.
Does anyone know the relative merits of folding@home and rosetta@home, which I currently run? I don't understand enough of the science involved to compare them, yet I would like to contribute to the project which is likely to be more important. I found this page, which explains the differences between the projects (and has some information about other distributed computing projects), but I'm still not sure what to think about which project I should prefer to run.
Personally I run Rosetta@home because, based on my research, it could be more useful for designing new proteins and computationally predicting protein function. Folding@home seems to be more about understanding how proteins fold, which can help with some diseases, but isn't nearly as game-changing as in silico design and shape prediction would be.
I also think that the SENS Foundation (Aubrey de Grey & co) have some ties to Rosetta, and might use it in the future to design some proteins.
I'm a member of the Lifeboat Foundation team: http://lifeboat.com/ex/rosetta.home
But we could also create a Less Wrong team if there's enough interest.
Hooray! Hooray! It's First of May!
Question: Which strongly held opinion did you change in a notable way, since learning more about rationality/thinking/biases?
I stopped being a theist a few years ago. That was due more to what Less Wrong people would call "traditional rationalism" than to the sort often advocated here (I actually identify as closer to a traditional rationalist than a strict Bayesian, but I suspect that the level of disagreement is smaller than Eliezer makes it out to be). And part of this was certainly also an emotional reaction to having the theodicy problem thrown in my face, rather than direct logic.
One major update that occurred when I first took intro psych was realizing how profoundly irrational the default human thinking processes were. Before then, my general attitude was very close to humans as the rational animal. I'm not sure how relevant that is, since that's saying something like "learning about biases taught me that we are biased." I don't know if that's very helpful.
My political views have updated a lot on a variety of different issues. But I suspect that some of those are due to spending time with people who have those views rather than actually getting relevant evidence.
I've updated on how dangerous extreme theism is. It may sound strange, but this didn't arise so much from things like terrorism as from becoming more aware of how many strongly held beliefs about the nature of the world are out there that are motivated by religion and utterly at odds with reality. This was not about evolution, which even in my religious phases I understood, and I was annoyed by the failure of religious compatriots to understand it. Rather, this has included geocentrism among the Abrahamic religions, flat-Earthism among some Islamic extremists, spontaneous generation among ultra-Orthodox Jews (no, really. Not a joke. And not even microscopic spontaneous generation, but spontaneous generation of mice), and the belief among some ultra-Orthodox Jews that the kidneys are the source of moral guidance (which they use as an argument against kidney transplants).
My three most recent major updates (last six months or so) are: 1) Thinking that cryonics has a substantial success probability (although I still think it is very low). This came not from learning more about rationality, but rather from reading some of the stuff here and then going back and trying to find out more about cryonics. Learning that the ice-formation problem is close to completely solved substantially changed my attitude. 2) Deciding that there's a high chance we'll have space elevators before we have practical fusion power. (This is a less trivial observation than one might think, since once one has a decent space elevator it becomes pretty cheap to put up solar power satellites.) This is to some extent a reevaluation based primarily on time-frames given by relevant experts. 3) Deciding that there's a substantial chance that P=NP may be undecidable in ZFC. This update occurred because I was reading about how complexity results can be connected to the provability of certain classes of statements in weakened forms of the Peano axioms. That makes it sound like the question might belong to a class of problems that have decent reasons for being undecidable.
It is! I am repeatedly surprised by a) basic-level insights that are not widespread, b) insights that other people consider basic that I do not have, and c) applications of an idea I understand to an area I did not think of applying it to.
To list a few:
People are biased => I am biased!
Change is possible.
Understanding is possible.
I am a brain in a vat.
Real life rocks :-)
Even after learning about cached thoughts, happy death spirals, and many others, I still managed to fall into their traps.
So I consider it helpful to see where someone applies biases.
That statement in itself looks like a warning sign.
Yeah, being aware that there are biases at play doesn't mean I'm at all sure I'm able to correct for all of them. The problem is made more complicated by the fact that for each of the views in question, I can point to new information leading to the update. But I don't know if in general that's the actual cause of the updates.
Theism. Couldn't keep it. In the end, it wasn't so much that the evidence was good -- it had always been good -- as that I lost the conviction that "holding out" or "staying strong" against atheism was a virtue.
Standard liberal politics, of the sort that involved designing a utopia and giving it to people who didn't want it. I had to learn, by hearing stories, some of them terrible, that you have no choice but to respect and listen to other people, if you want to avoid hurting them in ways you really don't want to hurt them.
I just listened to UC Berkeley's "Physics for Future Presidents" course on iTunes U (highly recommended) and I thought, "Surely no one can take theism seriously after experiencing what it's like to have real knowledge about the universe."
Disagreed. My current opinion is that you can be a theist and combine that with pretty much any other knowledge. Eliezer points to Robert Aumann as an example. For someone who has theism hardcoded into their brain and treats it as a different kind of knowledge than physics, there can be virtually no visible difference in everyday life from a typical atheist. I think the problem is not so much the theism itself, but that people base decisions on it.
Oh, it's true. I know deeply religious scientists. Some of them are great scientists. Let's not get unduly snide about this.
There seems to be a common thought-pattern among intelligent theists. When they learn a lot about the physics of the Universe, they don't think "I should only be satisfied with beliefs in things that I understand in this deep way." Instead, they think, "As smart as I am, I have only this dim understanding of the universe. Imagine how smart I would have to be to create it! Truly, God is wonderful beyond comprehension."
I'm no longer a propertarian/Lockean/natural rights libertarian. Learning about rationality essentially made me feel comfortable letting go of a position that I honestly didn't have a good argument for (and I knew it). The ev-psych stuff scared the living hell out of me (and the libertarianism* apparently).
*At least that sort of libertarianism
To answer my own question:
changed political and economic views (similar to Matt).
changed views on the effects of nutrition and activity on health (including the actions that follow from that)
changed view on the dangers of GMO (yet again)
I became aware of areas where I am very ignorant of opposing arguments, and try to counterbalance
I finally understand the criticisms about the skeptics movement
I repeatedly underestimated the amount of ignorance in the world, and was shocked when I discovered it
And on the funnier side: last week I found out that I had learned a minor physics fact wrong. That was not a strongly held opinion, just a fact I never looked up again until now. For some reason I was always convinced that the volume increase in freshly frozen water is 10x, while it's actually more like 9%.
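The ~9% figure falls straight out of the densities. A back-of-envelope check (densities are standard textbook values at 0 °C):

```python
# Water expands on freezing because ice is less dense than liquid water.
rho_water = 0.99984  # g/cm^3, liquid water at 0 degrees C
rho_ice   = 0.9167   # g/cm^3, ice at 0 degrees C

# Same mass before and after, so the volume ratio is the inverse
# of the density ratio.
expansion = rho_water / rho_ice - 1
print(f"{expansion:.1%}")  # about 9%
```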
Very interesting. If you find the time, could you elaborate on these? I am particularly interested in hearing more of the criticism of the skeptics movement.