You know what to do.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Is anyone else here disturbed by the recent Harvard incident, where Stephanie Grace's perfectly reasonable email, in which she merely expresses agnosticism about the possibility that the well-documented IQ differences between groups are partially genetic, drew harsh and inaccurate condemnation from the Harvard Law School dean?
I feel sorry for the girl, since she trusted the wrong people (the email was allegedly leaked by one of her girlfriends who got into a dispute with her over a man). We need to be extra careful to self-censor any rationalist discussions of cows "everyone" agrees are holy. These are things I don't feel comfortable even discussing here, since they have ruined many careers and lives through relentless persecution. Even recanting doesn't help at the end of the day, since you are a Google search away, and people who may not even understand the argument will hate you intensely. Scary.
I mean, surely everyone here agrees that the only way to discover truth is to allow all the hypotheses to stand on their own, without privileging a few by suppressing their competition. Why is our society so insane that this regularly happens even concerning views that many re...
I'm a bit upset.
In my world, that's dinner-table conversation. If it's wrong, you argue with it. If it upsets you, you are more praiseworthy the more you control your anger. If your anti-racism is so fragile that it'll crumble if you don't shut students up -- if you think that is the best use of your efforts to help people, or to help the cause of equality -- then something has gone a little screwy in your mind.
The idea that students -- students! -- are at risk if they write about ideas in emails is damn frightening to me. I spent my childhood in a university town. This means that political correctness -- that is, not being rude on the basis of race or ethnicity -- is as deep in my bones as "please" and "thank you." I generally think it's a good thing to treat everyone with respect. But the other thing I got from my "university values" is that freedom to look for the truth is sacrosanct. And if it's tempting to shut someone up, take a few deep cleansing breaths and remember your Voltaire.
My own beef with those studies is that you cannot (to my knowledge) isolate the genetics of race from the experience of race. Every single black subject whose...
Here is the leaked email by Stephanie Grace if anyone is interested.
... I just hate leaving things where I feel I misstated my position.
I absolutely do not rule out the possibility that African Americans are, on average, genetically predisposed to be less intelligent. I could also obviously be convinced that by controlling for the right variables, we would see that they are, in fact, as intelligent as white people under the same circumstances. The fact is, some things are genetic. African Americans tend to have darker skin. Irish people are more likely to have red hair. (Now on to the more controversial:)
Women tend to perform less well in math due at least in part to prenatal levels of testosterone, which also account for variations in mathematics performance within genders. This suggests to me that some part of intelligence is genetic, just like identical twins raised apart tend to have very similar IQs and just like I think my babies will be geniuses and beautiful individuals whether I raise them or give them to an orphanage in Nigeria. I don’t think it is that controversial of an opinion to say I think it is at least possible that African Americans are less intelligent on a gene
Isn't nearly everything a social construct, though? We can divide people into two groups, those with university degrees and those without. People with degrees may tend to live longer or die earlier; they may earn more money or less, etc. We may also divide people into groups based on self-identification: do blondes really have more fun than brunettes, do hipsters really feel superior to non-hipsters, do religious people have lower IQs than self-identified atheists, etc.? Concepts like species, subspecies, and family are also constructs that are just about as arbitrary as race.
It doesn't really matter in the end. Regardless of how we carve up reality, we can then proceed to ask questions and get answers. Suppose that in 1900 we ran a global test to see whether blue-eyed or brown-eyed people have higher IQs. Lo and behold, brown-eyed people have higher IQs. But in 2050 the reverse is true. What happened? The population with brown eyes was heterogeneous, and its demographics changed! If we looked at skin cancer rates instead, we would still see that blue-eyed people have higher rates in both periods.
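Here's a toy simulation of what I mean (all the subgroup shares and means below are invented for illustration):

```python
# Toy simulation: a heterogeneous group's average can reverse when its
# internal demographics shift, while a homogeneous group's stays put.
def group_mean(subgroups):
    """subgroups: list of (population_share, subgroup_mean) pairs."""
    return sum(share * mean for share, mean in subgroups)

# Brown-eyed group as a mix of two invented subpopulations, A and B.
brown_1900 = [(0.8, 105), (0.2, 95)]  # mostly subpopulation A in 1900
brown_2050 = [(0.2, 105), (0.8, 95)]  # mostly subpopulation B in 2050
blue = [(1.0, 100)]                   # homogeneous, for simplicity

print(group_mean(brown_1900), group_mean(blue))  # 103.0 100.0
print(group_mean(brown_2050), group_mean(blue))  # 97.0 100.0
```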
So why should we bother carving up reality on this racial m...
My "wrong-headed thinking" radar is picking up more bleeps from this than from the incriminating email:
PS: Also, why does the Dean equate intelligence with genetic superiority, and implicitly even with worth as a person?
See Michael Vassar's discussion of this phenomenon. Also, I think that people discussing statements they see as dangerous often implicitly (and unconsciously) adopt the frames that make those statements dangerous, which they (correctly) believe many people unreflectively hold and can't easily be talked out of, and treat those frames as simple reality, in order to more simply and credibly call the statement and the person who made it dangerous and Bad.
This is the best exposition I have seen so far of why I believe strongly that you are very wrong.
On a Bus in Kiev
I remember very little about my childhood in the Soviet Union; I was only seven when I left. But one memory I have is being on a bus with one of my parents, and asking something about a conversation we had had at home, in which Stalin and possibly Lenin were mentioned as examples of dictators. My parent took me off the bus at the next stop, even though it wasn’t the place we were originally going.
Please read the whole thing and remember that this is where the road inevitably leads.
Yes, self-censorship is Prisoner's Dilemma defection, but unilaterally cooperating has costs (in terms of LW's nominal purpose) which may outweigh that (and which may in turn be outweighed by considerations having nothing to do with this particular PD).
Also, I think that's an overly dramatic choice of example, especially in conjunction with the word "inevitably".
I'm sympathetic to this as a general principle, but it's not clear to me that LW doesn't have specific battles to fight that are more important than the general principle.
Dinnertime conversations between regular, even educated people do not contain probabilistic causal analyses. In the email Grace claimed something was a live possibility and gave some reasons why. Her argument was not of the quality we expect comments to have here at Less Wrong. And frankly, she does sound kind of annoying.
But that all strikes me as irrelevant compared to being made into a news story and attacked on all sides, by her dean, her classmates and dozens of anonymous bloggers. By the standards of normal, loose social conversation she did nothing deserving of this reaction.
I feel a chilling effect and I've only ever argued against the genetic hypothesis. Frankly, you should too since in your comment you quite clearly imply that you don't know for sure there is no genetic component. My take from the reaction to the email is that the only socially acceptable response to encountering the hypothesis is to shout "RACIST! RACIST!" at the top of your lungs. If you think we'd be spared because we're more deliberate and careful when considering the hypothesis you're kidding yourself.
He who controls the karma controls the world.
Less Wrong dystopian speculative fiction: An excerpt.
JulXutil sat, legs crossed in the lotus position, at the center of the Less Wrong hedonist-utilitarian subreddit. Above him, in a foot-long green oval, was his karma total: 230450036. The subreddit was a giant room with a wooden floor and rice paper walls. In the middle the floor was raised, and then raised again to form a shallow step pyramid with bamboo staircases linking the levels. The subreddit was well lit. Soft light emanated from the rice paper walls as if they were lit from behind and Japanese lanterns hung from the ceiling.
Foot soldiers, users JerelYu and Maxine, stood at the top of each staircase to deal with the newbies who wanted to bother the world-famous JulXutil, and to spot and downvote trolls before they did much damage. They also kept their eyes out for members of rival factions, because while /lw/hedutil was officially public, every Less Wrong user knew this subreddit was Wireheader territory and had been since shortly after Lewis had published his famous Impossibility Proof for Friendliness. The stitched image of an envelope on JulXutil's right sleeve turned red. H...
Ask A Rationalist--choosing a cryonics provider:
I'm sold on the concept. We live in a world beyond the reach of god; if I want to experience anything beyond my allotted threescore and ten, I need a friendly singularity before my metabolic processes cease; or information-theoretic preservation from that cessation onward.
But when one gets down to brass tacks, the situation becomes murkier. Alcor whole-body suspension is nowhere near as cheap as the numbers that get thrown around in discussions of cryonics--if you want to be prepared for senescence as well as accidents, a 20-year payoff on whole life insurance plus Alcor dues runs near $200/month; painful but not impossible for me.
The other primary option, the Cryonics Institute, is 1/5th the price; but the future availability--even at additional cost--of timely suspension is called into question by their own site.
Alcor shares case reports, but no numbers for average time between death and deep freeze, which seems to stymie any easy comparison of effectiveness. I have little experience reading balance sheets, but both companies seem reasonably stable. What's a prospective immortal on a budget to do?
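To put rough numbers on it (taking the figures above at face value, and assuming CI's monthly cost really does scale as simply 1/5th of Alcor's):

```python
# Back-of-the-envelope 20-year outlay from the figures quoted above.
alcor_monthly = 200              # insurance premium + dues, rough estimate
ci_monthly = alcor_monthly / 5   # "1/5th the price" -- assumed to scale linearly
years = 20

for name, monthly in [("Alcor", alcor_monthly), ("CI", ci_monthly)]:
    print(f"{name}: ${monthly * 12 * years:,.0f} over {years} years")
# Alcor: $48,000 over 20 years
# CI: $9,600 over 20 years
```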
I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to Amazon, going around on LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.
I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.
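For concreteness, a minimal sketch of what such a script might do (the "lesswrong-20" tag is a placeholder I made up, and real Amazon links come in more formats than this handles):

```python
# Sketch: add an affiliate tag to Amazon links that don't already have one.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

AFFILIATE_TAG = "lesswrong-20"  # placeholder, not a real affiliate ID

def add_affiliate_tag(url: str) -> str:
    parts = urlparse(url)
    if "amazon." not in parts.netloc:
        return url                     # not an Amazon link; leave untouched
    query = parse_qs(parts.query)
    if "tag" in query:
        return url                     # already carries an affiliate code
    query["tag"] = [AFFILIATE_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("http://www.amazon.com/dp/0140283293"))
# http://www.amazon.com/dp/0140283293?tag=lesswrong-20
```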
Does this sound viable?
I have a (short) essay, 'Drug heuristics', in which I take a crack at combining Bostrom's evolutionary heuristics and nootropics - both topics I consider to be quite LW-germane but underdiscussed.
I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.
Today, while I was attending an honors banquet, a girl in my class and her boyfriend were arguing over whether or not black was a color. When she had somewhat convinced him that it wasn't (I say somewhat because the argument was more-or-less ending and he didn't have a rebuttal), I asked, "Wait, are you saying I can't paint with black paint?" She conceded that of course black paint can be used to paint with, but that black wasn't technically a color. At which point I explained that we were likely using two different definitions of color, and that we should explain what we mean. I gave two definitions: 1] the various shades which the human eye sees and the brain processes; 2] the specific wavelengths of light that the human eye can pick up. The boyfriend and I were using definition 1, whereas she was using definition 2. And with that cleared up, the debate ended.
Note: Both definitions aren't word for word, but somewhat close. I was simply making the distinction between the wavelength itself and the process of seeing something and placing it in a certain color category.
I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.
Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.
A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.
A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.
I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"
Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.
The most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.
Question: Which strongly held opinion did you change in a notable way, since learning more about rationality/thinking/biases?
Theism. Couldn't keep it. In the end, it wasn't so much that the evidence was good -- it had always been good -- as that I lost the conviction that "holding out" or "staying strong" against atheism was a virtue.
Standard liberal politics, of the sort that involved designing a utopia and giving it to people who didn't want it. I had to learn, by hearing stories, some of them terrible, that you have no choice but to respect and listen to other people, if you want to avoid hurting them in ways you really don't want to hurt them.
Has anybody considered starting a folding@home team for lesswrong? Seems like it would be a fairly cheap way of increasing our visibility.
After a brief 10 word discussion on #lesswrong, I've made a lesswrong team :p
Our team number is 186453; enter this into the folding@home client, and your completed work units will be credited.
Self-forgiveness limits procrastination
...Wohl's team followed 134 first-year undergrads through their first mid-term exams to just after their second lot of mid-terms. Before the initial exams, the students reported how much they'd procrastinated with their revision and how much they'd forgiven themselves. Next, midway between these exams and the second lot, the students reported how positive or negative they were feeling. Finally, just before the second round of mid-terms, the students once more reported how much they had procrastinated in their exam prep.
I recalled the "strangest thing an AI could tell you" thread, and I came up with another one in a dream. Tell me how plausible you think this one is:
Claim: "Many intelligent mammals (e.g. dogs, cats, elephants, cetaceans, and apes) act just as intelligently as feral humans, and would be capable of human-level intelligence with the right enculturation."
That is, if we did to pet mammals something analogous to what we do to feral humans when discovered, we could assimilate them; their deficiencies are the result of a) not knowing what assimilation re...
Yes, and there's been a lot of work with African Greys already. Irene Pepperberg and her lab have done most of the really pioneering work. They've shown that Greys can recognize colors and small numbers, and in some cases produce very large vocabularies. There's also evidence that Greys sometimes overregularize: that is, they apply general grammatical rules to conjugate/decline words even when the words are irregular. This happens with human children as well; for example, children will frequently say "runned" when they mean "ran" or "mouses" when they mean "mice", and many similar examples. This is strong evidence that they are internalizing general rules rather than simply repeating words they've heard. Since Greys do the same thing, we can conclude that parrots aren't just parroting.
I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?
The Cognitive Bias song:
http://www.youtube.com/watch?v=3RsbmjNLQkc
Not very good, but, you know, it's a song about cognitive bias, how cool is that?
Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.
Edit: They are tagged as "siai interviews".
Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...
Impossible motion: magnet-like slopes
http://illusioncontest.neuralcorrelate.com/2010/impossible-motion-magnet-like-slopes/
http://www.nature.com/news/2010/100511/full/news.2010.233.html
Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.
I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.
Is there any reason to think the common intuition is right?
Neanderthal genome reveals interbreeding with humans:
http://www.newscientist.com/article/dn18869-neanderthal-genome-reveals-interbreeding-with-humans
I have a request. My training is in science & engineering, but I am totally ignorant of basic economics. I have come to see this as a huge blind spot. I feel my views on social issues are fairly well-reasoned, but when it comes to anything fiscal, it's all very touchy-feely at present.
Can anyone recommend intro material on economics (books, tutorials)? I ask on LW because I have no idea where to start and who to trust. If you offer a recommendation of a book pushing some particular economic "school of thought," that's fine, but I'd like to know what that school is.
Thanks!
So, I'm somewhat new to this whole rationality/Bayesianism/(nice label that would describe what we do here on LessWrong). Are there any podcasts or good audiobooks that you'd recommend on the subjects of LessWrong? I have a large amount of time at work that I can listen to audio, but I'm not able to read during this time. Does anyone have any suggestions for essential listening/reading on subjects similar to the ones covered here?
This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists
Scientists in the US have succeeded in developing the first synthetic living cell.
The researchers constructed a bacterium's "genetic software" and transplanted it into a host cell.
The resulting microbe then looked and behaved like the species "dictated" by the synthetic DNA.
I remember hearing a few anecdotes about abstaining from food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered my interest in this topic again, but a quick Google search did not return much (fasting is drowned in religious references).
I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?
Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.
In another comment I coined (although not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not make druggie losers, wholesale killers, or other sorts of paperclippers. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying goes: if not now, when, and if not this, what?
I don't have children and don't intend to. I ...
Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.
What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.
"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"
"Senator, I've made a thorough investigation, and I'm afraid there has been sin in th...
You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.
If we get forums, I'd like a projects section. A person could create a project, which is a forum centered around a problem to work on with other people over an extended period of time.
Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax-free ("'cause it's to provide for the spouse and kids"), and to withdraw from your premiums and borrow against yourself (and pay yourself back).
Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?
Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.
Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...
Recycling an email I wrote in an Existential Risk Reduction Career Network discussion. The topic looked at various career options, specifically with an eye towards accumulating wealth - the two major fields recognized being finance and software development.
Frank Adamek enquired as to my (flippant) vanilla latte comments, which revealed a personal blind-spot. Namely, that my default assumption for people with an interest in accumulating wealth is that they're motivated by an interest in improving the quality of their own life (e.g., expensive gadgets, etc.)...
Some people have curious ideas about what LW is; from http://www.fanfiction.net/r/5782108/18/1/ :
"HO-ley **! That was awesome! You might also be interested to know that my brother, my father and I all had a wonderful evening reading that wikipedia blog on rationality that you are named for. Thank you for this, most dearly and truly."
"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"
Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.
http://www3.interscience.wiley.com/journal/123213582/abstract?CRETRY=1&SRETRY=0
Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth who had had one psychotic episode or similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears ...
Would it be reasonable to request a LW open thread digest to accompany these posts? A simple bullet list of most of the topics covered would be nice.
Question: How many of you, readers and contributors here on this site, actually do work on some (nontrivial) AI project?
Or have an intention to do that in the future?
I was going through the rationality quotes, and noticed that I always glanced at the current point score before voting. I wasn't able to not do that.
It might be useful to have a setting under which the points on a comment, and maybe also on a post, would be hidden until after you voted on it.
Question: How do you apply the rationalist ideas you learned on lesswrong in your own (professional and/or private) life?
I want to understand Bayesian reasoning in detail, in the sense that I want to take a statement that is relevant to our daily life and then try to find exactly how much I should believe it, based on the beliefs that I already have. I think this might be a good exercise for the LW community? If yes, then let's take a statement, for example: "The whole world is going to be nuked before 2020." Now, based on whatever you know right now, you should form some percentage of belief in this statement. Can someone please show me exactly how to do that?
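To make the question concrete, here is my (possibly naive) understanding of the mechanics of a single update, with numbers I invented purely for illustration:

```python
# One Bayesian update on H = "the whole world is nuked before 2020",
# given a single piece of evidence E. All three numbers are invented.
prior = 0.01           # P(H): a made-up starting credence
p_e_given_h = 0.5      # P(E | H)
p_e_given_not_h = 0.1  # P(E | not H)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(posterior)  # ~0.048: E raised the credence roughly fivefold
```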
Has anyone read "Games and Decisions: Introduction and Critical Survey" by R. Duncan Luce and Howard Raiffa? Any thoughts on its quality?
Hover over the red button at the bottom (to the left of the RSS button and social bookmarking links) for a bonus panel.
Edit: "Whoever did the duplication" would be a better answer than "The guy who came first", admittedly. The duplicate and original would both believe themselves to be the original, or, if they are a rationalist, would probably withhold judgment.
Cool paper: When Did Bayesian Inference Become “Bayesian”?
http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf
No-name terrorists now CIA drone targets
http://www.cnn.com/2010/TECH/05/07/wired.terrorist.drone.strikes/index.html?hpt=C1
I tried to post this discussion of the Sleeping Beauty problem to Less Wrong, but it didn't work.
http://neq1.wordpress.com/2010/05/07/beauty-quips-id-shut-up-and-multiply/
So I just posted it on my blog.
Can there be a big lag time between when you submit something to lesswrong and to when it shows up in recent posts? (I waited two days before giving up)
The unrecognized death of speech recognition
Interesting thoughts about the limits encountered in the quest for better speech recognition, the implications for probabilistic approaches to AI, and "mispredictions of the future".
What do y'all think?
Curiously, what happens when I refresh LW (or navigate to a particular LW page like the comments page) and I get the "error encountered" page with those little witticisms? Is the site 'busy' or being modified or something else ...? Also, does everyone experience the same thing at the same moment or is it a local phenomenon?
Thanks ... this will help me develop my 'reddit-page' worldview.
By the way: getting crashes on the comments page again. Prior to 1yp8 works and subsequent to 1yp8 works; I haven't found the thread with the broken comment.
Edit: It's not any of the posts after 23andme genome analysis - $99 today only in Recent Posts, I believe.
Edit 2: Recent Comments still broken for me, but ?before=t1_1yp8 is no longer showing the most recent comments to me - ?before=t1_1yqo continues where the other is leaving off.
Edit 3: Recent Comments has now recovered for me.
I'm going to be giving a lecture soon on rationality. I'm probably going to focus on human cognitive bias. Any thoughts on what I should absolutely not miss including?
Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people supporting cryonics is why do they assume that the future will be a good place to live in? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?
Here's my question to everyone:
What do you think are the benefits of reading fiction (all kinds, not just science fiction) apart from the entertainment value? Whatever you're learning about the real world from fiction, wouldn't it be more effective to read a textbook instead or something? Is fiction mostly about entertainment rather than learning and improvement? Any thoughts?
I have a cognitive problem and I figured someone might be able to help with it.
I think I might have trouble filtering stimuli, or something similar. A dog barking, an ear ache, loud people, or a really long day can break me down. I start to have difficulty focusing. I can't hold complex concepts in my head. I'll often start a task, and quit in the middle because it feels too difficult and try to switch to something else, ultimately getting nothing done. I'll have difficulty deciding what to work on. I'll start to panic or get intimidated. It's really an is...
Criminal profiling, good and bad
Article discusses the shift from impressive-looking guesswork to use of statistics. Also has an egregious example of the guesswork approach privileging the hypothesis.
There's an article in this month's Nature examining the statistical evidence for universal common descent. This is the first time someone has taken the massive amounts of genetic data and applied a Bayesian analysis to determine whether the existence of a universal common ancestor is the best model. Most of what we generally think of as evidence for evolution and shared ancestry is evidence for shared ancestry of large collections, such as mammals or birds, or for smaller groups. Some of the evidence is for common ancestry of a phylum. There is prior eviden...
I don't know if anyone else was watching the stock market meltdown in realtime today, but as the indices plunged down the face of what looked a bit like an upside-down exponential curve, driven by HFT algorithms gone wild, and the financial news sites started going down under the traffic, I couldn't help thinking that this is probably what the singularity would look like to a human. Being invested in VXX made it particularly compelling viewing.
Continuing discussion with Daniel Varga:
It's difficult to discuss the behavioral dispositions of these imagined cosmic civilizations guided by a single utility function, without making a lot of assumptions about cosmology, physics, and their cosmic abundance. For example, the accelerating expansion of the universe implies that the universe will eventually separate into gravitationally bound systems (galactic superclusters, say) which will be causally isolated from each other; everything else will move beyond the cosmological horizon. The strategic behavio...
I recently heard a physics lecture claim that the luminiferous aether didn't really get kicked out of physics. We still have a mathematical structure, which we just call "the vacuum", through which electromagnetic waves propagate. So all we ever did was kill the aether's velocity-structure, right?
If the future of the universe is a 'heat death' in which no meaningful information can be stored, and in which no meaningful computation is possible, what will it matter if the singularity happens or not?
Ordinarily, we judge the success of a project by looking at how much positive utility has come of it.
We can view the universe we live in as such a project. Engineering a positive singularity looks like the only really good strategy for maximizing the expression of complex human values (simplified as 'utility') in the universe.
But if the universe reaches a...
HELP NEEDED Today if at all possible.
So I'm working on a Bayesian approach to the Duhem-Quine problem. Basically, the problem is that an experiment never tests a hypothesis h directly, but only the conjunction of h and some auxiliary assumption a. The standard method for dealing with this is to decompose:
P(h|e) = P(h & a|e) + P(h & -a|e)
So if e falsifies h&a, the first term drops to zero and you end up with:
P(h|e) = P(e|h&-a) * P(h&-a) / P(e)
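A quick numeric sanity check of that update, with probabilities I made up:

```python
# Exercising the decomposition P(h|e) = P(h&a|e) + P(h&-a|e)
# with an invented joint prior; e falsifies h&a, so P(e|h&a) = 0.
p_h_a, p_h_not_a, p_not_h = 0.4, 0.1, 0.5
likelihood = {"h&a": 0.0, "h&-a": 0.3, "-h": 0.2}

p_e = (likelihood["h&a"] * p_h_a
       + likelihood["h&-a"] * p_h_not_a
       + likelihood["-h"] * p_not_h)
p_h_given_e = likelihood["h&-a"] * p_h_not_a / p_e
print(p_h_given_e)  # ~0.23, down from a prior P(h) of 0.5
```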
This guy Strevens objects on the grounds that e can impact h without impac...
It's a vicious cycle-- if you work on something that sounds crank-ish, you get defensive about being seen as a crank, and that defensiveness is also characteristic of cranks. Lather, rinse, repeat.
This seems possibly broadly applicable to me; e.g. replace “crank” with “fanboy”.
Science.
To me it is a process, a method, an outlook on life. But so often it is used as a proper noun: "Science says tomatoes are good for you".
It should be used to encourage rational thinking, clarity of argument and assumption, and rigorous unbiased testing. The pursuit of knowledge and truth. Instead it is often seen as a club, to which you either belong by working in a scientific profession, or you do not.
As a child of a mixed-religion household I felt like an outcast from religion from an early age - it didn't matter that I have beliefs of my ow...
I don't think that the math in Aumann's agreement theorem says what Aumann's paper says that it says. The math may be right, but the translation into English isn't.
Aumann's agreement theorem says:
Let N1 and N2 be partitions of Omega ... Ni is the information partition of i; that is, if the true state of the world is w [an element of Omega], then i is informed of that element Pi(w) of Ni that contains w.
Given w in Omega, an event E is called common knowledge at w if E includes that member of meet(N1, N2) that contains w.
Let A be an event, and let Qi denote...
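To make those definitions concrete, here's a toy computation of the meet (the partitions are my own example, not from the paper):

```python
# Toy information partitions over a six-state Omega, and the "meet"
# (finest common coarsening), whose cells define common knowledge.
Omega = {1, 2, 3, 4, 5, 6}
N1 = [{1, 2}, {3, 4}, {5, 6}]  # agent 1's partition (my example)
N2 = [{1, 2, 3}, {4}, {5, 6}]  # agent 2's partition (my example)

def meet_cell(P, Q, w):
    """The member of meet(P, Q) containing state w: grow a component by
    merging in any cell of either partition that overlaps it."""
    component = {w}
    changed = True
    while changed:
        changed = False
        for cell in P + Q:
            if cell & component and not cell <= component:
                component |= cell
                changed = True
    return component

print(meet_cell(N1, N2, 1))  # {1, 2, 3, 4}
# An event E is common knowledge at w=1 iff E contains all of {1, 2, 3, 4}.
```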
The moral life of babies. This is an article that also recently appeared in the New York Times Magazine.
It covers various scientific experiments to explore the mental life of babies, finding evidence of moral judgements, theory of mind, and theory of things (e.g. when two dolls are placed behind a screen, and the screen is removed, 5-month-old babies expect to see two dolls).
Unlike many psychological experiments which produce more noise than signal, "these results were not subtle; babies almost always showed this pattern of response."
It also disc...
How should one reply to the argument that there is no prior probability for the outcome of some quantum event that has already happened and split the world into two worlds, each with a different outcome to some test (say, a "quantum coin toss")? The idea is that if you merely sever the quantum event and consider different outcomes to the test (say, your quantum coin landed heads), and consider that the outcome could have been different (your quantum coin could have landed tails), there is no way to really determine who would be "you."...
I'm looking at the forecast for the next year on CNN Money for Google stock (which will likely be an outdated link very soon). But while it's relevant...
I don't know much economics, but this forecast looks absurd to me. What are the confidence intervals? According to this graph, am I pretty much guaranteed to make vast sums of money simply by investing all of what I have in Google stock? (I'm assuming that this is just an example of the world being mad. Unless I really should buy some stock?) What implications does this sort of thing have on very unsavvy i...
Pre-commitment Strategies in Behavioral Economics - PowerPoint by Russell James. Not deep, which is sometimes a good thing.
First step in the AI takeover: gather funds. Yesterday's massive stock market spike took place in a matter of minutes, and it looks like it was in large part due to "glitches" in automatic trading programs. Accenture opened and closed at $41/share, but at one point was trading for $0.01/share. Anyone with $1000, lightning reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.
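The arithmetic, for anyone checking (ignoring fees, fills, and execution realities):

```python
# $1000 at the $0.01 print, sold at the ~$41 level it snapped back to.
shares = 1000 / 0.01        # 100,000 shares
proceeds = shares * 41      # 4,100,000
print(f"${proceeds:,.0f}")  # $4,100,000
```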
http://www.npr.org/blogs/money/2010/05/the_market_just_flipped_out_ma.html
Next month: our new overlords reveal themselves?
Does anybody know what happened to Roko Mijic's blog 'Transhuman Goodness'? It completely vanished. Looks like it has been deleted?
I've created a mailing list for people interested in the future of computation. Not very SL4, but I think it is well worth exploring if RSI doesn't work as people expect.
Looking for help here... does anyone know a good model for cognitive dissonance resolution? I looked at this "constraint satisfaction" model, and I'm not pleased with it:
I did some simulations with the recursion they suggest, and it produces values outside the "activation range" of their "units" if the edge weights, which represent conflicts, aren't chosen carefully (a "unit" represents a belief or cognition, and its "activation" is how strongly present it is in the mind).
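Here's a stripped-down version of the kind of blow-up I mean (this update rule is my generic stand-in for their recursion, not a quote of it):

```python
# Generic constraint-satisfaction recursion: each unit's activation is
# replaced by the weighted sum of its neighbors'. With no squashing and
# edge weights this large, activations escape the intended [-1, 1] range.
weights = {(0, 1): 1.5, (1, 0): 1.5}  # deliberately "badly chosen"
act = [0.5, 0.5]                      # both units start inside [-1, 1]

for step in range(5):
    act = [sum(w * act[j] for (i, j), w in weights.items() if i == k)
           for k in range(2)]
    print(step, act)
# Within two iterations the activations exceed 1.0 and keep growing.
```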
Lacking a decent model for this i...
Here is a little story I've written. "When the sun goes down."
In the dawning light a boy steps out onto the shadowed plane. All around him lie the pits of entry into the other realms. When he reaches the edge of these pits he skirts his way around slowly and carefully until reaching the open space between them. He continues walking in the shadowed plane until the sun has waned completely, and all is dark. And then the shadowed plane becomes the night plane.
In the night the pits cannot be seen, and the boy can no longer walk his way among them to ...
I have read about the argument that the Self-Indication Assumption (SIA) cancels the Doomsday Argument (DA), and I think that the argument fails to take into account the fact that it is not more likely that an observer of a particular birth rank will have existed given the general hypothesis that there will have been many observers in her reference class than given the general hypothesis that there will have been few observers in her reference class, as long as there will have been at least as many as would be necessary for anyone with her birth rank to ex...
A friend linked me to this rather ambitiously described paper: An Algorithm for Consciousness:
...This document offers a complete explanation of the hard problems of consciousness and free will, in only 34 pages. The explanation is given as an algorithm, that can be implemented on a computer as a software program. (Open-)Source code will be released by Jan 2011. A solid background in psychology, computer science & artificial intelligence is useful, but if you're prepared to follow the hyperlinks in the document, it should be possible for most people to e
Danger! You're not looking at the whole system. Children's knowledge doesn't just come from their experience after birth (or even conception), but is implicitly encoded by the interplay between their DNA, the womb, and certain environmental invariants. That knowledge was accumulated through evolution.
So children are not, in their early development, using some really awesome learning algorithm (and certainly not a universally applicable one); rather, they are born with a huge "boost", and their post-natal experiences need to fill in relatively little information, as that initial, implicit knowledge heavily constrains how sensory data should be interpreted and gives useful assumptions that help in modeling the world.
If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using. It's not that there's a problem with the scientific method (though there is), or that we have some powerful Bayesian algorithm to learn from childhood development, but rather, children are springboarding off of a larger body of knowledge.
You seem to be endorsing the discredited "blank slate" paradigm.
A better strategy would be to look at how evolution "learned" and "encoded" that data, and how to represent such assumptions about this environment, which is what I'm attempting to do with a model I'm working on: it will incorporate the constraints imposed by thermodynamics, life, and information theory, and see what "intelligence" means in such a model, and how to get it.
(By the way, I made essentially this same point way back when. I think the same point holds here.)
Re: "If you were to look only at what sensory data children get, you would find that it is woefully insufficient to "train" them to the level they eventually reach, no matter what epistemology they're using."
That hasn't been demonstrated - AFAIK.
Children are not blank slates - but if they were highly intelligent agents with negligible a priori knowledge, they might well wind up eventually being much smarter than adult humans. In fact, that would be strongly expected - for a sufficiently smart agent.