Open Thread: April 2010
An Open Thread: a place for things foolishly April, and other assorted discussions.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.
Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, YouTube playlist. Part One. 8 parts, ~75 minutes.
Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.
People have been worrying about that one since Malthus. Turns out, production capacity can increase exponentially too, and when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.
Simple thermodynamics guarantees that any growing consumption of resources is unsustainable on a long enough timescale - even if you dispute the implicit timescale in Dr. Bartlett's talk*, at some point planning will need to account for the fundamental limits. Ignoring the physics is a common error in economics (even professional economics, depressingly).
* Which you appear not to have watched through - for shame!
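Bartlett's arithmetic is easy to reproduce in a toy sketch (the 7% growth rate and the resource units below are arbitrary assumptions, not figures from the talk): at p% annual growth the doubling time is roughly 70/p years, and enlarging a fixed resource pool by a factor of 1000 buys only a logarithmic number of extra years.

```python
# Toy illustration of Bartlett's point: at steady growth, consumption
# doubles every ~70/p years, so a fixed resource pool is exhausted on
# a timescale only logarithmic in its size.
import math

def doubling_time(percent_growth):
    """Years to double at a given percent annual growth rate."""
    return math.log(2) / math.log(1 + percent_growth / 100)

def years_until_exhausted(initial_rate, percent_growth, total_resource):
    """Years until cumulative consumption, growing exponentially,
    uses up a fixed resource pool (continuous approximation)."""
    r = math.log(1 + percent_growth / 100)
    return math.log(1 + total_resource * r / initial_rate) / r

print(round(doubling_time(7), 1))  # ~10 years at 7% growth
# A 1000x larger resource pool buys only ~100 extra years at 7%:
print(round(years_until_exhausted(1, 7, 1_000), 1))
print(round(years_until_exhausted(1, 7, 1_000_000), 1))
```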
Yes, obviously thermodynamics limits exponential growth. I'm saying that exponential growth won't continue indefinitely, that people (unlike bugs) can, will, and in fact have already begun to voluntarily curtail their reproduction.
What kind of reproductive memes do you think get selected for?
How strong is the penalty for defection?
Yeah, this obviously matters a lot. Right now it's low to non-existent outside the People's Republic of China, though I suppose that could change. There are a lot of barriers to effective enforcement of reproductive prohibitions: incredibly difficult-to-solve cooperation problems, organized religions, and assorted rights and freedoms people are used to. I suppose a sufficiently strong centralized power could solve the problem, though such a power could be bad for other reasons. My sense is that the prospects for reliable enforcement are low, but obviously a singularity-type superintelligence could change things.
I’m not quite sure that penalties are that low outside China.
There are of course places where the penalties for having many babies are low, and there are even states that encourage having babies — but the latter is because birth rates are below replacement, so that's outside our exponential-growth discussion. I'm not sure about the former, but the obvious cases (very poor countries) are in the Malthusian scenario already due to high death rates.
But in (relatively) rich economies there are non-obvious implicit limits to reproduction: you’re generally supposed to provide a minimum of care to children; even more, that “minimum” tends to grow with the richness of the economy. I’m not talking only about legal minimum, but social ones: children in rich societies “need” mobile phones and designer clothes, adolescents “need” cars, etc.
So having children tends to become more expensive in richer societies, even absent explicit legal limits like in China, at least in wide swaths of those societies. (This is a personal observation, not a proof. Exceptions exist. YMMV. “Satisfaction guaranteed” is not a guarantee.)
The legal minimum care requirement is a good point. With the social minimum: I recognize that this meme exists but it doesn't seem like there are very high costs to disobeying it. If I'm part of a religion with an anti-materialist streak and those in my religious community aren't buying their children designer clothes either... I can't think of what kind of penalty would ensue (whereas not bathing or feeding your children has all sorts of costs if an outsider finds out). It seems better to think of this as a meme which competes with "Reproduce a lot" for resources rather than as a penalty for defection.
Your observation is a good one though.
Yes, for a while. The simplest factor driving this is exponentially more laborers. Then there's better technology of all sorts. Still, after a certain point we start hitting hard limits.
(a) Is this guaranteed to happen, a human universal or is it a contingent feature of our culture?
(b) Even if it is guaranteed to happen, will the race be won by increasing population hitting hard limits, or populations lifting themselves out of poverty?
I believe it's a quite general phenomenon - Japan did it, Russia did it, USA did it, all of Europe did it, etc. It looks like a pretty solid rich=slower-growth phenomenon: http://en.wikipedia.org/wiki/File:Fertility_rate_world_map.PNG
And if there were a rich country which continued to grow, threatening neighbors, there's always nukes & war.
I think "hard limits" is the wrong way to frame the problem. The only limits that appear truly unbeatable to me right now are the amounts of mass-energy and negentropy in our supergalactic neighborhood, and even those limits may be a function of the map, rather than the territory.
Other "limits" are really just inflection points in our budget curve; if we use too much of resource X, we may have to substitute a somewhat more costly resource Y, but there's no reason to think that this will bring about doom.
For example, in our lifetime, the population of Earth may expand to the point where there is simply insufficient naturally occurring freshwater on Earth to support all humans at a decent standard of living. So, we'll have to substitute desalinized oceanwater, which will be expensive -- but not nearly as expensive as dying of drought.
Likewise, there are only so many naturally occurring oxygen atoms in our solar system, so if we keep breathing oxygen, then at a certain population level we'll have to either expand beyond the Solar System or start producing oxygen through artificial fusion, which may cost more energy than it generates, and thus be expensive. But, you know, it beats choking or fighting wars over a scarce resource.
There are all kinds of serious economic problems that might cripple us over the next few centuries, but Malthusian doom isn't one of them.
It's true that many things have substitutes. All these limits are soft in the sense that we can do something else, and the magic of the market will select the most efficient alternative. At some point this may be no kids, rather than desalinization plants, however, cutting off the exponential growth.
(Phosphorus will be a problem before oxygen. Technically, we can make more phosphorus, and I suppose the cost could go down with new techniques other than "run an atom smasher and sort what comes out".)
But there really are hard limits. The volume we can colonize in a given time goes up as (ct)^3. This is really, really, really fast. Nonetheless, the required volume for an exponentially expanding population goes as e^(lambda t), and will eventually get bigger than this. (I handwave away relativistic time dilation -- it doesn't truly change anything.)
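The cubic-versus-exponential race is easy to check numerically. A toy sketch (the 1% growth rate and the units with c = 1 are arbitrary assumptions):

```python
# Sketch of the hard limit above: colonizable volume grows as (ct)^3,
# while an exponentially growing population needs e^(lambda*t) volume.
# The exponential always wins eventually, whatever the constants.
import math

def first_overflow_year(annual_growth=0.01, persons_per_unit_volume=1.0):
    """First year t at which required volume e^(g*t) exceeds
    reachable volume t^3 (toy units with c = 1)."""
    g = math.log(1 + annual_growth)
    t = 10.0  # start past the small-t region where exp > t^3 trivially
    while math.exp(g * t) / persons_per_unit_volume <= t**3:
        t += 1
    return int(t)

# Even at a modest 1% growth rate the exponential overtakes the cube
# within a few millennia:
print(first_overflow_year(0.01))
```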
Or, more precisely, fewer kids. I don't insist that we're guaranteed to switch to a lower birth rate as a species, but if we do, that's hardly an outcome to be feared.
Fascinating. That sounds right; do you know where in the Solar System we could try to 'mine' it?
Not until we start getting close to relativistic speeds. I couldn't care less about the time dilation, but for the next few centuries, our maximum cruising speed will increase with each new generation. If we can travel at 0.01 c, our kids will travel at 0.03 c, and so on for a while. Since our cruising velocity V is increasing with t, the effective volume we colonize per generation increases at more than (ct)^3. We should also expect to sustainably extract more resources per unit volume as time goes on, due to increasing technology. Finally, the required resources per person are not constant; they decrease as population increases because of economies of scale, economies of scope, and progress along engineering learning curves. All these factors mean that it is far too early to confidently predict that our rate of resource requirements will increase faster than our ability to obtain resources, even given the somewhat unlikely assumption that exponential population growth will continue indefinitely. By the time we really start bumping up against the kind of physical laws that could cause Malthusian doom, we will most likely either (a) have discovered new physical laws, or (b) have changed so much as to be essentially non-human, such that any progress human philosophers make today toward coping with the Malthusian problem will seem strange and inapposite.
Sam Harris gave a TED talk a couple months ago, but I haven't seen it linked here. The title is Science can answer moral questions.
My reaction was: bad talk, wrong answers, not properly thought through.
I'm always impressed by Harris's eloquence and clarity of thought.
He discusses that science can answer factual questions, thus resolving uncertainty in moral dogma defined conditionally on those answers. This is different from figuring out moral questions themselves.
That isn't all he is claiming though:
It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.
Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.
As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.
Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.
Discuss.
Telling people frantically about problems that are not on a very short list of "approved emergencies" like fire, angry mobs, and snakes is a good way to get people to ignore you, or, failing that, to dislike you.
It is only very recently (in evolutionary time) that ordinary people are likely to find important solutions to important social problems in a context where those solutions have a realistic chance of being implemented. In the past, (a) people were relatively uneducated, (b) society was relatively simpler, and (c) arbitrary power was held and wielded relatively more openly.
Thus, in the past, anyone who was talking frantically about social reform was either hopelessly naive, hopelessly insane, or hopelessly self-promoting. There's a reason we're hardwired to instinctively discount that kind of talk.
You should present the easily implemented, obviously better solution at the same time as the problem.
If the solution isn't easy to implement by the person you're talking to, then cost/benefit analysis may be in favor of the status quo, or you might be talking to the wrong person. If the solution isn't obviously better, then it won't be very convincing as a solution, or you might not have considered all the options for solving the problem. And if there is no solution, then why complain?
Some fantastic singularity-related jokes here:
http://crisper.livejournal.com/242730.html
Voted up for having jokes with cautionary power, and not just amusement value.
I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.
First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.
ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.
Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
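For what it's worth, the update being questioned can be written out explicitly. A toy sketch under the self-sampling assumption (the hypothesis sizes and the 50/50 prior below are made up for illustration):

```python
# The doomsday step the comment questions, made explicit: under the
# self-sampling assumption, P(birth rank = x | total N) = 1/N for
# x <= N, so observing your rank updates you toward small N.
def posterior_small(x, n_small, n_big, prior_small=0.5):
    """Posterior probability of the 'few total humans' hypothesis
    after observing birth rank x (requires x <= n_small <= n_big)."""
    like_small = 1.0 / n_small   # P(rank x | N = n_small)
    like_big = 1.0 / n_big       # P(rank x | N = n_big)
    num = like_small * prior_small
    return num / (num + like_big * (1 - prior_small))

# Being the ~60-billionth human strongly favors N = 1e11 over N = 1e14:
print(posterior_small(60e9, 100e9, 100e12))  # ~0.999
```

This makes the commenter's objection concrete: the force of the argument comes entirely from treating "I am the Xth human" as a draw from the uniform distribution over ranks, which is exactly the assumption in dispute.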
The real problem with anthropic reasoning is that it's just a default starting point. We are tricked because it seems very powerful in contrived thought experiments in which no other evidence is available.
In the real world, in which there is a wealth of evidence available, it's just a reality check saying "most things don't last forever."
In real world situations, it's also very easy to get into a game of reference class tennis.
It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:
LessWrong is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect; absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.
Who knows, maybe the author will even decide to decloak and tell us who to thank?
Magnificent. (I've sent it to some of my friends, most of whom are thoroughly enjoying it too; many of them are into Harry Potter but not advanced rationalism, so maybe it will turn some of them on to the MAGIC OF RATIONALITY!)
Edit: Sequel idea which probably only works as a title: "Harry Potter and the Prisoner's Dilemma of Azkaban". Ohoho!
Edit 2: Also on my wishlist: Potter-Evans-Verres Puppet Pals.
I could see that working as a prison mechanism, actually. Azkaban would be an ironic prison, akin to Dante's contrapasso. (The book would be an extended treatise on decision theory.)
The reward for both inmates cooperating is escape from Azkaban, the punishment really horrific torture, and the inmates are trapped as long as they are conniving cheating greedy bastards - but no longer.
(The prison could be like a maze, maybe, with all sorts of different cooperation problems - magic means never having to apologize for Omega.)
So if one prisoner cooperates and the other defects, then the defector goes free and the cooperator doesn't? That doesn't sound very effective for keeping conniving cheating greedy bastards in prison.
I'm 98% confident it's Eliezer. He's been taunting us about a piece of fanfiction under a different name on fanfiction.net for some time. I guess this means I don't have to bribe him with mashed potatoes to get the URL after all.
Edit: Apparently, instead, I will have to bribe him with mashed potatoes for spoilers. Goddamn WIPs.
I know, right? This would have been a wonderful story for me to read 10 years ago or so, and not just because now I'm having difficulty explaining to my girlfriend why I spent friday night reading a Harry Potter fanfic instead of calling her...
Yeah, I don't think I can plausibly deny responsibility for this one.
Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...
Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.
Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.
Holy fucking shit that was awesome.
This Harry is so much like Ender Wiggin.
Really? I picture him looking like a younger version of this.
This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.
Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).
And the last chapter's disclaimer?
If the parallels aren't intentional I'm going insane.
And going back a few chapters, I'm betting that what Harry saw as wrong with himself is hair-trigger rage.
I normally read within {nonfiction} U {authors' other works} but I had such a blast with Methods of Rationality that I might try some more fiction.
I like all of Eliezer's fiction... if you want more like this, see the pseudo-sequel, http://lesswrong.com/lw/18g/the_finale_of_the_ultimate_meta_mega_crossover/ It is too insane of a story to recommend to most people, but assuming you've read Eliezer's non-fiction, you can jump right in.
Otherwise, just about all of Eliezer's fiction is worth reading; Three Worlds Collide is his best work of fiction.
This story reminded me distinctly of Harry Potter and the Nightmares of Futures Past -- you might enjoy that one. Harry works until he's 30 to kill Voldemort, and by the time he succeeds, everyone he loves is dead. He comes up with a time travel spell that breaks if the thing being transported has any mass, so he kills himself, and lets his soul do the travelling. 30-year-old Harry's soul merges with 11-year-old Harry, and a very brilliant, very prepared, very powerful, and deeply disturbed young wizard enters Hogwarts.
I've finished reading that.
It's very well written technically - better than Eliezer, who overindulges in speechifying, hyperbole, and italics - but in general Harry doesn't seem disturbed enough, heals too easily, and there are too few repercussions from his foreknowledge. (Snape leaving and usurping Karkaroff at Durmstrang seems to be about it.)
That, and the author may never finish, which is so frustrating an eventuality that I'm not sure I could recommend it to anyone.
AH... spoiler!
lol
This is a lot of fun so far, though I think McGonagall was in some ways more in the right than Harry in chapter 6. Also, I kind of feel like Draco's behavior here is a bit unfair to the wizarding world as portrayed in the canon - the wizarding world is clearly not at all medieval in many ways (especially in the treatment of women, where the behavior we actually see is essentially modern), so I'm not sure why it should necessarily be so in that way. Regardless of my nitpicking, it's a brilliant fanfic, and it's nice to see muggle-world ideas enter the wizarding world (which always seemed like it should have happened already).
Also, there is a sharply limited supply of people who speak Japanese, Hebrew, English, math, rationality, and fiction all at once. If it wasn't you, it was someone making a concerted effort to impersonate you.
I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!
Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)
It gets a strong vote of approval from my girlfriend. She made it about halfway through Three Worlds Collide without finishing, for comparison. We'll see if I can get my parents to read this one...
Edit: And I think this is great. Looking forward to when Harry crosses over to the universe of the Ultimate Meta Mega Crossover.
Let's make that a Prediction. Harry becomes the ultimate Dark Lord by destroying the universe and escaping to the Metametaverse of the Ultimate Meta Mega Crossover.
You also have the approval of several Tropers, only one of whom is me.
Do I have to guess right? ;)
There is a reason I didn't look for it. It isn't done. Having found it anyway via link above, of course I read it because I have almost no self-control, but I didn't look for it!
Are you sure you wouldn't rather have the mashed potatoes? There's a sack of potatoes in the pantry. I could mash them. There's also a cheesecake in the fridge... I was thinking of making soup... should I continue to list food? Is this getting anywhere?
No, no, it's not Eliezer.
It's an alternate personality, which acts exactly the same and shares memories, that merely believes it's Eliezer.
Sounds like an Eliezer to me.
like an Eliezer, yes.
For the record, it's currently the first Google autocomplete result for "harry potter and the me", with apparently multiple pages of forum posts and such about it.
Harry Potter as a boy genius smart-aleck aspiring rationalist works surprisingly well. And the idea of extending the pull of rationalism a bit beyond its standard sci-fi hunting grounds using Harry Potter fanfiction is brilliant.
Fb, sebz gur cbvag bs ivrj bs na Nygreangr-Uvfgbel, V nffhzr gur CBQ vf Yvyyl tvivat va naq svkvat Crghavn'f jrvtug ceboyrz. Gung jbhyq graq gb vzcebir Crghavn'f ivrj bs ure zntvpny eryngvirf, naq V nffhzr gur ohggresyvrf nera'g rabhtu gb fnir Wnzrf naq Yvyyl sebz Ibyqrzbeg. Tvira gur infgyl vapernfrq vagryyvtrapr bs Uneel, V nffhzr ur vf abg trargvpnyyl gur fnzr puvyq jr fnj va gur obbxf, nygubhtu vzcebirq puvyqubbq ahgevgvba pbhyq nyfb or n snpgbe.
The probability of magic is low enough that any effort spent testing the hypothesis is unjustified. The dogma that theories should be tested no matter how improbable is generally incorrect. (One should distinguish improbable from silly, though.)
You have not taken into account that testing magical hypotheses may be categorized as "play" and pay its rent on time and effort accordingly.
Then this activity shouldn't be rationalized as being the right decision specifically for the reasons associated with the topic of rationality. For example, the father dismissing the suggestion to test the hypothesis is correct, given that the mere activity of testing it doesn't present him with valuable experience.
You've just taken the conclusion presented in the story, and wrote above it a clever explanation that contradicts the spirit of the story.
I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?
Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
This (injunction?) is equivalent to ascribing much higher probability to the hypothesis (magic) than it deserves. It might be a good injunction, but we should realize that, at the same time, it asserts the inability of people to correctly judge the impossibility of such hypotheses. That is, this rule suggests that the probability of some hypothesis that managed to make it into your conscious thought isn't (shouldn't be believed to be) 10^-[gazillion], even if you believe it is 10^-[gazillion].
I guess it depends a bit on how you came to consider the proposition to be tested, but I’m not sure how to formalize it.
I wouldn’t waste a moment’s attention in general to some random person proposing anything like this. But if someone like my mother or father, or a few of my close friends, suddenly came with a story like this (which, mark you, is quite different from the usual silliness), I would spend a couple of minutes doing a test before calling a psychiatrist. (Though I’d check the calendar first, in case it’s April 1st.)
Especially if I were about that age. I was nowhere near as bright and well-read as rationalist!Harry at that age (nor am I now). I read a lot, though, and I had a pretty clear idea of the distinction between fact and fiction, but I remember I just didn't have enough practical experience to classify new things as likely true or false at a glance.
I remember at one time (between 8 and 11 years old) I was pondering the feasibility of traveling to Florida (I grew up in Eastern Europe) to check if Jules Verne’s “From the Earth to the Moon” was real or not, by asking the locals and looking for remains of the big gun. It wasn’t an easy test, so I concluded it wasn’t worth it. However, I also remember I did check if I had psychic powers by trying to guess cards and the like; that took less than two minutes.
The probability that you have no grasp on the situation is high enough to justify an easy, simple, harmless test.
And I'd appreciate it if spoilers for the story were ROT13'd or something - I haven't read it.
You mean the plot point that Harry Potter tested the Magic hypothesis? I don't think most plot points in the introductions of stories really count as spoilers.
Yeah, that's not a spoiler any more than "Obi-Wan Kenobi is a Jedi" is a spoiler.
A "Jedi"? Obi-Wan Kenobi?
I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.
One of the goals was to get his parents to stop fighting over whether or not magic was real.
How would it work? Since the expected outcome is that magic isn't real, we'd need to convince the believer (the mother) to disbelieve. An experiment is usually an ineffective means to that end. Rather, we'd need to mend her epistemology.
Well, Harry did spend some time making sure that this experiment would convince either of his parents if it went the appropriate way, though he had his misgivings. As a child who isn't respected by his parents, what better options does he have to stop the fight? (serious question)
It was strongly implied that some element of Harry's mind had skewed that prior dramatically. Perhaps his horcrux, perhaps infant memories, but either way it wasn't as you'd expect. Even for an eleven-year-old.
-- Al Gore on Futurama
Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.
Note that they were specifically looking for transfer effects. The specific tasks practised did themselves show improvements.
Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.
If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.
The only way I know to track karma changes is having an old tab with my Recent Comments visible and comparing it to the new one. That captures a lot of the change - >90% - but not the old threads.
I would love to know how hard it would be to have a "Recent Karma Changes" feed.
A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.
Available without the paywall from the author's home page.
It's also an argument in favor of using checklists.
Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively according to the Schrödinger equation. Some questions:
Does anyone have a good source for the technical background I will need to implement such a simulation? Specifically, more technical details of the Schrödinger equation (the Wikipedia article is unhelpful).
I imagine this will quickly become intractable as I try to simulate more complex systems with more particles. How quickly, though? Could I simulate, e.g., the interaction of two H_2 ions in a reasonable time (say, no more than a few hours)?
Surely others have tried this. Any links/references would be much appreciated.
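A minimal single-particle 1D sketch of this kind of simulation, using the standard split-step Fourier method rather than a piecewise-linear update (the grid size, potential, and initial wave packet below are arbitrary choices). On tractability: the wavefunction for N particles in 3D lives on a grid of size n^(3N), so cost grows exponentially with particle count, which is why even small-molecule simulations get expensive fast.

```python
# Minimal 1D time-dependent Schrodinger sketch (split-step Fourier
# method), in units hbar = m = 1. A Gaussian wave packet evolves in a
# harmonic potential; the norm should stay ~1 (the method is unitary).
import numpy as np

n, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
dt = 0.005

V = 0.5 * x**2                               # harmonic potential
psi = np.exp(-(x + 5.0)**2 / 2) * np.exp(1j * 2.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / n))  # normalize

half_V = np.exp(-0.5j * V * dt)              # half-step in potential
kinetic = np.exp(-0.5j * k**2 * dt)          # full step in kinetic term

for _ in range(1000):                        # evolve to t = 5
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

norm = np.sum(np.abs(psi)**2) * (L / n)
print(round(norm, 6))                        # ~1.0: unitary evolution
```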
A couple of articles on the benefits of believing in free will:
Vohs and Schooler, "The Value of Believing in Free Will"
Baumeister et al., "Prosocial Benefits of Feeling Free"
The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.
References from a Sci. Am. article.
[1] Cough.
ETA: This is also relevant.
Cool. Since a handful of studies suggest a narrow majority believe moral responsibility and determinism to be incompatible this shouldn't actually be that surprising. I want to know how people act after being exposed to statements in favor of compatibilism.
I'd like to plug a facebook group:
Once we reach 4,096 members, everyone will donate $256 to SingInst.org.
Folks may also be interested in David Robert's group:
1 million people, $100 million to defeat aging.
How to deal with a program that has become self aware? - April Fools on StackOverflow.
Rats have some ability to distinguish between correlation and causation.
the abstract
I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.
BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL but not show up in recent posts, so I don't have to post a draft elsewhere to get feedback before I officially publish it.
You can do better than the frequentist approach without using the "magic" universal prior. You can just use a prior that represents initial ignorance of the frequency at which the machine produces head-biased and tail-biased coins (dP(f) = df, i.e., uniform over f). If you want to look for repeating patterns, you can assign probability (1/2)(1/2^n) to the theory that the machine produces each type of coin at a frequency depending on the last n coins it produced. This requires treating a probability as a strength of belief, and not the frequency of anything, which is what (as I understand it) frequentists are not willing to do.
Note that the universal prior, if you can pull it off, is still better than what I described. The repeating-pattern-seeking prior will not notice, for example, if the machine makes head-biased coins on prime-numbered trials but tail-biased coins on composite-numbered trials. This is because it implicitly assigns probability 0 to that type of machine, which takes infinite evidence to update.
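For prediction, the uniform prior over the bias reduces to Laplace's rule of succession. A toy sketch of this standard result:

```python
# The uniform prior dP(f) = df over the bias f, updated on coin flips.
# With a Beta(1,1) (uniform) prior, the posterior after h heads and
# t tails is Beta(1+h, 1+t), so the predictive probability of heads
# is Laplace's rule of succession: (h + 1) / (h + t + 2).
def predictive_heads(heads, tails):
    """P(next flip is heads) under a uniform prior over the bias."""
    return (heads + 1) / (heads + tails + 2)

print(predictive_heads(0, 0))   # 0.5 before any evidence
print(predictive_heads(7, 3))   # ~0.667 after 7 heads, 3 tails
```

Note this is exactly a "strength of belief" probability: no trial frequency is being estimated, only the posterior over f integrated out.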
I second this feature request.
ETA: I did not notice earlier Steve Rayhawk made the same comment.
Seconded. See also JenniferRM on editorial-level versus object-level comments.
Consider "syntactic preference" as an order on an agent's strategies (externally observable possible behaviors, but in the mathematical sense, independently of what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. Then, this syntactic order can be given an interpretation, as in logic/model theory, for example by placing the "agent program" in an environment of all possible "world programs", and restating the order on possible agent's strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes for all world programs), depending on the agent.
This way, we first factor out the real world from the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in a form of any convenient mathematical structure, an interpretation of syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.
Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems.
The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before I resolve that question?
The second is that I don't see how you plan to differentiate within "syntactic preference", those that are true preferences, and those that are caused by computational limitations and/or hardware/software errors. Internally, the agent is computing the optimal strategy (as best as it can) from a preference that's stated in terms of "the real world" and maybe also in terms of subjective anticipation. If we could somehow translate those preferences directly into preferences on mathematical structures, we would be able to bypass those computational limitations and errors without having to single them out.
An important principle of FAI design to remember here is "be lazy!". For any problem that people would want to solve, where possible, FAI design should redirect that problem to FAI, instead of actually solving it in order to construct a FAI.
Here, you, as a human, may be interested in "nature of reality", but this is not a problem to be solved before the construction of FAI. Instead, the FAI should pursue this problem in the same sense you would.
Syntactic preference is meant to capture this sameness of pursuits, without understanding of what these pursuits are about. Instead of wanting to do the same thing with the world as you would want to, the FAI having the same syntactic preference wants to perform the same actions as you would want to. The difference is that syntactic preference refers to actions (I/O), not to the world. But the outcome is exactly the same, if you manage to represent your preference in terms of your I/O.
You may still know the process of discovery that you want to follow while doing what you call getting to know your own preference. That process of discovery gives definition of preference. We don't need to actually compute preference in some predefined format, to solve the conceptual problem of defining preference. We only need to define a process that determines preference.
This issue is actually the last conceptual milestone I've reached on this problem, just a few days ago. The trouble is how the agent would reason about the possibility of corruption of its own hardware. The answer is that human preference is to a large extent concerned with consequentialist reasoning about the world, so human preference can be interpreted as modeling the environment, including the agent's hardware. This is an informal statement, referring to the real world, but the behavior supporting this statement is also determined by formal syntactic preference that doesn't refer to the real world. Thus, just mathematically implementing human preference is enough to cause the agent to worry about how its hardware is doing (it isn't in any sense formally defined as its own hardware, but what happens in the agent's formal mind can be interpreted as recognizing the hardware's instrumental utility). In particular, this solves the issues of possible morally harmful impact of the FAI's computation (e.g. simulating tortured people and then deleting them from memory, etc.), and of upgrading the FAI beyond the initial hardware (so that it can safely discard the old hardware).
Once we implement this kind of FAI, how will we be better off than we are today? It seems like the FAI will have just built exact simulations of us inside itself (who, in order to work out their preferences, will build another FAI, and so on). I'm probably missing something important in your ideas, but it currently seems a lot like passing the recursive buck.
ETA: I'll keep trying to figure out what piece of the puzzle I might be missing. In the mean time, feel free to take the option of writing up your ideas systematically as a post instead of continuing this discussion (which doesn't seem to be followed by many people anyway).
FAI doesn't do what you do; it optimizes its strategy according to preference. It's more able than a human to form better strategies according to a given preference, and even failing that it still has to be able to avoid value drift (as a minimum requirement).
Preference is never seen completely; there is always loads of logical uncertainty about it. The point of creating a FAI is in fixing the preference so that it stops drifting, so that the problem that is being solved is held fixed, even though solving it will take the rest of eternity; and in creating a competitive preference-optimizing agent that ensures the preference will fare OK against possible threats, including different-preference agents or value-drifted humanity.
Preference isn't defined by an agent's strategy, so copying a human without some kind of self-reflection I don't understand is pretty pointless. Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.
FAI is not built without exact and complete definition of preference. The uncertainty about preference can only be logical, in what it means/implies. (At least, when we are talking about syntactic preference, where the rest of the world is necessarily screened off.)
Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind.
I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.
After reading Nesov's latest posts on the subject, I think I better understand what he is talking about now. But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.
Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?
What other approaches do you refer to? This is just the direction my own research has taken. I'm not confident it will lead anywhere, but it's the best road I know about.
I have some ideas, though too vague to usefully share (I wrote about a related idea on the SIAI decision theory list, replying to Drescher's bounded Newcomb variant, where a dependence on strategy is restored from a constant syntactic expression in terms of source code). For "semantic preference", we have the ontology problem, which is a complete show-stopper. (Though as I wrote before, interpretations of syntactic preference in terms of formal "possible worlds" -- now having nothing to do with the "real world" -- are a useful tool, and it's the topic of the next blog post.)
At this point, syntactic preference (1) solves the ontology problem, (2) gives focus to investigation of what kind of mathematical structure could represent preference (strategy is a well-understood mathematical structure, and syntactic preference is something that allows computing a strategy, with better strategies resulting from more computation), and (3) gives a more technical formulation of the preference extraction problem, so that we can think about it more clearly. I don't know of another effort towards clarifying/developing preference theory (that reaches even this meager level of clarity).
Returning to this point, there are two show-stopping problems: first, as I pointed out above, there is the ontology problem: even if humans were able to write out their preference, the ontology problem makes the product of such an effort rather useless; second, we do know that we can't write out our preference manually. Figuring out an algorithmic trick for extracting it from human minds automatically is not out of the question, hence worth pursuing.
P.S. These are important questions, and I welcome this kind of discussion about general sanity of what I'm doing or claiming; I only saw this comment because I'm subscribed to your LW comments.
Correct. Note that "strategy" is a pretty standard term, while "I/O map" sounds ambiguous, though it emphasizes that everything except the behavior at I/O is disregarded.
An agent is more than its strategy: strategy is only external behavior, normal form of the algorithm implemented in the agent. The same strategy can be implemented by many different programs. I strongly suspect that it takes more than a strategy to define preference, that introspective properties are important (how the behavior is computed, as opposed to just what the resulting behavior is). It is sufficient for preference, when it is defined, to talk about strategies, and disregard how they could be computed; but to define (extract) a preference, a single strategy may be insufficient, it may be necessary to look at how the reference agent (e.g. a human) works on the inside. Besides, the agent is never given as its strategy, it is given as its source code that normalizes to that strategy, and computing the strategy may be tough (and pointless).
An extensive observation-based discussion of why people leave cults Worth reading, not just for the details, but because it's made very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!
People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove a person into a cult get resolved, and/or life changes show that the cult isn't working for them.
I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).
Extensive treatment and relation to other epistemic paradoxes here.
Let's not forget that the clever student will indeed be very surprised by a test on any day, since he thinks he's proven that he won't be surprised by tests on those days. It seems he made an error in formalizing 'surprise'.
(imagine how surprised he'll be if the test is on Friday!)
Why not give a test on Monday, and then give another test later that day? I bet they would be surprised by a second test on the same day.
...and yet...
Probably.
If a 50% chance of having a test that day would leave a student surprised, the teacher can be 87.5% confident in being able to fulfill his assertion.
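The 87.5% figure is just the arithmetic of three independent coin flips. A minimal sketch, assuming the teacher flips a fair coin on each of three candidate days and a test preceded by only a 50% expectation counts as a "surprise":

```python
# The teacher fails to produce a surprise test only if all three
# independent fair coin flips say "no test" (probability 0.5 each).
p_no_test_all_days = 0.5 ** 3   # = 0.125
confidence = 1 - p_no_test_all_days
print(confidence)  # 0.875
```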
However, if the teacher was a causal decision agent then he would not be able to provide a surprise test without making the randomization process public (or a similar precommitment).
The problem with choosing a day at random is, what if it turns out to be Friday? Friday would not be a surprise, since the test will be either Monday, Wednesday or Friday, and so by Thursday the students would know by process of elimination that it had to be Friday.
Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.
Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I've got concerns that I have anchored at a point too close to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about difference in social tier. Early in my college time, this sort of problem led me to reject the death penalty on practical grounds. Then, as I lost my religious views, I stopped seeing it as a punishment at all. I started to see it as the same basic thing as putting down an aggressive dog. After all, dead people have a pretty encouraging recidivism rate.
I began to wonder if I could reject the death penalty on principle. A large swath of America believes that the words of the Declaration of Independence are as pertinent to our country as the Constitution. This would mean that we could disallow execution because it conflicts with our "inalienable" right to life. But then, I can't justify using the same argument as the people who try to prove that America is a Christian nation. As an interesting corollary, it seems that anyone citing the Declaration in this manner will have a very hard time also supporting the death penalty for this reason.
So basically, I think I would find the death penalty morally acceptable, but only in the hypothetical realm of virtual certainty that the inmate is guilty of a heinous crime. And I have no bound for what that virtual certainty is. Certainly a 5% chance of being falsely accused is too high. I wouldn't kill one innocent man to rid the world of 19 bad ones. But then, I would kill an innocent person to stop a billion headaches (an example I just read in Steven Landsburg's The Big Questions), so I obviously don't demand 100% certainty.
It seems like I might be asking: "What are the chances that someone was falsely accused, given that they were accused of an execution-worthy crime?" And a follow-up "What is an acceptable chance for killing an innocent person?"
Can Bayes help here? I am eager to hear some actual opinions on this matter. So far I've come up with precious little when talking to friends and family.
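Bayes can at least structure the first question. A hedged sketch with entirely hypothetical numbers (the three input rates below are illustrative placeholders, not estimates of the actual justice system): Bayes' rule gives P(innocent | convicted) from a prior rate of innocence among the accused and the conviction rates for the innocent and the guilty.

```python
# All numbers hypothetical, for illustration only.
p_innocent = 0.05                # prior: fraction of accused who are innocent
p_convict_given_innocent = 0.10  # hypothetical wrongful-conviction rate
p_convict_given_guilty = 0.80    # hypothetical true-conviction rate

# Total probability of conviction, then Bayes' rule.
p_convict = (p_convict_given_innocent * p_innocent
             + p_convict_given_guilty * (1 - p_innocent))
p_innocent_given_convict = p_convict_given_innocent * p_innocent / p_convict
print(round(p_innocent_given_convict, 4))
```

The real work, of course, is in estimating the three input numbers; the second question (what rate is acceptable) is a values question that Bayes alone can't answer.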
My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?
Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit stronger, but the evidence is unclear on that. Provides closure to the victim's family? Execution seems like more definitive closure, but they have to wait until years after sentencing to get it.
The criminal justice system is a big important topic, and I think it's too bad that this little piece of it (capital punishment) soaks up so much of our attention to it. Overall, my stance on capital punishment is ambivalent, leaning against it because it's not worth the trouble, though in some cases (like McVeigh) it's nice to have around and I could be swayed by a big deterrent effect. I'd prefer for more of the focus to be on this sort of thing (pdf).
Good post. I have never seen strong evidence that the death penalty has a meaningful deterrent effect but I'd be curious to see links one way or the other.
I lean towards prison abolition, but it's an idealistic notion, not a pragmatic one. I suppose we could start by getting rid of prisons for non-violent crimes and properly funding mental hospitals. http://en.wikipedia.org/wiki/Prison_abolition_movement I can't see that happening when we can't even decriminalize marijuana.
There is strong Bayesian evidence that the USA has executed one innocent man. http://en.wikipedia.org/wiki/Cameron_Todd_Willingham By that I mean that an Amanda Knox test type analysis would clearly show that Willingham is innocent, probably with greater certainty than when the Amanda Knox case was analyzed. Does knowing that the USA has indeed provably executed an innocent person change your opinion?
What are the practical advantages of death over life in prison? US law allows for true life without parole. Life in an isolated cell in a Supermax prison is continual torture -- it is not a light punishment by any means. Without a single advantage given for the death penalty over life in prison without parole, I think that ~100% certainty is needed for execution.
I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.
Kevin, thank you for the specific example. It definitely strengthened my practical objection to the practice. I strongly suspect that the current number of false positives lies outside of my acceptance zone.
Rain, I agree that politics is a mind-killer, but thought it worthy of at least brushing the cobwebs off some cached thoughts. Good point about Nitrogen. I wonder why we choose gruesome methods when even CO would be cheap, easy and effective.
Morendil, I appreciate the other questions. You have a good point that if Omega were brought in on the justice system, it would definitely find better corrective measures than the kill command. I think Eliezer once talked about how predicting your possible future decisions is basically the same as deciding. In that case, I already changed many things on this Big Question, and am just finally doing what I predicted I might do last time I gave any thought to capital punishment. Which happened to be at the conclusion (if there is such a thing) of a murder trial where my friend was a victim. Lots of bias to overcome there, methinks.
Unnamed, interesting points. I hadn't actually considered how similar life imprisonment is to execution, with regard to the pertinent facts. I was recently introduced to the concept of restorative justice which I think encompasses your article. I find it particularly appealing because it deals with what works, instead of worthless Calvinist ideals like punishment. From my understanding, execution only fulfills punishment in the most trivial of senses.
"Crimes against humanity" is one of the crimes that for most practical purposes means "... and lost".
The more judicious question, I am coming to realize, isn't so much "Which of these two Standard Positions should I stand firmly on".
The more useful question is, why do the positions matter? Why is the discussion currently crystallized around these standard positions important to me, and how should I fluidly allow whatever evidence I can find to move me toward some position, which is rather unlikely (given that the debate has been so long crystallized in this particular way) to be among the standard ones. And I shouldn't necessarily expect to stay at that position forever, once I have admitted in principle that new evidence, or changes in other beliefs of mine, must commit me to a change in position on that particular issue.
In the death-penalty debate I identify more strongly with the "abolitionist" standard position because I was brought up in an abolitionist country by left-wing parents. That is, I find myself on the opposite end of the spectrum from you. And yet, perhaps we are closer than is apparent at first glance, if we are both of us committed primarily to investigating the questions of values, the questions of fact, and the questions of process that might leave either or both of us, at the end of the inquiry, in a different position than we started from.
Would I revise my "in principle" opposition to the death penalty if, for instance, the means of "execution" were modified to cryonic preservation? Would I then support cryonic preservation as a "punishment" for lesser crimes such as would currently result in lifetime imprisonment?
Would I still oppose the death penalty if we had a Truth Machine? Or if we could press Omega into service to give us a negligible probability of wrongful conviction? Or otherwise rely on a (putatively) impartial means of judgment which didn't involve fallible humans? Is that even desirable, if it was at all possible?
Would I support the death penalty if I found out it was an effective deterrent, or would I oppose it only if I found that it didn't deter? Does deterrence matter? Why, or why not?
How does economics enter into such a decision? How much, whatever position I arrive at, should I consider myself obligated to actively try to ensure that the society I live in espouses that position? For what scope of "the society I live in" - how local or global?
Those are topics and questions I encounter in the process of thinking about things other than the death penalty; practically every important topic has repercussions on this one.
There's an old systems science saying that I think applies to rational discussions about Big Questions such as this one: "you can't change just one thing". You can't decide on just one belief, and as I have argued before, it serves no useful purpose to call an isolated belief "irrational". It seems more appropriate to examine the processes whereby we adjust networks of beliefs, how thoroughly we propagate evidence and argument among those networks.
There is currently something of a meta-debate on LW regarding how best to reflect this networked structure of adjusting our beliefs based on evidence and reasoning, with approaches such as TakeOnIt competing against more individual debate modeling tools, with LessWrong itself, not so much the blog but perhaps the community and its norms, having some potential to serve as such a process for arbitrating claims.
But all these prior discussions seem to take as a starting point that "you can't change just one belief". That's among the consequences of embracing uncertainty, I think.
Yeah, that's why I try to avoid hot topics. Too much work.
Well, even relatively uncontroversial topics have the same entangled-with-your-entire-belief-network quality to them, but (to most people) less power to make you care.
The judicious response to that is to exercise some prudence in the things you choose to care about. If you care too much about things you have little power to influence and could easily be wrong about, you end up "mind-killed". If you care too little and about too few things except for basic survival, you end up living the kind of life where it makes little difference how rational you are.
The way it's worked out for me is that I've lived through some events which made me feel outraged, and for better or for worse the outrage made me care about some particular topics, and caring about these topics has made me want to be right about them. Not just to associate myself with the majority, or with a set of people I'd pre-determined to be "the right camp to be in", but to actually be right.
Standard response: politics is the mind-killer.
Personal response: I'm opposed to the death penalty because it costs more than putting them in prison for life due to the huge number of appeals they're allowed (vaguely recall hearing in newspapers / reports). I feel the US has become so risk-averse and egalitarian that it cannot properly implement a death penalty. This is reflected in the back-and-forth questions you ask.
I also oppose it on the grounds that it is often used as a tool of vengeance rather than justice. Nitrogen asphyxiation (I think that was the gas they were talking about) is a safe, highly reliable, and euphoric means of death, but the US still prefers electrocution (can take minutes), injection (can feel like the veins are burning from the inside out while the body is paralyzed), etc.
That said, I don't care enough about the topic to try and alter its use, whether through voting, polling, letters, etc, nor do I desire to put much thought into it. Best to let hot topics alone.
And after asking about Bayes, you should ask for math rather than opinions.
Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe. Like, what kind of things follow normal distributions and why, why do power laws emerge everywhere, why scale-free networks all over the place, etc. etc.
Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.
For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:
Gauss Is Not Mocked
So You Think You Have a Power Law — Well Isn't That Special?
Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line
Power-law distributions in empirical data
Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169
Another very relevant and readable paper:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
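One concrete point from this literature is methodological: fitting a straight line to a log-log plot is exactly the practice these papers criticize; the maximum-likelihood (Hill) estimator is the standard alternative. A small sketch on synthetic data (the seed and parameters are arbitrary choices for illustration):

```python
import math
import random

# Hedged sketch: estimate a power-law exponent by maximum likelihood
# rather than by regressing on a log-log plot. Data here are synthetic,
# drawn from a pure power law p(x) ~ x^(-alpha) for x >= xmin.
random.seed(0)
alpha, xmin, n = 2.5, 1.0, 100_000

# Inverse-transform sampling from the power law.
xs = [xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

# Continuous MLE for the exponent: alpha_hat = 1 + n / sum(ln(x / xmin))
alpha_hat = 1 + n / sum(math.log(x / xmin) for x in xs)
print(round(alpha_hat, 2))
```

On real data the harder questions are choosing xmin and testing the power law against alternatives like the lognormal, which is where the papers above come in.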
That gives a whole new meaning to Mar's Law.
Thank you, I never knew this fallacy had its own name, and I have been annoyed by it for ages. Actually, since 2003, when I was working on one of the first online social network services (iwiw.hu). The structure of the network contradicted most of the claims made by the then-famous popular science books on networks: not scale-free (not even truncated power-law), not attack-sensitive, and most of the edges were strong links. Looking at the claims of the original papers instead of the popular science books, the situation was not much better.
You could try "Ubiquity" by Mark Buchanan for the power law stuff, but it's been a while since I read it, so I can't vouch for it completely. (Confusingly, Amazon lists three books with that title and different subtitles, all by that author, all published around 2001-2002.)
My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?
ETA: Their father is non-religious. I don't know why he's putting up with this.
Introduce them to really cool, socially near, atheists. In particular, provide contact with attractive opposite-gender children who are a couple of years older and are atheists.
Teach them the basics of Bayesian reasoning without any connection to religion. This will help them in more ways and will lay the foundation for later, when they naturally start questioning religion. Also, their parents won't have anything against it if you merely introduce it as a method for physics or chemistry, or with the standard medical examples.
I'm not speaking from experience here, but that doesn't stop me from having opinions.
I don't believe this is an emergency. Are the kids' lives being affected negatively by the religion? What do they think of what they're being taught?
Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?
Their minds aren't a battlefield between you and religious school-- what they believe is, well not exactly their choice because people aren't very good at choosing, but more their choice than yours.
I recommend teaching them a little thoughtful cynicism, with advertisements as the subject matter.
I haven't seen any evidence that they're being bothered by anything.
Mostly, I just want to make it clear that, unlike a lot of other things they're learning in school, there are a lot of people who have good reasons to think the stories aren't true - to make it clear that there's a difference between "Moses led the Jews out of Egypt" and "George Washington was the first President of the United States."
Possibly introducing them to some of the content in A Human's Guide to Words, such as dissolving the question, would lead them to theological noncognitivism. The nice thing about that as opposed to direct atheism is it's more "insidious" because instead of saying, "I don't believe" the kids would end up making more subtle points, like, "What do you even mean by omnipotent?" This somehow seems a lot less alarming to people, so it might bother the parents much less, or even seem like "innocent" questioning.
I wouldn't proselytize too directly - you want to stay on their (and their mother's) good side, and I doubt it would be very effective anyways. You're better off trying to instill good values - open-mindedness, curiosity, ability to think for oneself, and other elements of rationality & morality - rather than focusing on religion directly. Just knowing an atheist (you) and being on good terms with him could help lead them to consider atheism down the road at some point, which is another reason why it's important to maintain a good relationship. Think about the parallel case of religious relatives who interfere with parents who are raising their kids non-religiously - there are a lot of similarities between their situation and yours (even though you really are right and they just think they are) and you could run into a lot of the same problems that they do.
I haven't had the chance to try it out personally, but Dale McGowan's blog seems useful for this sort of thing, and his books might be even more useful.
I think that's some very good advice, and I'd like to elaborate a bit. The thing that made me ditch my religion was the fact that I already had a secular, socially liberal, science-friendly worldview, and it clashed with everything they said in church. That conflict drove my de-conversion, and made it easier for me to adjust to atheism. (I was even used to the idea, from most of my favorite authors mentioning that they weren't religious. Harry Harrison, in particular, had explicitly atheistic characters as soon as his publishers would let him.)
So, yeah, subtlety is your friend here.
Dangerous situation!
How do the parents feel about science and science fiction? I believe that stuff has good effects.
One thing to do is make sure the kids understand that the Bible is just a bunch of stories. My mom teaches Reform Jewish Sunday school and makes this clear to her students. I make fun of her for cranking out little atheists.
Teaching that the bible is a bunch of stories written by multiple humans over time is not nearly as offensive as preaching atheism. Start there. This bit of knowledge should be enough to get your young relatives thinking about religion, if they want to start thinking about it.
US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin
Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?
The causes of "too big to fail" are:
1. Corporate personhood laws make it harder to punish the actual people in charge.
2. Problems in tort law (in the US) make it difficult to sue corporations for certain kinds of damages.
3. A large government (territorial monopoly of jurisdiction) makes it more profitable for any sufficiently large company to use the state as a bludgeon against its competitors (lobbying, bribes, friends in high places) instead of competing directly on the market.
Letting companies that waste resources go bankrupt causes short-term damage to the economy, but it is healthy in the long term because it allows more efficient companies to take over the tied-up talent and resources. Politicians care more about the short term than the long term.
For pharmaceutical companies there is an additional embiggening factor. Testing for FDA drug approval costs millions of dollars, which constitutes a huge barrier to entry for smaller companies. Hence the large companies can grow larger with little competition. This is amplified by factors 1 and 2, and factor 3 suggests that most of the competition among Big Pharma is over legislators and regulators, not market competition.
Disclosure: I am a "common law" libertarian (I find all monopolies counterproductive, including state governments).
I'd add trauma from the Great Depression (amplified by the Great Recession) which means that any loss of jobs sounds very bad, and (not related to the topic but a corollary) anything which creates jobs can be made to sound good.
Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.
It would have been even more frustrating had the protagonist not also been guessing the teacher's password. It seemed that the protagonist just had a better memory of what more authoritative teachers had said.
The protagonist was closer to being able to derive π himself, but that played no part in his argument.
The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio.
The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7.
The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.
These are important pieces of knowledge, and they are why I said that the protagonist was closer to being able to derive π himself.
The result only came out wrong relative to his own memorized teacher-password. Except for his memory of what the first five digits of π really were, he gave no argument that they weren't the same as the first five digits of 22/7.
Y'know, there's something this blogger I read once wrote that seems kinda applicable here:
I did not criticize the protagonist. He acted entirely appropriately in his situation. Trying to derive digits of π (by using Archimedes's method, say) would not have been an effective way to convince his teammates under those circumstances. In some cases, such as a timed exam, going with an accurately-memorized teacher-password is the best thing to do. [ETA: Furthermore, his and our frustration at his teammates was justified.]
But the fact remains that the story was one of conflicting teacher-passwords, not of deep knowledge vs. a teacher-password. Although the protagonist possessed deeper knowledge, and although he might have been able to reconstruct Archimedes's method, he did not in fact use his deeper knowledge in the argument to make 3.1415 more probable than the first five digits of 22/7.
Again, I'm not saying that he should have had to do that. But it would have made for a better anti-teacher-password story.
I see what you mean. I think the confusion we've had on this thread is over the loaded term "teacher's password" - yes, the question only asked for the password, but it would be less misleading to say that both the narrator and the schoolteachers had memorized the results, but the narrator did a better job of comprehending the reference material.
22/7 gives "something like" something like 3.1427 ?!? Surely it is more like some other things than that!
Well, yes - it's more like 3.142857 recurring. But that's fairly minor.
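For anyone who wants to check the arithmetic themselves, a quick stdlib-only sketch makes the disagreement concrete: 22/7 and π agree only through the third significant digit.

```python
from fractions import Fraction
import math

approx = Fraction(22, 7)  # the schoolteachers' value, kept exact

# Compare the two values at five significant digits:
print(f"22/7 = {float(approx):.4f}")  # rounds to 3.1429
print(f"pi   = {math.pi:.4f}")        # rounds to 3.1416

# They diverge in the third decimal place:
print(f"difference = {float(approx) - math.pi:.6f}")
```

Since 22/7 is rational, its decimal expansion repeats (142857 forever), whereas π's does not; the protagonist's irrationality argument rules out exact equality, and the direct comparison above settles which five-digit string is right.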
(Footnote: I originally thought the teachers had performed the division incorrectly, rather than the anonymous commenter having incorrectly recounted the number, so this comment was briefly incorrect.)
Quite depressing. Makes me even less likely to have my kids educated in the states. I wonder how bad Europe is on that count? Is it really better here? It can be hard to tell from inside; correcting for the fact that most info I get is biased one way or the other leaves me with pretty wide confidence intervals.
AAAAAIIIIIIIIEEEEEEEE
BOOM
Clearly, your math teacher biting powers are called for.
In first grade, I threw a crayon at the principal. Can I help? ;)
Let's not get too hasty. They still might know logarithms. ;)
Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.
I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my entire day around motivating an early bedtime, that often works, but at an unacceptably high cost; the point of going to bed early is to have more surplus time/energy, not to spend all of my time/energy on going to bed.
I am happy to test various hypotheses, but don't have a good sense of which hypotheses to promote or how to generate plausible hypotheses in this context.
Melatonin. Also, getting my housemates to harass me if I don't go to bed.
Mass_Driver's comment is kind of funny to me, since I had addressed exactly his issue at length in my article.
Which, I couldn't help but notice, you have thoughtfully linked to in your comment. I'm new here; I haven't found that article yet.
If you're not being sarcastic, you're welcome.
If you're being sarcastic, my article is linked, in Nick_Tarleton's very first sentence; it would be odd for me to simply say 'my article' unless some referent had been defined in the previous two comments, and there is only one hyperlink in those two comments.
Gwern, I apologize for the sarcasm; it wasn't called for. As I said, I'm new here, and I guess I'm not clicking "show more above" as much as I should.
However, a link still would have been helpful. As someone who had never read your article, I had no way of knowing that a link to "Melatonin" contained an extensive discussion about willpower and procrastination. It looked to me like a biological solution, i.e., a solution that was ignoring my real concerns, so I ignored it.
Having now read your article, I agree that taking a drug that predictably made you very tired in about half an hour could be one good option for fighting the urge to stay up for no reason, and I also think that the health risks of taking melatonin long-term -- especially at times when I'm already tired -- could be significant. I may give it a try if other strategies fail.
I strongly disagree, but I also dislike plowing through as enormous a literature as that on melatonin and effectively conducting a meta-study, since Wikipedia already covers the topic and I wouldn't get a top-level article out of such an effort, just some edits for the article (and old articles get few hits, comments, or votes, if my comments are anything to go by).
I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:
EDIT: Apparently keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity:
"...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)
That all sounds awfully biological -- are you sure fixing monitor light levels is a solution for akrasia?
What do you do instead of going to bed? I notice myself spending time on the Internet.
Either that or painting (The latter is harder to do because the cats tend to want to help me paint, yet don't get the necessity of oppose-able thumbs ... umm...Opposeable? Opposable??? anyway....)
Since I have had sleep disorders since I was 14, I've got lots of practice at not sleeping (pity there was no internet then)... So, I either read, draw, paint, sculpt, or harass people on the opposite side of the earth who are all wide awake.
Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?
Am I missing something here?
Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055
Hacker News rather than Reddit this time, which makes it a little easier.
I've been trying to do this since November for a close family member. So far the reaction has been fairly positive, but she has still not decided to go for it.
David Chalmers has written up a paper based on the talk he gave at 2009 Singularity Summit:
From the blog post where he announced the paper:
Rather sad to see Chalmers embracing the dopey "singularity" terminology.
He seems to have toned down his ideas about development under conditions of isolation:
"Confining a superintelligence to a virtual world is almost certainly impossible: if it wants to escape, it almost certainly will."
Still, the ideas he expresses here are not very realistic, IMO. People want machine intelligence to help them to attain their goals. Machines can't do that if they are isolated off in virtual worlds. Sure there will be test harnesses - but of course we won't keep these things permanently restrained on grounds of sheer paranoia - that would stop us from using them.
53 pages with only 2 mentions of zombies - yay.
http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies
"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."
Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong
Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.
It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are the best and worst case scenarios conditioning on Friendly AI being IMpossible?
Has this been addressed before? As a disclaimer, I haven't thought much about this and I suspect that I'm dressing up the problem in a way that sounds different to me only because I don't fully understand the implications.
First, define "friendly" in enough detail that I know that it's different from "will not blow up in our faces".
Such an eventuality would seem to require that (a) human beings are not computable or (b) human beings are not Friendly.
In the latter case, if nothing else, there is [individual]-Friendliness to consider.
I think human history has demonstrated that (b) is certainly true... sometimes I am surprised we are still here.
The argument from (b)* is one of the stronger ones I've heard against FAI.
* Not to be confused with the argument from /b/.
Incidentally, /b/ might be good evidence for (b). It's a rather unsettling demonstration of what people do when anonymity has removed most of the incentive for signaling.
I find chans' lack of signaling highly intellectually refreshing. /b/ is not typical - due to ridiculously high traffic only meme-infested threads that you can reply to in 5 seconds survive. Normal boards have far better discussion quality.
PDF: "Are black hole starships possible?"
This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.
I'm no physicist and not very math literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target inside a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
This might be interesting in combination with a "balanced drive". They were invented by science fiction author Charles Sheffield, who attributed them to his character Arthur Morton McAndrew, so they are sometimes also called a "McAndrew Drive" or a "Sheffield Drive".
The basic trick is to put an incredibly dense mass at the end of a giant pole such that the inverse square law of gravity is significant along the length of the pole. The ship flies "mass forward" through space. Then the crew cabin (and anything else incapable of surviving enormous acceleration) is set up on the pole so that the faster the acceleration, the closer it is to the mass. The cabin, flying "floor forward", changes its position while the floor flexes as needed so that the net effect of the ship's acceleration plus the force of gravity balance out to something tolerable. When not under acceleration you still get gravity in the cabin by pushing it out to the very tip of the pole.
The literary value of the system is that you can do reasonably hard science fiction and still have characters jaunt from star to star so long as they are willing to put up with the social isolation because of time dilation, but the hard part is explaining what the mass at the end of the pole is, and where you'd get the energy to move it.
If you could feed a black hole enough to serve as the mass while retaining the ability to generate Hawking radiation, that might do it. Or perhaps simply postulating technological control of quantum black holes and then use two in your ship: a big one to counteract acceleration and a small one to get energy from a "Crane-Westmoreland Generator".
I prefer links to the abstract, when possible.
http://arxiv.org/abs/0908.1803
The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:
5th View cafe on top of Waterstone's bookstore near Piccadilly Circus Sunday, April 4 at 4PM
Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.
EDIT: Sorry, Sunday, not Monday.
Found this entirely by chance - do a top level post?
Do a top-level post.
Done. I hesitated as I wasn't in any sense the organiser of this event, just someone who had heard about it, but better me than no-one!
Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.
FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."
This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < small rat brain", it seems highly likely that "large cow brain > small cow brain". The fact that a large cow brain is wildly inefficient compared to a more optimized smaller brain is irrelevant to natural selection, a process that "search[es] the immediate neighborhood of its present point in the solution space, over and over and over." It's not as if cow evolution is an intelligent being that can go take a peek at rat evolution and copy its processes.
Still, why don't we see such apparent resource-wasting in other organs? My guess is that the brain is special, in that
1) As with other organs, it seems plausible that the easiest/fastest "immediate neighbor" adaptation to selective pressure on a large animal to acquire more intelligence is simply to grow a larger brain.
2) But in contrast with other organs, if a larger brain is very expensive (hard for the rat to fit into tight places, scampers slower, requires much more food), there are other ways to dramatically improve brain performance - albeit ones that natural selection may be slower to hit upon. Why slower? Presumably because they are more complex, less suited to an "immediate neighbor" search, more suited to an intelligent search or re-design. (The evolution process would be even slower in large animals with longer life cycles.)
I bolded "dramatically" because the possibility of substantial intelligence gains by code optimization alone (without adding parallel processors, for instance) also seems to be a key factor in the AI "FOOM" argument. Maybe that's a clue.
Be careful about making assumptions about the intelligence of cows. I used to think sheep were stupid, then I read that sheep can tell humans apart by sight (which is more than I can do for them!), and I realized on reflection I never had any actual reason to believe sheep were stupid, it was just an idea I'd picked up and not had any reason to examine.
Also, be careful about extrapolating from the intelligence of domestic cows (which have lived for the last few thousand years with little evolutionary pressure to get the most out of their brain tissue) to the intelligence of their wild relatives.
I'm not sure if it's useful to speak of a domesticated animal's raw "intelligence" by citing how they interact with humans.
"Little evolutionary pressure" means "little NORMAL evolutionary pressure" for animals protected by humans. That is, surviving and propagating is less about withstanding normal natural situations, and more about successfully interacting with humans.
So, sheep/cows/dogs/etc. might have pools of genius in the area of "find a human that will feed you," and may be really dumb in almost other areas.
At the risk of repeating the same mistake as my previous comment, I'll do armchair genetics this time:
Perhaps genes controlling the size of various mammalian organs and body regions tend to grow or shrink uniformly, and only become disproportionate when there is a stronger evolutionary pressure. When there is a mutation leading to more growth, all the organs tend to grow more.
(I now see this answered in the first few comments on the link eliezer posted.)
Purely armchair neurology: to answer the question of why cow brains would need to be bigger than rat brains, I asked what would go wrong if we put a rat brain into a cow. (Ignoring organ rejection and cheese-crazed, wall-eating cows.)
We would need to connect the rat brain to the cow body, but there would not be a one-to-one correspondence of connections. I suspect that a cow has many more nerve endings throughout its body. At least some of the brain/body correlation must be related to servicing the body's nerves (both sensory and motor).
The cow needs more receptors, and more activators. However, this would lead one to expect the relationship of brain size to body size to follow a power law with an exponent of 2/3 (for receptors, which are primarily on the skin) or of 1 (for activators, which might be proportional in number to volume). The actual exponent is 3/4. Scientists are still arguing over why.
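The candidate exponents are easy to make concrete with a toy sketch (the function name and the constant `k` below are mine, purely illustrative, not from any of the cited papers): for a body 8x heavier, surface-area scaling predicts only a ~4x bigger brain, volume scaling predicts 8x, and the observed 3/4 law lands in between.

```python
def predicted_brain_mass(body_mass, exponent, k=1.0):
    """Hypothetical allometric relation: brain_mass = k * body_mass**exponent.

    Illustrative only; real allometric fits estimate k and the
    exponent from cross-species data on a log-log plot.
    """
    return k * body_mass ** exponent

# How much bigger a brain does an 8x heavier body "need"
# under each candidate scaling rule?
for label, exp in [("surface/receptors (2/3)", 2 / 3),
                   ("volume/activators (1)", 1.0),
                   ("observed (~3/4)", 0.75)]:
    print(f"{label}: brain grows {predicted_brain_mass(8.0, exp):.2f}x")
```

This is just the dimensional-analysis argument from the comment above restated as arithmetic; the puzzle is why real data sit at 3/4 rather than at either "natural" exponent.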
West and Brown has done some work on this which seemed pretty solid to me when I read it a few months ago. The basic idea is that biological systems are designed in a fractal way which messes up the dimensional analysis.
From the abstract of http://jeb.biologists.org/cgi/content/abstract/208/9/1575:
A Science article of theirs containing similar ideas: http://www.sciencemag.org/cgi/content/abstract/sci;284/5420/1677
Edit: A recent Nature article showing that there are systematic deviations from the power law, somewhat explainable with a modified version of the model of West and Brown:
http://www.nature.com/nature/journal/v464/n7289/abs/nature08920.html
Can something be mathematical and yet not strict?
Overly-simple mathematical models don't always work in the real world.
After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.
Background: Weekdays I typically sleep for ~6 hours, with two .5 hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.
I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.
The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lay there for an hour or more), but now I'm almost always out cold within 20 minutes.
I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.
Getting up in the morning is not noticeably easier.
No evidence that it's habit forming. I'm currently not taking it on weekends (I found myself needing a nap even after getting 10-11 hours of sleep), and I don't notice any additional difficulty going to bed beyond what I would normally have.
I seemed to have more intense dreams the first several days taking it, but they seem to have gone back to normal (or I've gotten used to them/don't remember them).
Overall it seems to work (for me at least) exactly as gwern described, and I'd happily recommend it to anyone else who has difficulty sleeping.
I've been trying it as well for ~2 months (with some gaps).
Normally I have trouble falling asleep, but have no problem staying asleep, so the main reason I take melatonin is to help fall asleep.
Currently, I take 2 5mg pills. Taking 1 doesn't have a very noticeable effect on my ability to fall asleep, but 2 seems to do the trick. However, I have to be sure that I give myself 7-8 hours for sleep, otherwise getting up is more difficult and I may be very groggy the next day. This can be problematic because sometimes I just have to stay up slightly later doing homework and because I can't take the melatonin I end up barely getting any sleep at all.
I haven't noticed any habit forming effects, though some slight effects might be welcome if it helped me to remember to take the supplement every night ;)
edit: it's actually two 3mg pills, not 5mg. I googled the brand Walmart carries, since that's where I bought mine, and it said 5mg on the bottle. Now that I'm home, I see that my bottle is actually 3mg.
I also tried it out after reading that LW post. At first it was fantastic at getting me to fall asleep within 30 minutes (I'm a good sleeper, it would only take me 30 minutes because I would be going to sleep not tired in order to wake up earlier) and I would wake up feeling alert.
Now unfortunately I wake up feeling the same and basically have stopped noticing its effects. The only time I take it is when I want to go to sleep and I'm not tired.
Also: During the initial 1-2 week period of effectiveness, I had intense and vivid and stressful dreams (or maybe I simply remembered my normal dreams better).
I took it for at least 8 weeks, primarily on weekdays. I found after a while that I was waking up at 4am, sometimes unable to get back to sleep. I had some night sweats too. May not be a normal response, but I found that if I take it in moderation it does not have these effects.
I wonder if you need to get back to sleep after waking up at 4 AM.
The easily available product for me is a blend of 3mg melatonin/25mg theanine. 25mg is a heavy tea-drinker's dose, and I see no reason to consume theanine at all (even dividing the pills in half), so I haven't bought any.
Does anyone have some evidence recommending for/against taking theanine? In my view, the health benefits of tea drinking are negligible, and theanine is just one of many compounds in tea.