Open Thread: April 2010

4 Post author: Unnamed 01 April 2010 03:21PM

An Open Thread: a place for things foolishly April, and other assorted discussions.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads.  Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.

Comments (524)

Comment author: Matt_Simpson 26 April 2010 02:05:46AM 0 points [-]

A couple of physics questions, if anyone will indulge me:

Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality? I was browsing A Brief History of Time at a bookstore, and the chapter on the Heisenberg uncertainty principle seem to suggest the latter - what I read of it, anyway.

If this is just a dumb question for some reason, feel free to let me know - I've only taken two classes in physics, and we never escaped the Newtonian world.

On a related note, I'm looking for a good physics book that will take me through quantum mechanics. I don't want a textbook because I don't really have the time to spend learning all of the details, but I want something with some equations in it. Any suggestions?

Comment author: Mitchell_Porter 26 April 2010 10:39:02AM 3 points [-]

Is quantum physics actually an improvement in the theory of how reality works?

It explains everything microscopic. For example, the stability of atoms. Why doesn't an electron just spiral into the nucleus and stay there? The uncertainty principle means it can't be both localized at a point and have a fixed momentum of zero. If the position wavefunction is a big spike concentrated at a point, then the momentum wavefunction, which is the Fourier transform of the position wavefunction, will have a nonzero probability over a considerable range of momenta, so the position wavefunction will start leaking out of the nucleus in the next moment. The lowest energy stable state for the electron is one which is centered on the nucleus, but has a small spread in position space and a small spread in momentum "space".
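This position/momentum trade-off is easy to see numerically. A minimal sketch (my own illustration, not part of the original comment; hbar = 1, using numpy's FFT): the narrower a Gaussian wavepacket is in position space, the broader its Fourier transform is in momentum space, with the product of the two spreads pinned at the Heisenberg bound of 1/2 for Gaussians.

```python
import numpy as np

# Sketch: the momentum wavefunction is the Fourier transform of the
# position wavefunction, so squeezing one spreads the other.
N = 4096
x = np.linspace(-100.0, 100.0, N)
dx = x[1] - x[0]
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi

def spreads(sigma_x):
    """Std. dev. in x and p for a Gaussian wavepacket of width sigma_x."""
    psi = np.exp(-x**2 / (4 * sigma_x**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
    phi = np.fft.fftshift(np.fft.fft(psi))        # momentum-space amplitude
    prob_x = np.abs(psi)**2 * dx
    prob_p = np.abs(phi)**2
    prob_p /= prob_p.sum()
    return np.sqrt(prob_x @ x**2), np.sqrt(prob_p @ p**2)

for s in (0.5, 1.0, 4.0):
    sx, sp = spreads(s)
    print(f"sigma_x = {s}: spread product dx*dp = {sx * sp:.3f}")  # ~0.5
```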

However, every quantum theory ever used has a classical conceptual beginning. You posit the existence of fields or particles interacting in some classical way, and then you "quantize" this. For example, the interaction between electron and nucleus is just electromagnetism, as in Faraday, Maxwell, and Einstein. But you describe the electron (and the nucleus too, if necessary) by a probabilistic wavefunction rather than a single point in space, and you also do the same for the electromagnetic field. Curiously, when you do this for the field, you get particles as emergent phenomena. A "photon" is actually something like a bookkeeping device for the probabilistic movement of energy within the quantized electromagnetic field. You can also get electrons and nucleons (and their antiparticles) from fields in this way, so everywhere in elementary particle physics, you have this "field/particle duality". For every type of elementary particle, there is a fundamental field, and vice versa. The basic equations that get quantized are field equations, but the result of quantization gives you particle behavior.

Everyone wants to know how to think about the uncertainty in quantum physics. Is it secretly deterministic and we just need a better theory, or do things really happen without a cause; does the electron always have a definite position even when we can't see it, or is it somehow not anywhere in particular; and so on. These conceptual problems exist because we have no derivation of quantum wavefunctions from anything more fundamental. This is unlike, say, the distributions in ordinary probability theory. You can describe the output of a quincunx using the binomial distribution, but you also have a "microscopic model" of where that distribution comes from (balls bouncing left and right as they fall down). We don't have any such model for quantum probabilities, and it would be difficult to produce (see: "Bell's theorem"). Sum over histories looks like such a model, but the problem is that histories can cancel ("interfere destructively"). It is as if, in the quincunx device, there were slots at the bottom where balls never fell, and you explained this by saying that the two ways to get there cancelled each other out - which is how sum-over-histories explains the double-slit experiment: no photons arrive in the dark regions because the "probability amplitude" for getting there via one slit cancels the amplitude for getting there from the other slit.
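The quincunx contrast can be made concrete in a few lines of code (my own sketch, using numpy): classically, the probabilities of the two routes into a slot simply add, which is why balls always reach every slot; under the quantum rule it is the *amplitudes* over histories that add, and amplitudes of opposite sign cancel.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# Classical "microscopic model": balls bounce left/right at each of
# n_pins pins, and the slot counts reproduce the binomial distribution.
n_pins, n_balls = 10, 100_000
final_slot = rng.integers(0, 2, size=(n_balls, n_pins)).sum(axis=1)
empirical = np.bincount(final_slot, minlength=n_pins + 1) / n_balls
binomial = np.array([comb(n_pins, k) for k in range(n_pins + 1)]) / 2**n_pins
print(np.abs(empirical - binomial).max())  # close to zero

# Quantum rule: sum *amplitudes* over histories, then square. Two paths
# of equal magnitude and opposite sign reaching the same slot:
amp1, amp2 = 0.5, -0.5
quantum_prob = abs(amp1 + amp2) ** 2           # histories cancel: 0.0
classical_prob = abs(amp1)**2 + abs(amp2)**2   # probabilities just add: 0.5
print(quantum_prob, classical_prob)
```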

As a practical matter, most particle physicists think of reality in quasi-classical terms - in terms of fields or particles, whichever seems appropriate, but then blurred out by the uncertainty principle. Sum over histories is an extension of the uncertainty principle to movement and interaction, so it's a whole process in time which is uncertain, rather than just a position.

The actual nature of the uncertainty is a philosophical or even ideological matter. The traditional view effectively treats reality as classical but blurry. There is a deterministic alternative theory (Bohmian mechanics) but it is obscure and rather contrived. The popular view on this site is "the many-worlds interpretation" - all the positions, all the histories are equally real, but they live in parallel universes. I believe this view is, like Bohmian mechanics, a misguided philosophical whimsy rather than the future of physics. Like Bohmian mechanics, it can be given a mathematical and not just a verbal form, but it's an artificial addition to the real physics. It's not contributing to progress in physics. Its biggest claim to practical significance is that it helped to inspire quantum computation; but one is not obliged to think that a quantum computer is actually in all states at once, rather than just possibly in one of them.

So, I hold to the traditional view of the meaning of quantum theory - that it's an introduction of a little uncertainty into a basically classical world. It doesn't make sense as an ultimate description of things; but I certainly don't believe the ideas, like Bohm (nonlocal determinism) or Everett (many worlds), which try to make a finished objective theory by just adding an extra mathematical and metaphysical facade. The extra details they posit have a brittle artificiality about them. They do link up with genuine aspects of the quantum mathematical formalism, and so they may indirectly contribute to progress just a little, but I think the future lies more with the traditional view.

Comment author: NancyLebovitz 26 April 2010 12:07:01PM 1 point [-]

However, every quantum theory ever used has a classical conceptual beginning.

I don't know if I'm the only person who thinks this is funny, but every theory in physics ultimately rests on naive trust in qualia, whether that's looking at the readout from an instrument or reading the text of an article.

Comment author: Jack 26 April 2010 12:54:08PM *  0 points [-]

I just take all scientific theories to ultimately be theories about phenomenal experience. No naive trust required.

Comment author: RobinZ 26 April 2010 12:45:05PM 0 points [-]

What do you mean?

Comment author: NancyLebovitz 26 April 2010 01:24:14PM 0 points [-]

The conclusion may be that matter is almost entirely empty space, but in interacting with your sources of information about physics, you still rely on the ancient habit of assuming that what seems to be solid is solid.

Comment author: RobinZ 26 April 2010 01:37:22PM 0 points [-]

I think you may misunderstand what the physics actually says. Compared to the material of neutron stars, yes, terrestrial matter is almost entirely empty space ... but it still resists changes to shape and volume. And you don't need to invoke ancient habits anywhere - those conclusions fall right out of the physics without modification.

Comment author: NancyLebovitz 26 April 2010 02:44:19PM *  1 point [-]

I'm beginning to think that I've been over-influenced by "goshwow" popular physics, which tries to present physics in the most surprising way possible. It's different if I think of that "empty space" near subatomic particles as puffed up by energy fields.

Comment author: Nick_Tarleton 26 April 2010 02:35:53AM 2 points [-]

Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality?

The Quantum Physics Sequence

Comment author: Matt_Simpson 26 April 2010 03:36:40AM *  0 points [-]

thanks, but I was hoping for a quick answer. Working through that sequence is on my "Definitely do sometime when I have nothing too important to do" list.

Comment author: rhollerith_dot_com 26 April 2010 03:48:01AM *  1 point [-]

OK, a quick answer: classical physics cannot be true of the reality we find ourselves in. Specifically, classical physics is contradicted by experimental results such as the photoelectric effect and the double-slit experiment. The parts of reality that require you to know quantum physics affect such important things as chemistry, semiconductors and whether our reality can contain such a thing as a "solid object". The only reason we teach classical physics is that it is easier than quantum physics. If everyone could learn quantum physics, there would be no need to teach classical physics anymore.

Comment author: Matt_Simpson 26 April 2010 03:53:59AM 1 point [-]

First of all, thanks.

The only reason we teach classical physics is that it is easier than quantum physics. If everyone could learn quantum physics, there would be no need to teach classical physics anymore.

Really? Isn't classical physics used in some contexts because the difference between the classical model and reality isn't enough to justify extra complications? I'm thinking specifically of engineers.

Comment author: rhollerith_dot_com 26 April 2010 04:06:19AM 1 point [-]

Isn't classical physics used in some contexts because the difference between the classical model and reality isn't enough to justify extra complications?

True. Revised sentence: the only reasons for using classical physics are that it is easier to learn, easier to calculate with and it helps you understand people who know only classical physics.

Comment author: RobinZ 26 April 2010 02:18:52AM 0 points [-]

On the first point: I try never to categorize questions as intelligent or dumb, but is quantum mechanics an improvement? Unquestionably. To give only the most obvious example, lasers work by quantum excitation.

I, too, would be interested in learning quantum mechanics from a good textbook.

Comment author: cupholder 26 April 2010 05:47:14AM 0 points [-]

I understand that Claude Cohen-Tannoudji et al.'s two-volume Quantum Mechanics is supposed to be exceptional, albeit expensive, time consuming to work through fully, and targeted at post-graduates rather than beginners. (Another disclaimer: I have not used the textbook myself.) Cohen-Tannoudji got the 1997 Nobel Prize in Physics for his work with...lasers!

Comment author: wnoise 26 April 2010 09:18:20AM *  0 points [-]

It was my undergraduate textbook. It is certainly thorough, but other than that, I'm not sure I can strongly recommend it. (The typography is painful).

I think starting with Quantum Computation and Quantum Information and hence discrete systems might be a better way to start, and then later expand to systems with continuous degrees of freedom.

Comment author: RobinZ 26 April 2010 10:31:42AM 0 points [-]

I'm confused: "typography"? The font on the Amazon "LOOK INSIDE" seems perfectly legible to me.

Comment author: wnoise 26 April 2010 06:37:37PM 2 points [-]

The typesetting of the equations in particular. There were several things that hampered the readability for me -- like using a period for the dot product, rather than a raised dot. I expect a full stop to mean the equation has ended. Exponents are set too big. Integral signs are set upright, rather than slanted (conversely the "d"s in them are italicized, when they should be viewed as an operator, and hence upright). Large braces for case expansion of definitions are 6 straight lines, rather than smooth curves. The operator version of 1 is an ugly outline. The angle brackets used for bras and kets are ugly (though at least distinct from the less than and greater than signs).

I'm not being entirely fair: these are really nits. On the other hand, these and other things actually made it harder for me to use the book. And it's not an easy book to start with.

Comment author: RobinZ 26 April 2010 06:49:07PM 0 points [-]

Thanks for the elaboration. I'll bear that in mind if I have a chance to pick up a copy.

Comment author: NancyLebovitz 23 April 2010 01:19:08PM 0 points [-]

I'm looking at the question of whether it's certainly the case that getting an FAI is a matter of zeroing in directly on a tiny percentage of AI-space.

It seems to me that an underlying premise is that there's no reason for a GAI to be Friendly, so Friendliness has to be carefully built into its goals. This isn't unreasonable, but there might be non-obvious pulls towards or away from Friendliness, and if they exist, they need to be considered. At the very least, there may be general moral considerations which incline towards Friendliness, and which would be more stable than starting from a definition of humanity and then trying to protect that.

Here's an example of a seemingly open choice where there are non-obvious biases towards particular outcomes-- D&D alignments. You look at the tidy little two-dimensional chart, and you might think you can equally play any alignment which appeals to you.

The truth is that Chaotic and/or Evil and/or Neutral alignments tend to make coordination inside parties more difficult. It's possible to play them successfully, but it takes more skill than making Lawful and/or Good work. Some players find out that playing from the first batch with too much gusto makes gaming less fun. Some GMs put restrictions on the first batch of alignments or how they can be played.

Comment author: RichardKennaway 21 April 2010 07:10:27AM *  0 points [-]

I wonder how alarming people find this? I guess that if something fooms, this will provide the infrastructure for an instant world takeover. OTOH, the "if" remains as large as ever.

RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment.

Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction.

They're shortly having a workshop at a large robotics conference in Alaska.

Comment author: RichardKennaway 20 April 2010 07:20:27PM *  3 points [-]

Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.

These results provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults. This was true for both the ‘general cognitive training’ group (experimental group 2) who practised tests of memory, attention, visuospatial processing and mathematics similar to many of those found in commercial brain trainers, and for a more focused training group (experimental group 1) who practised tests of reasoning, planning and problem solving. Indeed, both groups provided evidence that training-related improvements may not even generalize to other tasks that use similar cognitive functions.

Note that they were specifically looking for transfer effects. The specific tasks practised did themselves show improvements.

Comment author: RobinZ 20 April 2010 07:44:13PM 0 points [-]

Brain training, for those not following the link, refers to playing games involving particular mental skills (e.g. memory). The study ran six weeks.

I don't think the experiment looks definitive - the control group did not appear as thoroughly distinguished from the test groups as I would have liked - but the MRC Cognition and Brain Sciences Unit (who were partners in the experiment) is well-regarded enough that I would call the null result major evidence.

Comment author: Jack 20 April 2010 07:36:25PM 0 points [-]

The fact that they studied adults rather than children may make a difference.

Comment author: NancyLebovitz 19 April 2010 01:11:29PM 5 points [-]

Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.

If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.

Comment author: RobinZ 19 April 2010 02:25:37PM 1 point [-]

The only way I know to track karma changes is having an old tab with my Recent Comments visible and comparing it to the new one. That captures a lot of the change - >90% - but not the old threads.

I would love to know how hard it would be to have a "Recent Karma Changes" feed.

Comment author: NancyLebovitz 19 April 2010 11:20:47AM 0 points [-]

CFS: creative non-fiction about immortality

BOOK PROJECT: Immortality postmark deadline August 6, 2010

For a new book project to be published by Southern Methodist University Press, entitled "Immortality," we're seeking new essays from a variety of perspectives on recent scientific developments and the likelihood, merits and ramifications of biological immortality. We're looking for essays by writers, physicians, scientists, philosophers, clergy--anyone with an imagination, a vision of the future, and a dream (or fear) of living forever.

Essays must be vivid and dramatic; they should combine a strong and compelling narrative with a significant element of research or information, and reach for some universal or deeper meaning in personal experiences. We’re looking for well-written prose, rich with detail and a distinctive voice.

For examples, see Creative Nonfiction #38 (Spring 2010).

Guidelines: Essays must be: unpublished, 5,000 words or less, postmarked by August 6, 2010, and clearly marked “Immortality” on both the essay and the outside of the envelope. Please send manuscript, accompanied by a cover letter with complete contact information (address, phone, and email) and SASE to:

Creative Nonfiction Attn: Immortality 5501 Walnut Street, Suite 202 Pittsburgh, PA 15232

Comment author: alexflint 15 April 2010 02:44:37PM 0 points [-]

How does the notion of time consistency in decision theory deal with the possibility of changes to our brains/source code? For example, suppose I know that my brain is going to be forcibly re-written in 10 minutes, and that I cannot change this fact. Then decisions I make after that modification will differ from those I make now, in the presence of the same information (?).

Comment author: RobinZ 15 April 2010 03:04:48PM 1 point [-]

"Forcibly rewritten" implies your being a different person afterwards. Naively, time consistency would suggest treating them as such.

Comment author: alexflint 18 April 2010 10:07:38PM *  1 point [-]

But if a mind's source code is changed just a little then shouldn't its decisions be changed just a little too (for sufficiently small changes in source code)? If so, then what does time consistency even mean? If not, then how big does a modification have to be to turn a mind into "a different person" and why does such a dichotomy make sense?

Comment author: wnoise 19 April 2010 03:01:46AM 1 point [-]

Not necessarily: if (a < b) changing to if (a > b) is a very small change in source with a potentially very large effect.
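To spell out how large an effect a one-character edit can have, a toy sketch (a hypothetical agent of my own invention, not anyone's actual decision theory):

```python
# Flipping one comparison operator reverses every decision the agent
# makes (except on ties).
def agent_v1(a, b):
    return "cooperate" if a < b else "defect"

def agent_v2(a, b):  # identical source except '<' became '>'
    return "cooperate" if a > b else "defect"

cases = [(1, 2), (3, 1), (5, 9)]
print([agent_v1(*c) for c in cases])  # ['cooperate', 'defect', 'cooperate']
print([agent_v2(*c) for c in cases])  # ['defect', 'cooperate', 'defect']
```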

Comment author: alexflint 19 April 2010 08:30:34AM 1 point [-]

Right, so I suppose what I should've said is that if I want to make some arbitrarily small change to the decisions made by mind X (measured as some appropriate quantity of "change") then there exists some change I could make to X's source code such that no decision would deviate by more than the desired amount from X's original decision.

How to measure "change in decisions" and "change in source code" is all a bit fluffy but the point is just that there is a continuum of source code modifications from those with negligible effect to those with large effect. This makes it hard to believe that all modifications can be classified as either "X is now a different person" or "X is the same person" with no middle ground.

And, if on the contrary middle ground is allowed, then what does time consistency mean in such a case?

Comment author: RobinZ 19 April 2010 01:36:17AM 0 points [-]

Well, it's not much of a problem for me in particular, as I'm fairly generous toward other people as a rule - the main problem is continuity of values and desires. A random stranger is not likely to agree with me on most issues, so I'm not sure I want my resources to become theirs rather than Mom's. If there is likely to be significant continuity of a coherent-extrapolated-volition sort, I'd probably not worry.

Comment author: alexflint 15 April 2010 01:28:06PM *  0 points [-]

If you were going to predict the emergence of AGI by looking at progress towards it over the past 40 years and extrapolate into the future, then what parameter(s) would you measure and extrapolate?

Kurzweil et al measure raw compute power in flops/$, but as has been much discussed on LessWrong there is more to AI than raw compute power. Another popular approach is to chart progress in terms of the animal kingdom, saying things like "X years ago computers were as smart as jellyfish, now they're as smart as a mouse, soon we'll be at human level", but it's hard to say whether a computer is "as smart" as some organism, and even harder to extrapolate that sensibly into the future.

What other approaches?

Dislaimer: I'm not saying this is actually a good way to predict when AGI will emerge!

Comment author: beriukay 14 April 2010 08:51:48AM 4 points [-]

A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.

Comment author: Risto_Saarelma 21 April 2010 12:44:28PM 2 points [-]

Available without the paywall from the author's home page.

Comment author: NancyLebovitz 14 April 2010 08:55:53AM 1 point [-]

It's also an argument in favor of using checklists.

Comment author: alexflint 13 April 2010 11:00:38PM *  1 point [-]

Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively according to the Schrödinger equation. Some questions:

  • Does anyone have a good source for the technical background I will need to implement such a simulation? Specifically, more technical details of the Schrödinger equation (the Wikipedia article is unhelpful)

  • I imagine this will become intractable quite quickly as I try to simulate more complex systems with more particles. How quickly, though? Could I simulate, e.g., the interaction of two H_2 ions in a reasonable time (say, no more than a few hours)?

  • Surely others have tried this. Any links/references would be much appreciated.
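For what it's worth, here is roughly what the single-particle, 1-D version of such a simulation looks like (a sketch of my own using the split-step Fourier method, hbar = m = 1). The multi-particle case lives in configuration space, whose grid size grows exponentially with the number of particles - M points per axis costs roughly M^(3N) amplitudes for N particles - which is exactly why it becomes intractable so fast.

```python
import numpy as np

# Split-step Fourier method: alternate half-steps of the potential term
# (diagonal in position space) with full steps of the kinetic term
# (diagonal in momentum space).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                               # harmonic well, as an example
psi = np.exp(-(x - 3.0)**2).astype(complex)  # displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, steps = 0.005, 2000
half_V = np.exp(-0.5j * V * dt)
kinetic = np.exp(-0.5j * k**2 * dt)

for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_V * psi

norm = np.sum(np.abs(psi)**2) * dx
print(f"norm after {steps} steps: {norm:.6f}")  # stays ~1: evolution is unitary
```

Even a pair of hydrogen molecules treated fully quantum mechanically has a configuration space of dozens of dimensions, so real quantum chemistry codes abandon grids for basis-set expansions instead.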

Comment author: RichardKennaway 07 April 2010 07:39:04PM *  6 points [-]

A couple of articles on the benefits of believing in free will:

Vohs and Schooler, "The Value of Believing in Free Will"

Baumeister et al., "Prosocial Benefits of Feeling Free"

The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.

References from a Sci. Am. article.

[1] Cough.

ETA: This is also relevant.

Comment author: Jack 07 April 2010 07:46:51PM *  3 points [-]

Cool. Since a handful of studies suggest a narrow majority believe moral responsibility and determinism to be incompatible this shouldn't actually be that surprising. I want to know how people act after being exposed to statements in favor of compatibilism.

Comment author: NancyLebovitz 07 April 2010 01:17:17PM 0 points [-]

In spite of the rather aggressive signaling here in favor of atheism, I'm still an agnostic on the grounds that it isn't likely that we know what the universe is ultimately made of.

I'm even willing to bet that there's something at least as weird as quantum physics waiting to be discovered.

Discussion here has led me to think that whatever the universe is made of, it isn't all that likely to lead to a conclusion there's a God as commonly conceived, though if we're living in a simulation, whoever is running it may well have something like God-like omnipotence and omnipresence. "May well" because the simulation-runner may be subject to legal, social, economic, or [unimaginable] constraints.

While I'm on the subject, is there any reason to think Omega is possible? Or is Omega simply a handy tool for thinking about philosophical problems?

I haven't seen "I don't know and you don't either" agnosticism addressed here.

Comment author: Jack 07 April 2010 05:29:36PM 4 points [-]

it isn't all that likely to lead to a conclusion there's a God as commonly conceived

The Bayesian translation of this is "I'm an atheist".

While I'm on the subject, is there any reason to think Omega is possible? Or is Omega simply a handy tool for thinking about philosophical problems?

Interesting. I'm not sure I know enough about Omega to say. But for one thing: I think it is probably impossible for Omega to predict its own future mental states (there would be an infinite recursion). This will introduce uncertainty into its model of the universe.

Comment author: Matt_Simpson 07 April 2010 03:43:35PM *  3 points [-]

The justification for atheism over agnosticism is essentially Occam's Razor. As far as we know, there are no exceptions to physics as we understand it. So God/Gods explain nothing that isn't already explained by physics. So P(physics is true) >= P(physics is true AND God/Gods exist).
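The inequality is just the conjunction rule, and holds for any probability distribution whatsoever; a throwaway numerical check (my own, with arbitrary random weights over four possible worlds):

```python
from itertools import product
from random import random, seed

# Any joint distribution over possible worlds obeys P(A) >= P(A and B).
# Arbitrary random weights over the four combinations of
# (physics is true, gods exist):
seed(0)
worlds = list(product([True, False], repeat=2))
weights = [random() for _ in worlds]
P = {w: wt / sum(weights) for w, wt in zip(worlds, weights)}

p_physics = sum(pr for (phys, _), pr in P.items() if phys)
p_both = sum(pr for (phys, gods), pr in P.items() if phys and gods)
print(p_physics >= p_both)  # True for every choice of weights
```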

Comment author: RichardKennaway 07 April 2010 03:24:10PM 3 points [-]

I've always taken Omega to be just a handy tool for thinking about philosophical problems. "Omega appears and tells you X" is short for "For the purposes of this conundrum, imagine that X is true, that you have undeniably conclusive evidence for X, and that the nature of this evidence and why it convinces you is irrelevant to the problem."

In a case where X is impossible ("Omega appears and tells you that 2+2=3") then the conundrum is broken.

Comment author: Peter_de_Blanc 07 April 2010 02:05:20AM 2 points [-]

I'd like to plug a facebook group:

Once we reach 4,096 members, everyone will donate $256 to SingInst.org.

Folks may also be interested in David Robert's group:

1 million people, $100 million to defeat aging.

Comment author: RobinZ 07 April 2010 01:08:49AM 1 point [-]

Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, Youtube playlist. Part One. 8 parts, ~75 minutes.

Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.

Comment author: Strange7 07 April 2010 01:18:30AM 4 points [-]

People have been worrying about that one since Malthus. Turns out, production capacity can increase exponentially too, and when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.

Comment author: wnoise 07 April 2010 02:37:32AM *  0 points [-]

Turns out, production capacity can increase exponentially too,

Yes, for a while. The simplest factor driving this is exponentially more laborers. Then there's better technology of all sorts. Still, after a certain point we start hitting hard limits.

when any given child has a high enough chance of survival, the strategy shifts from spamming lots of low-investment kids (for farm labor) to having one or two children and lavishing resources on them, which is why birthrates in the developed world are dropping below replacement.

(a) Is this guaranteed to happen, a human universal or is it a contingent feature of our culture?
(b) Even if it is guaranteed to happen, will the race be won by increasing population hitting hard limits, or populations lifting themselves out of poverty?

Comment author: gwern 07 April 2010 09:43:43PM *  2 points [-]

(a) Is this guaranteed to happen, a human universal or is it a contingent feature of our culture?

I believe it's a quite general phenomenon - Japan did it, Russia did it, USA did it, all of Europe did it, etc. It looks like a pretty solid rich=slower-growth phenomenon: http://en.wikipedia.org/wiki/File:Fertility_rate_world_map.PNG

And if there were a rich country which continued to grow, threatening neighbors, there's always nukes & war.

Comment author: Mass_Driver 07 April 2010 04:02:22AM 2 points [-]

I think "hard limits" is the wrong way to frame the problem. The only limits that appear truly unbeatable to me right now are the amounts of mass-energy and negentropy in our supergalactic neighborhood, and even those limits may be a function of the map, rather than the territory.

Other "limits" are really just inflection points in our budget curve; if we use too much of resource X, we may have to substitute a somewhat more costly resource Y, but there's no reason to think that this will bring about doom.

For example, in our lifetime, the population of Earth may expand to the point where there is simply insufficient naturally occurring freshwater on Earth to support all humans at a decent standard of living. So, we'll have to substitute desalinized oceanwater, which will be expensive -- but not nearly as expensive as dying of drought.

Likewise, there are only so many naturally occurring oxygen atoms in our solar system, so if we keep breathing oxygen, then at a certain population level we'll have to either expand beyond the Solar System or start producing oxygen through artificial fusion, which may cost more energy than it generates, and thus be expensive. But, you know, it beats choking or fighting wars over a scarce resource.

There are all kinds of serious economic problems that might cripple us over the next few centuries, but Malthusian doom isn't one of them.

Comment author: wnoise 07 April 2010 04:58:27AM 2 points [-]

It's true that many things have substitutes. All these limits are soft in the sense that we can do something else, and the magic of the market will select the most efficient alternative. At some point this may be no kids, rather than desalinization plants, however, cutting off the exponential growth.

(Phosphorus will be a problem before oxygen. Technically, we can make more phosphorus, and I suppose the cost could go down with new techniques other than "run an atom smasher and sort what comes out".)

But there really are hard limits. The volume we can colonize in a given time goes up as (ct)^3. This is really, really, really fast. Nonetheless, the required volume for an exponentially expanding population goes as e^(lambda t), and will eventually get bigger than this. (I handwave away relativistic time dilation -- it doesn't truly change anything.)
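A quick numeric sketch of this point (the constants are invented purely for illustration): exponential demand e^(lambda*t) eventually exceeds the colonizable volume (ct)^3 no matter what constants you pick.

```python
import math

def colonizable_volume(t, c=1.0):
    """Volume reachable by light-speed expansion after time t (arbitrary units)."""
    return (c * t) ** 3

def required_volume(t, lam=0.01, v0=1.0):
    """Volume needed by a population growing exponentially at rate lam."""
    return v0 * math.exp(lam * t)

# Find the crossover: demand exceeds supply from some finite t onward.
t = 10.0
while required_volume(t) <= colonizable_volume(t):
    t *= 1.1
print(f"demand exceeds supply by t = {t:.0f}")
```

Even with a tiny growth rate (1% per time unit here), the loop always terminates: the cubic wins early on, but the exponential overtakes it at some finite time.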

Comment author: Strange7 07 April 2010 06:32:45PM 0 points [-]

Actually, if we figure out how to stabilize traversable wormholes, the colonizable volume goes up faster than (ct)^3. I'm not sure exactly how much faster, but the idea is: you send one mouth of the wormhole rocketing off at relativistic speed, and due to time dilation, the home end of the gate opens up, allowing travel to the destination in less than half the time it would take a lightspeed signal to travel to the destination and back.

Comment author: bogdanb 08 April 2010 08:48:47AM 1 point [-]

Assuming zero space inflation, the “exit” mouth of the wormhole can’t travel faster than c with respect to the entry. So for expansion purposes (where you don’t need (can’t, actually, due to lack of space) to go back), you’re limited to c (radial) expansion. Which is the same as without wormholes.

In other words, the volume covered by wormholes expands as (c×t)³ relative to when you start sending wormholes. The number of people is exponential relative to when you start reproducing. Even if you start sending wormholes a long time before you start reproducing exponentially, you’re still going to fill the wormhole-covered volume.

(The fault in your statement is that you can go in “less” than half the time only for travel within the volume already covered by wormholes. For arbitrarily far distances you still need to wait for the wormhole exit to reach there, which still travels below c.)

Space inflation doesn’t help that much. Given long enough time, the “distance” between the wormhole entry and exit point can grow at more than c (because the space between the two expands; the exit points still travel below c). In other words, far parts of the Universe can fall outside your event horizon, but the wormhole can keep them accessible (for various values of can...). This can allow unbounded growth in the volume of space available for expansion (exponential, if the inflation is exponential), but note that the quantity of matter accessible is still only what was in your (c×t)³ (without inflation) volume of space.

Comment author: Mitchell_Porter 08 April 2010 09:07:37AM 2 points [-]

Strange7 is referring to this essay, especially section 6.

Wormholes sent to the Andromeda galaxy at near light speed arrive in approximately year 2,250,000 of co-moving time, but in year 15 of empire time (setting year zero at the start of expansion).

Comment author: bogdanb 30 May 2010 12:44:03AM 0 points [-]

I still don’t get how you can get more than (c×t)³ as a colonized volume.

With wormholes you could travel within that volume very quickly, which will certainly help you approach c-speed expansion faster, since engine innovations at home can be propagated to the border immediately. And, of course, your volume will be more “useful” because of the lower communication costs (time-wise, and presuming wormholes are not very expensive otherwise). But I don’t see how the volume can expand quicker than (c×t)³, since the border's expansion is still limited by c.

(Disclaimer: I didn’t read everything there, mostly the section you pointed out.)

Comment author: Mass_Driver 07 April 2010 05:28:19AM 1 point [-]

the magic of the market will select the most efficient alternative. At some point this may be no kids

Or, more precisely, fewer kids. I don't insist that we're guaranteed to switch to a lower birth rate as a species, but if we do, that's hardly an outcome to be feared.

Phosphorus will be a problem before oxygen.

Fascinating. That sounds right; do you know where in the Solar System we could try to 'mine' it?

The volume we can colonize in a given time goes up as (ct)^3.

Not until we start getting close to relativistic speeds. I couldn't care less about the time dilation, but for the next few centuries, our maximum cruising speed will increase with each new generation. If we can travel at 0.01 c, our kids will travel at 0.03 c, and so on for a while. Since our cruising velocity V is increasing with t, the effective volume we colonize per generation increases at more than (ct)^3.

We should also expect to sustainably extract more resources per unit volume as time goes on, due to improving technology. Finally, the required resources per person are not constant; they decrease as population increases, because of economies of scale, economies of scope, and progress along engineering learning curves.

All these factors mean that it is far too early to confidently predict that our rate of resource requirements will increase faster than our ability to obtain resources, even given the somewhat unlikely assumption that exponential population growth will continue indefinitely. By the time we really start bumping up against the kind of physical laws that could cause Malthusian doom, we will most likely either (a) have discovered new physical laws, or (b) have changed so much as to be essentially non-human, such that any progress human philosophers make today toward coping with the Malthusian problem will seem strange and inapposite.
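A toy calculation of the generational-speedup point (all constants invented for illustration): if cruising speed triples each 30-year generation but is capped at c, the colonized radius grows super-linearly at first, then settles back to c-limited (i.e. linear) growth once the cap is hit, so the volume still tends toward (ct)^3 in the long run.

```python
C = 1.0    # speed of light, in light-years per year
GEN = 30   # years per generation

def radius_after(generations, v0=0.01, growth=3.0):
    """Colonized radius (light-years) if cruising speed starts at v0*c,
    triples each generation, and is capped at c."""
    radius, v = 0.0, v0 * C
    for _ in range(generations):
        radius += min(v, C) * GEN  # each generation pushes the frontier out
        v *= growth
    return radius

for g in (1, 3, 5, 10):
    print(g, radius_after(g))
```

The frontier speed saturates after five generations here, which is consistent with both halves of the thread: transient faster-than-(ct)^3 growth while technology improves, and an eventual hard c limit.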

Comment author: RobinZ 07 April 2010 02:08:17AM *  1 point [-]

Simple thermodynamics guarantees that any growing consumption of resources is unsustainable on a long enough timescale - even if you dispute the implicit timescale in Dr. Bartlett's talk*, at some point planning will need to account for the fundamental limits. Ignoring the physics is a common error in economics (even professional economics, depressingly).

* Which you appear not to have watched through - for shame!

Comment author: Strange7 07 April 2010 06:24:09PM 3 points [-]

Yes, obviously thermodynamics limits exponential growth. I'm saying that exponential growth won't continue indefinitely, that people (unlike bugs) can, will, and in fact have already begun to voluntarily curtail their reproduction.

Comment author: Jack 07 April 2010 06:32:11PM 2 points [-]

What kind of reproductive memes do you think get selected for?

Comment author: RobinZ 07 April 2010 06:41:19PM 1 point [-]

How strong is the penalty for defection?

Comment author: Jack 07 April 2010 07:43:41PM 2 points [-]

Yeah, this obviously matters a lot. Right now it's low to non-existent outside the People's Republic of China, though I suppose that could change. There are a lot of barriers to effective enforcement of reproductive prohibitions: incredibly difficult-to-solve cooperation issues, organized religions, assorted rights and freedoms people are used to. I suppose a sufficiently strong centralized power could solve the problem, though such a power could be bad for other reasons. My sense is that the prospects for reliable enforcement are low, but obviously a singularity-type superintelligence could change things.

Comment author: bogdanb 08 April 2010 08:30:54AM 2 points [-]

I’m not quite sure that penalties are that low outside China.

There are of course places where the penalties for many babies are low, and there are even states that encourage having babies — but the latter is because birth rates are below replacement, so it falls outside our exponential-growth discussion; I’m not sure about the former, but the obvious cases (very poor countries) are in the Malthusian scenario already, due to high death rates.

But in (relatively) rich economies there are non-obvious implicit limits on reproduction: you’re generally supposed to provide a minimum of care to children; even more, that “minimum” tends to grow with the richness of the economy. I’m not talking only about legal minimums, but social ones: children in rich societies “need” mobile phones and designer clothes, adolescents “need” cars, etc.

So having children tends to become more expensive in richer societies, even absent explicit legal limits like in China, at least in wide swaths of those societies. (This is a personal observation, not a proof. Exceptions exist. YMMV. “Satisfaction guaranteed” is not a guarantee.)

Comment author: Jack 08 April 2010 04:12:05PM 3 points [-]

The legal minimum care requirement is a good point. With the social minimum: I recognize that this meme exists but it doesn't seem like there are very high costs to disobeying it. If I'm part of a religion with an anti-materialist streak and those in my religious community aren't buying their children designer clothes either... I can't think of what kind of penalty would ensue (whereas not bathing or feeding your children has all sorts of costs if an outsider finds out). It seems better to think of this as a meme which competes with "Reproduce a lot" for resources rather than as a penalty for defection.

Your observation is a good one though.

Comment author: bogdanb 30 May 2010 12:54:36AM 0 points [-]

Sure, within a relatively homogeneous and sufficiently “socially isolated”* community the social cost is light.

(*: in the sense that “social minimum” pressures from outside don’t affect it significantly, including by making at least some members “defect to consumerism” and start a consumerist child-pampering positive feedback loop.)

I seem to think that such communities will not become very rich, but I can’t justify it other than with a vague “isolation is bad for growth” idea, so I don’t trust my thought.

Do you have any examples of “rich” societies (by current 1st-world standards) which are socially isolated in the way you describe? (I.e., free from “consumerist” pressure from inside and immune to it from outside.) I can’t think of any.

Comment author: Strange7 07 April 2010 06:40:57PM 0 points [-]

I'm not sure I understand what you mean. This isn't a matter of interpersonal communication; it's just individual married couples more-or-less rationally pursuing the 'pass on your genes' mandate by maximizing the survival chances of one or two children rather than hedging their bets with a larger number of individually riskier children.

Comment author: Jack 07 April 2010 07:33:59PM 0 points [-]

If a gene leads to greater fertility rates with no drop in survival rates, it spreads. Similarly, if a meme leads to greater fertility with no drop in survival rate and is sufficiently resistant to competing memes, it too spreads. Thus, those memes/memetic structures that encourage more reproduction have a selection advantage.

Comment author: Strange7 07 April 2010 08:38:47PM 0 points [-]

In this case, the meme in question leads to a drop in fertility rates, but increases survival rates more than enough to compensate.

Comment author: Jack 07 April 2010 09:05:22PM 1 point [-]

I don't really think your characterization of the global drop in fertility rates is right (farmers with big families survive just fine!), but that isn't really the point. The point is, Mormons aren't dying out, and neither are lots of groups which encourage reproduction among their members. Unless there are a lot of deconversions or enforced prohibitions against over-reproducing, the future will consist of lots of people whose parents believed in having lots of children, and those people will likely feel the same way. They will then have more children who will also want to have lots of children. This process is unsustainable.

Comment author: Strange7 07 April 2010 09:23:21PM 1 point [-]

Unless there are a lot of deconversions

I'm expecting a lot of deconversions. Mormons already go to a lot of trouble to retain members and punish former members, which suggests there's a corresponding amount of pressure to leave. Catholics did the whole breed-like-crazy thing, and that worked out well for a while, but Catholicism doesn't rule the world.

I think the relative zeal of recent converts as compared to lifelong believers has something to do with how siblings raised apart are more likely to have sexual feelings for each other, but that's probably a topic for another time.

Comment author: Matt_Simpson 06 April 2010 09:53:20PM 0 points [-]

I have a couple of questions about UDT if anyone's willing to bite. Thanks in advance.

Comment author: NancyLebovitz 06 April 2010 02:48:05PM *  3 points [-]

Rats have some ability to distinguish between correlation and causation

To get back to the rat study—it's very simple actually. What I did is: I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food. So they might have used Pavlovian conditioning; just as I said, Pavlovian conditioning might be the substrate by which animals learn to piece together spatial maps and maybe causal maps as well. If they treat the light as a common cause of the tone and of food, they see [hear] the tone and they predict food might happen. Just like if you see the barometer drop then you think, "Oh, the storm might happen." But, if you see someone tamper with the barometer and you know that the barometer and the storm aren't causally related, then you won't think that the weather is going to change. So, the question is: if the rat intervenes to make the tone happen, will it now no longer think the food will occur?

So there were a bunch of rats; they all had the same training—light as an antecedent to tone and food. Then, at test, some of the rats got tone and they tended to go look in the food section. So they were expecting food based on the tone—which humans would say is a diagnostic reasoning process. “Tone is there because light causes tone and light also causes food. Oh, there must be food.” Or, it's just second order Pavlovian conditioning. The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately come up. So now the question is, do the rats attribute that tone to being caused by themselves. That is, did they intervene to make that variable change? If they thought that they were the cause of the tone, that means it couldn't have been the light, therefore the other effects of the light, food, would not have been expected. In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much. That is the essence of the finding and how it fits in with this idea of causal models and how we go about testing our world.

the abstract

Comment author: Amanojack 06 April 2010 04:12:15PM 0 points [-]

I had the rats learn that a light, a little flashing light in a Pavlovian box, is followed sometimes by a tone and sometimes by food.

The information here is a little scant. If, in the cases where there was a tone instead of food, the tone always followed very soon after the light, it'd be most logical for rats to wait for the tone after seeing the light, and only go look for food after confirming that no tone was forthcoming. (This would save them effort assuming the food section was significantly far away. No tone = food. Tone = no food. Or did the scientists sometimes have the light be followed by both tone and food? I assume no, because that would introduce a first-order Pavlovian association between tone and food, which would mess up the next part of the experiment.)

Then, at test, some of the rats got tone and they tended to go look in the food section.

If, as I suggested above, the rats had previously been trained to wait for the lack of a tone before checking in the food section, this result would more strongly rule out a second-order Pavlovian response.

The critical test was with another group of rats that got the same training. We gave them a lever that they had never had before. They were in this box, and they have a lever that is rigged so that if they press the lever the tone will immediately come up. ... In that case, the intervening rats, after hearing the tone of their own intervention, should not expect food. Indeed, they didn't go to food nearly as much.

On the one hand, this is really surprising. On the other hand, I don't see how rats could survive without some cause-and-effect and logical reasoning. I'm really eager to see more studies on logical reasoning in animals. Any anecdotal evidence with house pets anyone?

Comment author: Wei_Dai 06 April 2010 11:16:31AM 6 points [-]

I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL but not have it show up in recent posts, so I don't have to post a draft elsewhere to get feedback before I officially publish it.

Comment author: JGWeissman 06 April 2010 04:51:44PM 2 points [-]

You can do better than the frequentist approach without using the "magic" universal prior. You can just use a prior that represents initial ignorance of the frequency at which the machine produces head-biased and tail-biased coins (dP(f) = 1·df, i.e. a uniform density on f). If you want to look for repeating patterns, you can assign probability (1/2)(1/2^n) to the theory that the machine produces each type of coin at a frequency depending on the last n coins it produced. This requires treating a probability as a strength of belief, and not the frequency of anything, which is what (as I understand it) frequentists are not willing to do.

Note that the universal prior, if you can pull it off, is still better than what I described. The repeating-pattern-seeking prior will not notice, for example, if the machine makes head-biased coins on prime-numbered trials but tail-biased coins on composite-numbered trials. This is because it implicitly assigns probability 0 to that type of machine, which takes infinite evidence to update.
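A minimal sketch of the simplest prior described above (function name mine): with a uniform prior dP(f) = df on the unknown frequency, the Bayesian update has a closed form, Laplace's rule of succession.

```python
def predictive_head_biased(k, n):
    """P(next coin is head-biased | k of n observed coins were head-biased),
    under a uniform (Beta(1,1)) prior on the machine's frequency f.
    The posterior after the data is Beta(k+1, n-k+1), whose mean is
    (k+1)/(n+2) -- Laplace's rule of succession."""
    return (k + 1) / (n + 2)

print(predictive_head_biased(0, 0))   # no data yet: 1/2
print(predictive_head_biased(7, 10))  # mostly head-biased so far: 8/12
```

Note how this treats probability as a strength of belief about f, exactly the move the comment says frequentists are unwilling to make; the pattern-seeking prior over the last n coins would need a more elaborate mixture, not a single closed form.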

Comment author: JGWeissman 06 April 2010 04:00:08PM *  1 point [-]

BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL but not have it show up in recent posts, so I don't have to post a draft elsewhere to get feedback before I officially publish it.

I second this feature request.

ETA: I did not notice earlier Steve Rayhawk made the same comment.

Comment author: Steve_Rayhawk 06 April 2010 11:53:01AM 1 point [-]

I wish there were a "public drafts" feature on LessWrong

Seconded. See also JenniferRM on editorial-level versus object-level comments.

Comment author: Morendil 06 April 2010 11:38:33AM *  0 points [-]

Agreed. I'll be investigating what it would take to implement that.

(Edit: interesting; draft folders are apparently private sub-reddits created when a user registers and admin'ed by that user.)

Comment author: Vladimir_Nesov 06 April 2010 11:32:08AM *  4 points [-]

Why does the universe that we live in look like a giant computer? What about uncomputable physics?

Consider "syntactic preference" as an order on an agent's strategies (externally observable possible behaviors, but in the mathematical sense, independently of what we can actually arrange to observe), where the agent is software running on an ordinary computer. This is "ontological boxing", a way of abstracting away any unknown physics. Then this syntactic order can be given an interpretation, as in logic/model theory: for example, by placing the "agent program" in an environment of all possible "world programs", and restating the order on the agent's possible strategies in terms of possible outcomes for the world programs (as an order on sets of outcomes across all world programs), depending on the agent.

This way, we first factor out the real world from the problem, leaving only the syntactic backbone of preference, and then reintroduce a controllable version of the world, in a form of any convenient mathematical structure, an interpretation of syntactic preference. The question of whether the model world is "actually the real world", and whether it reflects all possible features of the real world, is sidestepped.
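As a toy illustration only (every structure here is invented for concreteness; this is not Nesov's formalism): the "interpretation" step can be pictured as ranking a strategy by the profile of outcomes it induces across a set of possible world programs.

```python
# An agent strategy here is just a function from observation to action.
strategies = {
    "always_A": lambda obs: "A",
    "copy_obs": lambda obs: obs,
}

# "World programs": each takes a strategy and returns a numeric outcome.
worlds = [
    lambda s: 1 if s("A") == "A" else 0,
    lambda s: 2 if s("B") == "B" else 0,
]

def outcome_profile(strategy):
    """The outcomes this strategy produces in every possible world program.
    An order on these profiles is one way to 'interpret' a syntactic
    order on strategies."""
    return tuple(w(strategy) for w in worlds)

for name, s in strategies.items():
    print(name, outcome_profile(s))
```

The point of the construction is visible even at this scale: nothing refers to a "real world", only to strategies (I/O behavior) and to whatever mathematical stand-ins for worlds we choose.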

Comment author: Wei_Dai 06 April 2010 12:59:35PM *  2 points [-]

Thanks (and upvoted) for this explanation of your current approach. I think it's definitely worth exploring, but I currently see at least two major problems.

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality. For example, I currently think reality is just "all possible mathematical structures", but I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly. What would happen if you tried to use your idea to extract my preferences before I resolve that question?

The second is that I don't see how you plan to differentiate, within "syntactic preference", between true preferences and those caused by computational limitations and/or hardware/software errors. Internally, the agent is computing the optimal strategy (as best it can) from a preference that's stated in terms of "the real world" and maybe also in terms of subjective anticipation. If we could somehow translate those preferences directly into preferences over mathematical structures, we would be able to bypass those computational limitations and errors without having to single them out.

Comment author: Vladimir_Nesov 06 April 2010 03:35:50PM *  4 points [-]

The first is that my preferences seem to have a logical dependency on the ultimate nature of reality.

An important principle of FAI design to remember here is "be lazy!". For any problem that people would want to solve, where possible, FAI design should redirect that problem to FAI, instead of actually solving it in order to construct a FAI.

Here, you, as a human, may be interested in "nature of reality", but this is not a problem to be solved before the construction of FAI. Instead, the FAI should pursue this problem in the same sense you would.

Syntactic preference is meant to capture this sameness of pursuits, without understanding of what these pursuits are about. Instead of wanting to do the same thing with the world as you would want to, the FAI having the same syntactic preference wants to perform the same actions as you would want to. The difference is that syntactic preference refers to actions (I/O), not to the world. But the outcome is exactly the same, if you manage to represent your preference in terms of your I/O.

I don't know what my preferences are until I resolve what "all possible mathematical structures" means exactly

You may still know the process of discovery that you want to follow while doing what you call getting to know your own preference. That process of discovery gives definition of preference. We don't need to actually compute preference in some predefined format, to solve the conceptual problem of defining preference. We only need to define a process that determines preference.

The second is that I don't see how you plan to differentiate, within "syntactic preference", between true preferences and those caused by computational limitations and/or hardware/software errors.

This issue is actually the last conceptual milestone I've reached on this problem, just a few days ago. The trouble is in how the agent would reason about the possibility of corruption of its own hardware. The answer is that human preference is to a large extent concerned with consequentialist reasoning about the world, so human preference can be interpreted as modeling the environment, including the agent's hardware. This is an informal statement, referring to the real world, but the behavior supporting this statement is also determined by formal syntactic preference that doesn't refer to the real world. Thus, just mathematically implementing human preference is enough to cause the agent to worry about how its hardware is doing (it isn't in any sense formally defined as its own hardware, but what happens in the agent's formal mind can be interpreted as recognizing the hardware's instrumental utility). In particular, this solves the issues of possible morally harmful impact of the FAI's computation (e.g. simulating tortured people and then deleting them from memory, etc.), and of upgrading the FAI beyond the initial hardware (so that it can safely discard the old hardware).

Comment author: Wei_Dai 06 April 2010 10:05:35PM *  2 points [-]

Once we implement this kind of FAI, how will we be better off than we are today? It seems like the FAI will have just built exact simulations of us inside itself (who, in order to work out their preferences, will build another FAI, and so on). I'm probably missing something important in your ideas, but it currently seems a lot like passing the recursive buck.

ETA: I'll keep trying to figure out what piece of the puzzle I might be missing. In the mean time, feel free to take the option of writing up your ideas systematically as a post instead of continuing this discussion (which doesn't seem to be followed by many people anyway).

Comment author: Vladimir_Nesov 06 April 2010 10:40:41PM *  2 points [-]

FAI doesn't do what you do; it optimizes its strategy according to preference. It's more able than a human to form better strategies according to a given preference, and even failing that it still has to be able to avoid value drift (as a minimum requirement).

Preference is never seen completely; there are always loads of logical uncertainty about it. The point of creating a FAI is in fixing the preference so that it stops drifting, so that the problem being solved is held fixed, even though solving it will take the rest of eternity; and in creating a competitive preference-optimizing agent that ensures the preference fares OK against possible threats, including different-preference agents or value-drifted humanity.

Preference isn't defined by an agent's strategy, so copying a human without some kind of self-reflection I don't understand is pretty pointless. Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.

FAI is not built without exact and complete definition of preference. The uncertainty about preference can only be logical, in what it means/implies. (At least, when we are talking about syntactic preference, where the rest of the world is necessarily screened off.)

Comment author: andreas 07 April 2010 01:22:45AM *  2 points [-]

Since I never described a way of extracting preference from a human (and hence defining it for a FAI), I'm not sure where you see the regress in the process of defining preference.

Reading your previous post in this thread, I felt like I was missing something and I could have asked the question Wei Dai asked ("Once we implement this kind of FAI, how will we be better off than we are today?"). You did not explicitly describe a way of extracting preference from a human, but phrases like "if you manage to represent your preference in terms of your I/O" made it seem like capturing strategy was what you had in mind.

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware). You have not said anything about what kind of static analysis would take you from an agent's s̶t̶r̶a̶t̶e̶g̶y̶ program to an agent's preference.

Comment author: Wei_Dai 22 April 2010 10:58:22AM 2 points [-]

After reading Nesov's latest posts on the subject, I think I better understand what he is talking about now. But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

You [Nesov] have not said anything about what kind of static analysis would take you from an agent's program to an agent's [syntactic] preference.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Comment author: Vladimir_Nesov 22 April 2010 02:04:21PM *  2 points [-]

But I still don't get why Nesov seems confident that this is the right approach, as opposed to a possible one that is worth looking into.

What other approaches do you refer to? This is just the direction my own research has taken. I'm not confident it will lead anywhere, but it's the best road I know about.

Do we have at least an outline of how such an analysis would work? If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

I have some ideas, though too vague to usefully share (I wrote about a related idea on the SIAI decision theory list, replying to Drescher's bounded Newcomb variant, where a dependence on strategy is restored from a constant syntactic expression in terms of source code). For "semantic preference", we have the ontology problem, which is a complete show-stopper. (Though as I wrote before, interpretations of syntactic preference in terms of formal "possible worlds" -- now having nothing to do with the "real world" -- are a useful tool, and it's the topic of the next blog post.)

At this point, syntactic preference (1) solves the ontology problem, (2) gives focus to the investigation of what kind of mathematical structure could represent preference (a strategy is a well-understood mathematical structure, and a syntactic preference is something that allows computing a strategy, with better strategies resulting from more computation), and (3) gives a more technical formulation of the preference-extraction problem, so that we can think about it more clearly. I don't know of another effort toward clarifying/developing preference theory (that reaches even this meager level of clarity).

If not, why do we think that working out such an analysis would be any easier than, say, trying to state ourselves what our "semantic" preferences are?

Returning to this point, there are two show-stopping problems: first, as I pointed out above, there is the ontology problem: even if humans were able to write out their preference, the ontology problem makes the product of such an effort rather useless; second, we do know that we can't write out our preference manually. Figuring out an algorithmic trick for extracting it from human minds automatically is not out of the question, hence worth pursuing.

P.S. These are important questions, and I welcome this kind of discussion about general sanity of what I'm doing or claiming; I only saw this comment because I'm subscribed to your LW comments.

Comment author: Wei_Dai 25 April 2010 04:37:08PM 0 points [-]

Why do you consider the ontology problem to be a complete show-stopper? It seems to me there are at least two other approaches to it that we can take:

  1. We human beings seem to manage to translate our preferences from one ontology to another when necessary, so try to figure out how we do that, and program it into the FAI.

  2. Work out what the true, correct ontology is, then translate our preferences into that ontology. It seems that we already have a good candidate of this in the form of "all mathematical structures". Formalizing that notion seems really hard, but why should it be impossible?

You claim that syntactic preference solves the ontology problem, but I have even fewer ideas about how to extract the syntactic preference of arbitrary programs. You mention that you do have some vague ideas, so I guess I'll just have to be patient and let you work them out.

second, we do know that we can't write out our preference manually.

How do we know that? It's not clear to me that there is any more evidence for "we can't write out our preferences manually", than for "we can't build an artificial general intelligence manually".

I only saw this comment because I'm subscribed to your LW comments.

I had a hunch that might be the case. :)

Comment author: Vladimir_Nesov 07 April 2010 08:13:37AM *  3 points [-]

I now understand you as talking only about what kind of object preference is (an I/O map) and about how this kind of object can contain certain preferences that we worry might be lost (like considerations of faulty hardware).

Correct. Note that "strategy" is a pretty standard term, while "I/O map" sounds ambiguous, though it emphasizes that everything except the behavior at I/O is disregarded.

You have not said anything about what kind of static analysis would take you from an agent's strategy to an agent's preference.

An agent is more than its strategy: strategy is only external behavior, the normal form of the algorithm implemented in the agent. The same strategy can be implemented by many different programs. I strongly suspect that it takes more than a strategy to define preference, that introspective properties are important (how the behavior is computed, as opposed to just what the resulting behavior is). It is sufficient for preference, once it is defined, to talk about strategies and disregard how they could be computed; but to define (extract) a preference, a single strategy may be insufficient; it may be necessary to look at how the reference agent (e.g. a human) works on the inside. Besides, the agent is never given as its strategy; it is given as source code that normalizes to that strategy, and computing the strategy may be tough (and pointless).

Comment author: NancyLebovitz 06 April 2010 10:34:51AM 0 points [-]

Mass Driver's recent comment about developing the US Constitution being like the invention of a Friendly AI opens up the possibility of a mostly Friendly AI-- an AI which isn't perfectly Friendly, but which has the ability to self-correct.

Is it more possible to have an AI which never smiley-faces or paperclips or falls into errors we can't think of than to have an AI which starts to screw up, but can realize it and stop?

Comment author: NancyLebovitz 06 April 2010 12:00:02PM 0 points [-]

It's not feasible to attempt to create a government which is both perfect and self-correcting. I'm not sure if the same is true of FAI.

Comment author: Mass_Driver 06 April 2010 04:17:33AM 0 points [-]

Is anybody interested in finding a study buddy for the material on Less Wrong? I think a lot of the material is really deep -- sometimes hard to internalize and apply to your own life even if you're articulate and intelligent -- and that we would benefit from having a partner to go over the material with, ask tough questions, build trust, and basically learn the art of rationality together. On the off chance that you find Jewish analogies interesting or helpful, I'm basically looking for a chevruta partner, although the sacredish text in question would be the Less Wrong sequences instead of the Bible.

Comment author: NancyLebovitz 05 April 2010 10:57:29PM 1 point [-]

An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it's made very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!

People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove them into the cult get resolved, and/or life changes show that the cult isn't working for them.

Comment author: Amanojack 05 April 2010 10:50:20PM 1 point [-]

I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).

A teacher announces that there will be a surprise test next week. A student objects that this is impossible: "The class meets on Monday, Wednesday, and Friday. If the test is given on Friday, then on Thursday I would be able to predict that the test is on Friday. It would not be a surprise. Can the test be given on Wednesday? No, because on Tuesday I would know that the test will not be on Friday (thanks to the previous reasoning) and know that the test was not on Monday (thanks to memory). Therefore, on Tuesday I could foresee that the test will be on Wednesday. A test on Wednesday would not be a surprise. Could the surprise test be on Monday? On Sunday, the previous two eliminations would be available to me. Consequently, I would know that the test must be on Monday. So a Monday test would also fail to be a surprise. Therefore, it is impossible for there to be a surprise test."

Can the teacher fulfill his announcement?

Extensive treatment and relation to other epistemic paradoxes here.

Comment author: thomblake 08 April 2010 04:23:14PM 3 points [-]

Let's not forget that the clever student will be indeed very surprised by a test on any day, since he thinks he's proven that he won't be surprised by tests on those days. It seems he made an error in formalizing 'surprise'.

(imagine how surprised he'll be if the test is on Friday!)

Comment author: Amanojack 08 April 2010 05:32:51PM 0 points [-]

Since the student believes a surprise test is impossible, it seems this wouldn't surprise him.

Comment author: Rain 08 April 2010 04:18:15PM 1 point [-]

Why not give a test on Monday, and then give another test later that day? I bet they would be surprised by a second test on the same day.

Comment author: Amanojack 08 April 2010 05:24:00PM 0 points [-]

True, there's nothing saying there won't be two tests.

Rather than solve this, I was hoping people'd take a look at the linked explanation. When phrased more carefully, it becomes a whole bunch of nested paradoxes, the resolution of which contains valuable lessons on how words can trick people. It covers some LW material along the way, such as Moore's Paradox.

Comment author: Rain 08 April 2010 07:48:28PM 1 point [-]

But if there's a solution, it's not really a paradox.

And I don't like word arguments.

Comment author: Amanojack 08 April 2010 08:28:15PM *  0 points [-]

Words frequently confuse people into believing something they wouldn't otherwise. You may be correct that this confusion can always be addressed indirectly, but in any case it needs to be addressed. Addressing semantic confusion requires identifying it, and I found this riddle (actually the whole article) a great neutral exercise for that purpose.

EDIT: Looking back, I should probably have just posted the riddle and kept quiet. Updated for next time.

Comment author: Sniffnoy 08 April 2010 08:21:58PM 3 points [-]

Ugh, yes. Why are we speaking of "paradoxes" at all? Anything that actually occurs is not a paradox. If something appears to be a paradox, either you have reasoned incorrectly, you've made untenable assumptions, or you've just been using fuzzy thinking. This is a problem; presumably it has some solution. Describing it as a "paradox" and asking people not to solve it is not helpful. You don't understand it better that way; you understand it by solving it. The only thing gained that way is an understanding of why it appears to be a paradox, which is useful as a demonstration of the dangers of fuzzy thinking, but also kind of obvious.

Maybe I'm being overly strict about the word "paradox" here, but I really just don't see the term as at all helpful. If you're using it in the strict sense, paradoxes shouldn't occur except as an indicator that you've done something wrong (in which case you probably wouldn't use the word "paradox" to describe it in the first place). If you're using it in the loose sense, it's misleading and unhelpful (I prefer to explicitly say "apparent paradox").

Comment author: Amanojack 08 April 2010 09:03:17PM 0 points [-]

We're all saying the exact same thing here: words are not to be treated as infallible vehicles for communicating concepts. That was the point of my original post, the point of Rain's reply, and yours as well. (You're completely right about the word "paradox.")

Also, I'm not saying not to try solving it, just that I've no intention of refuting all proposed solutions. I didn't want my reply to be construed as a debate about the solution, because that would never end.

Comment author: wedrifid 07 April 2010 10:15:32PM *  -1 points [-]

not as an exercise for solving

...and yet...

Can the teacher fulfill his announcement?

Probably.

p(teacher provides a surprise test) = 1 - x^3
Where:
x = 'improbability required for an event to be surprising'

If a 50% chance of having a test on a given day would leave a student surprised, the teacher can be 87.5% confident of being able to fulfil his assertion.

However, if the teacher were a causal decision agent, then he would not be able to provide a surprise test without making the randomization process public (or a similar precommitment).
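One reading of the model above (my assumption, since the comment doesn't spell it out: the teacher independently gives the test on each class day with probability 1 - x, so any test that does occur is never more than (1 - x)-predictable) can be checked in a few lines; the function name is mine:

```python
def p_surprise_test(x: float, days: int = 3) -> float:
    """Probability the teacher provides at least one surprise test,
    where each of `days` class days independently hosts a test with
    probability 1 - x; the teacher fails only if every day misses."""
    return 1 - x ** days

# x = 0.5: a 50% chance of a test that day still leaves the student
# surprised, and the teacher succeeds with probability 1 - 0.5^3.
print(p_surprise_test(0.5))  # 0.875
```

Under this reading even a Friday test stays surprising, because the student never knows whether any test is coming at all.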

Comment author: Amanojack 08 April 2010 12:23:28AM 1 point [-]

The problem with choosing a day at random is, what if it turns out to be Friday? Friday would not be a surprise, since the test will be either Monday, Wednesday or Friday, and so by Thursday the students would know by process of elimination that it had to be Friday.

Comment author: RobinZ 07 April 2010 10:30:12PM 0 points [-]

How do you get that result while requiring that the test occur next week? It is that assumption that drives the 'paradox'.

Comment author: wedrifid 07 April 2010 10:51:19PM -1 points [-]

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

Comment author: RobinZ 07 April 2010 11:44:20PM *  1 point [-]

You misunderstand me - I maintain that an obvious unstated condition in the announcement is that there will be a test next week. Under this condition, the student will be surprised by a Wednesday test but not a Friday test, and therefore

p(teacher provides a surprise test) = 1 - x^2

and, if I guess your algorithm correctly,

p(teacher provides a surprise lack of test) = x^2 * (1 - x)

[edit: algebra corrected]
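Assuming the same coin-flip model plus the unstated condition that a test must happen (so a test forced onto Friday is never a surprise), the corrected figure is easy to verify; the function name is mine:

```python
def p_surprise_test_forced(x: float) -> float:
    """Probability of a surprising test when a test is guaranteed:
    surprise on Monday (1 - x), or on Wednesday (x * (1 - x));
    the remaining x**2 branch forces a predictable Friday test."""
    return (1 - x) + x * (1 - x)  # simplifies to 1 - x**2

x = 0.5
print(p_surprise_test_forced(x))  # 0.75
print(1 - x ** 2)                 # 0.75, matching 1 - x^2 above
```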

Comment author: wedrifid 08 April 2010 01:24:49AM *  1 point [-]

I maintain that an obvious unstated condition in the announcement is that there will be a test next week.

The condition is that there will be a surprise test. If the teacher were to split 'surprise test' into two and consider max(p(surprise | p(test) == 100)) then yes, he would find he is somewhat less likely to be making a correct claim.

You misunderstand me

I maintain my previous statement (and math):

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

Something that irritates me with regard to philosophy as it is often practiced is that there is an emphasis on maintaining awe at how deep and counterintuitive a question is, rather than extracting possible understanding from it, dissolving the confusion, and moving on.

Yes, this question demonstrates how absolute certainty in one thing can preclude uncertainty in some others. Wow. It also demonstrates that one can make self-defeating prophecies. Kinda interesting. But don't let that stop you from giving the best answer to the question. Given that the teacher has made the prediction, and given that he is trying to fulfill his announcement, there is a distinct probability that he will be successful. Quit saying 'wow', do the math, and choose which odds you'll bet on!

Comment author: RobinZ 08 April 2010 02:51:11AM 0 points [-]

I never intended to dispute that

The answer to the question 'Can the teacher fulfill his announcement?' is 'Probably'. The answer to the question 'Is there a 100% chance that the teacher fulfills his announcement?' is 'No'.

only the specific figure 87.5%.

It's a minor point. Your logic is good.

Comment author: beriukay 05 April 2010 02:38:45PM 5 points [-]

Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.

Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I've got concerns that I have anchored at a point closer to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about difference in social tier. Early in my college time, this sort of problem led me to reject the death penalty on practical grounds. Then, as I lost my religious views, I stopped seeing it as a punishment at all. I started to see it as the same basic thing as putting down an aggressive dog. After all, dead people have a pretty encouraging recidivism rate.

I began to wonder if I could reject the death penalty on principle. A large swath of America believes that the words of the Declaration of Independence are as pertinent to our country as the Constitution. This would mean that we could disallow execution because it conflicts with our "inalienable" right to life. But then, I can't justify using the same argument as the people who try to prove that America is a Christian nation. As an interesting corollary, it seems that anyone citing the Declaration in this manner will have a very hard time also supporting the death penalty for this reason.

So basically, I think I would find the death penalty morally acceptable, but only in the hypothetical realm of virtual certainty that the inmate is guilty of a heinous crime. And I have no bound for what that virtual certainty is. Certainly a 5% chance of being falsely accused is too high. I wouldn't kill one innocent man to rid the world of 19 bad ones. But then, I would kill an innocent person to stop a billion headaches (an example I just read in Steven Landsburg's The Big Questions), so I obviously don't demand 100% certainty.

It seems like I might be asking: "What are the chances that someone was falsely accused, given that they were accused of an execution-worthy crime?" And a follow-up "What is an acceptable chance for killing an innocent person?"

Can Bayes help here? I am eager to hear some actual opinions on this matter. So far I've come up with precious little when talking to friends and family.

Comment author: Unnamed 06 April 2010 05:41:08AM *  9 points [-]

My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?

Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit stronger, but the evidence is unclear on that. Provides closure to the victim's family? Execution seems like more definitive closure, but they have to wait until years after sentencing to get it.

The criminal justice system is a big important topic, and I think it's too bad that this little piece of it (capital punishment) soaks up so much of our attention to it. Overall, my stance on capital punishment is ambivalent, leaning against it because it's not worth the trouble, though in some cases (like McVeigh) it's nice to have around and I could be swayed by a big deterrent effect. I'd prefer for more of the focus to be on this sort of thing (pdf).

Comment author: Kevin 06 April 2010 05:46:37AM *  2 points [-]

Good post. I have never seen strong evidence that the death penalty has a meaningful deterrent effect but I'd be curious to see links one way or the other.

I lean towards prison abolition, but it's an idealistic notion, not a pragmatic one. I suppose we could start by getting rid of prisons for non-violent crimes and properly funding mental hospitals. http://en.wikipedia.org/wiki/Prison_abolition_movement I can't see that happening when we can't even decriminalize marijuana.

Comment author: Kevin 06 April 2010 05:31:21AM *  2 points [-]

There is strong Bayesian evidence that the USA has executed one innocent man. http://en.wikipedia.org/wiki/Cameron_Todd_Willingham By that I mean that an Amanda Knox-test-style analysis would clearly show that Willingham is innocent, probably with greater certainty than when the Amanda Knox case was analyzed. Does knowing that the USA has indeed provably executed an innocent person change your opinion?

What are the practical advantages of death over life in prison? US law allows for true life without parole. Life in an isolated cell in a Supermax prison is continual torture -- it is not a light punishment by any means. Without a single advantage given for the death penalty over life in prison without parole, I think that ~100% certainty is needed for execution.

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

Comment author: beriukay 06 April 2010 11:12:06AM 1 point [-]

Kevin, thank you for the specific example. It definitely strengthened my practical objection to the practice. I strongly suspect that the current number of false positives lies outside of my acceptance zone.

Rain, I agree that politics is a mind-killer, but thought it worthy of at least brushing the cobwebs off some cached thoughts. Good point about Nitrogen. I wonder why we choose gruesome methods when even CO would be cheap, easy and effective.

Morendil, I appreciate the other questions. You have a good point that if Omega were brought in on the justice system, it would definitely find better corrective measures than the kill command. I think Eliezer once talked about how predicting your possible future decisions is basically the same as deciding. In that case, I already changed many things on this Big Question, and am just finally doing what I predicted I might do last time I gave any thought to capital punishment. Which happened to be at the conclusion (if there is such a thing) of a murder trial where my friend was a victim. Lots of bias to overcome there, methinks.

Unnamed, interesting points. I hadn't actually considered how similar life imprisonment is to execution, with regard to the pertinent facts. I was recently introduced to the concept of restorative justice which I think encompasses your article. I find it particularly appealing because it deals with what works, instead of worthless Calvinist ideals like punishment. From my understanding, execution only fulfills punishment in the most trivial of senses.

Comment author: wedrifid 06 April 2010 07:13:39AM 1 point [-]

I am against the death penalty for regular murder and mass murder and aggravated rape. I am indifferent with regards to the death penalty for crimes against humanity as I recognize that symbolic execution could be appropriate for grave enough crimes.

"Crimes against humanity" is one of the crimes that for most practical purposes means "... and lost".

Comment author: Kevin 06 April 2010 07:51:11AM *  0 points [-]

Yup. Even though they'll never face charges, some of the winners are guilty as sin. And though the Project for the New American Century was on the winning side of the war, their namesake mission has failed horribly.

Comment author: Amanojack 05 April 2010 08:20:12PM 0 points [-]

Political questions like this are far removed from the kind of analysis you seem to want to apply. If it's you taking out a killer yourself that's one thing, but the question of whether to support it as a law is something entirely different. This rabbit hole goes very far indeed. Anyway, why would you care about the Constitution - you're not one of the signers, are you? ;-)

Comment author: Rain 06 April 2010 12:11:17PM *  1 point [-]

Anyway, why would you care about the Constitution - you're not one of the signers, are you? ;-)

I swore an oath to support and defend the Constitution as a condition of employment, so at the very least I have to signal caring about it. I doubt beriukay is in the same position, though.

Comment author: rortian 08 April 2010 01:51:40AM 0 points [-]

Do you really take that sort of thing seriously? Far out if you do, but I have trouble with the concept of an 'oath'.

Comment author: Rain 08 April 2010 02:03:31AM *  3 points [-]

Oaths in general can be a form of precommitment and a weak signal that someone subscribes to certain moral or legal values, though no one seemed to take it seriously in this instance. On my first day, it was just another piece of paper in with all the other forms they wanted me to sign, and they took it away right after a perfunctory reading. I had to search it out online to remember just what it was I had sworn to do. Later, I learned some people didn't even remember they had taken it.

Personally, I consider it very important to know the rules, laws, commitments, etc., for which I may be responsible, so when I or someone else breaks them, I can clearly note it.

For example, in middle school, one of my teachers didn't like me whispering to the person sitting next to me in class. When she asked what I was doing, I told her that I was explaining the lesson, since she did a poor job of it. She asked me if I would like to be suspended for disrespect; I made sure to let her know that the form did not have 'disrespect' as a reason for suspension, only detention.

Comment author: rortian 09 April 2010 01:24:58AM 0 points [-]

Personally, I consider it very important to know the rules, laws, commitments, etc., for which I may be responsible, so when I or someone else breaks them, I can clearly note it.

Far out. That is important.

As for your story, it's something I would have done but I hope you understand that a little tact could have gone a long way.

You seem to already think what I was trying to get at. You think you are sending a 'weak signal' that you are committed to something. But you are using words that I think many around here would be suspicious of (e.g. 'oath' and 'sworn').

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no, really, I'm serious'?

Comment author: Rain 09 April 2010 01:36:11AM 0 points [-]

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no, really, I'm serious'?

Perhaps through enforcement. There are a significant number of laws, regulations, and directives that cover US Federal employees, and the oath I linked to above is a signed and sworn statement indicating the fact that I am aware of and accept responsibility for them.

Comment author: wedrifid 08 April 2010 02:10:44AM 0 points [-]

She asked me if I would like to be suspended for disrespect; I made sure to let her know that the form did not have 'disrespect' as a reason for suspension, only detention.

You prefer more time locked up in school than less?

Comment author: Rain 08 April 2010 02:58:35AM 0 points [-]

No. <explanation redacted>

Comment author: wedrifid 08 April 2010 06:44:53AM 1 point [-]

My explanation: It is ironic that 'more time at school after it finishes' is used as a punishment and yet 'days off school' is considered a worse punishment.

Given the chance I would go back in time and explain to my younger self that just because something is presented as a punishment or a 'worse punishment' doesn't mean you have to prefer to avoid it. Further, I would explain that getting what he wants does not always require following the rules presented to him. He can make his own rules and choose among preferred consequences.

While I never actually got either a detention or a suspension, I would have to say I'd prefer the suspension.

Comment author: rortian 09 April 2010 01:27:08AM 0 points [-]

In theory, but I wonder how long it has been since you were in school. In GA they got around to making a rule that if you were suspended you would lose your driver's license. Also, suspensions typically imply a 0 on all assignments (and possibly tests) that were due during the suspension.

Comment author: wedrifid 09 April 2010 01:31:30AM *  0 points [-]

In theory but I wonder how long it has been since you were in school.

As a teacher or a student? 4 years and <undisclosed> respectively.

Comment author: mattnewport 08 April 2010 01:54:14AM 1 point [-]

but I have trouble with the concept of an 'oath'.

How so?

Comment author: rortian 09 April 2010 01:30:19AM 0 points [-]

Yeah I like Kevin's short answer. But in general I said to Rain:

You can say you will do something. If someone doesn't trust that assertion, how will they ever trust 'no, really, I'm serious'?

When you make something a contract, you see, there are some legal teeth; but swearing to uphold the Constitution feels silly.

Comment author: mattnewport 09 April 2010 01:46:04AM 2 points [-]

Well, obviously the idea of an oath only has value if it is credible; that is why there are often strong cultural taboos against oath-breaking. In times past there were often harsh punishments for oath-breaking to provide additional enforcement, but it is true that in the modern world much of the function of oaths has been transferred to the legal system. Traditionally, however, one of the things that defined a profession was the expectation that its members held themselves to a standard above and beyond the minimum enforced by law. Professional oaths are part of that tradition, as is the idea of an oath sworn by civil servants and other government employees. This general concept is not unique to the US or to government workers.

Comment author: Kevin 08 April 2010 03:13:57AM 1 point [-]

An oath is an appeal to a sacred witness, typically the God of Abraham. An affirmation is the secular version of an oath in the American legal system.

Comment author: mattnewport 08 April 2010 07:33:09AM 0 points [-]

Hailing from secular Britain I wasn't aware of the distinction. Affirmation actually sounds more religious to me. I'd never particularly associated the idea of an oath with religion but I can see how such an association could sour one on the word 'oath'.

Comment author: Mass_Driver 06 April 2010 04:32:26AM 6 points [-]

I care about the Constitution for a couple of reasons beyond the narrowly patriotic:

(1) For the framers, its design posed a problem very similar to the design of Friendly AI. The newly independent British colonies were in a unique situation. On the one hand, whatever sort of nation they designed was likely to become quite powerful; it had good access to very large quantities of people, natural resources, and ideas, and the general culture of empiricism and liberty meant that the nation behaved as if it were much more intelligent than most of its competitors. On the other hand, the design they chose for the government that would steer that nation was likely to be quite permanent; it is one thing to change your system of government as you are breaking away from a distant and unpopular metropole, and another to change your government once that government is locally rooted and supported. The latter takes a lot more blood, and carries a much higher risk of simply descending into medium-term anarchy. Finally, the Founders knew that they could not see every possible obstacle that the young and unusual nation would encounter, and so they would have to create a system that could learn based on input from its environment without further input from its designers. So just as we have to figure out how to design a system that will usefully manage vast resources and intelligence in situations we cannot fully predict and with directions that, once issued, cannot be edited or recalled, so too did the Founding Fathers, and we should try to learn from their failures and successes.

(2) The Constitution has come to embody, however imperfectly, some of the core tenets of Bayesianism. I quote Justice Oliver Wendell Holmes:

Persecution for the expression of opinions seems to me perfectly logical. If you have no doubt of your premises or your power and want a certain result with all your heart you naturally express your wishes in law and sweep away all opposition...But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas...that the best test of truth is the power of the thought to get itself accepted in the competition of the market, and that truth is the only ground upon which their wishes safely can be carried out. That at any rate is the theory of our Constitution.

Comment author: Amanojack 06 April 2010 02:37:58PM 1 point [-]

Re 1, if that is the case why not support the Articles of Confederation instead? I also take exception to the underlying assumption that society needs top-down designing, but that's a very deep debate.

But when men have realized that time has upset many fighting faiths, they may come to believe even more than they believe the very foundations of their own conduct that the ultimate good desired is better reached by free trade in ideas. ... That at any rate is the theory of our Constitution.

If that was really the theory - "checks and balances" - the Constitution was a huge step backward from the Articles of Confederation. (I don't support the AoC, but I'd prefer them to the Constitution.)

Comment author: Mass_Driver 06 April 2010 04:38:46PM 2 points [-]

Re 1, if that is the case why not support the Articles of Confederation instead?

I never said we should support it; I said we should care about it.

It would be silly to claim that anyone interested in FAI should be pro-Constitution; there were plenty of 18th century people who earnestly grappled with their version of the FAI problem and thought the Constitution was a bad idea. If you agree more with the anti-Federalists, fine! The point is that we should closely follow the results of the experiment, not that we should bark agreement with the particular set of hypotheses chosen by the Founding Fathers for extensive testing.

Comment author: NancyLebovitz 06 April 2010 10:21:18AM 1 point [-]

Very good point, and the founders' process for developing the constitution and bill of rights is important for thinking about how to develop a Friendly (mostly Friendly?) AI.

Comment author: Morendil 05 April 2010 05:29:54PM *  3 points [-]

The more judicious question, I am coming to realize, isn't so much "Which of these two Standard Positions should I stand firmly on?"

The more useful question is, why do the positions matter? Why is the discussion currently crystallized around these standard positions important to me, and how should I fluidly allow whatever evidence I can find to move me toward some position, which is rather unlikely (given that the debate has been so long crystallized in this particular way) to be among the standard ones. And I shouldn't necessarily expect to stay at that position forever, once I have admitted in principle that new evidence, or changes in other beliefs of mine, must commit me to a change in position on that particular issue.

In the death-penalty debate I identify more strongly with the "abolitionist" standard position because I was brought up in an abolitionist country by left-wing parents. That is, I find myself on the opposite end of the spectrum from you. And yet, perhaps we are closer than is apparent at first glance, if we are both of us committed primarily to investigating the questions of values, the questions of fact, and the questions of process that might leave either or both of us, at the end of the inquiry, in a different position than we started from.

  • Would I revise my "in principle" opposition to the death penalty if, for instance, the means of "execution" were modified to cryonic preservation? Would I then support cryonic preservation as a "punishment" for lesser crimes such as would currently result in lifetime imprisonment?

  • Would I still oppose the death penalty if we had a Truth Machine? Or if we could press Omega into service to give us a negligible probability of wrongful conviction? Or otherwise rely on a (putatively) impartial means of judgment which didn't involve fallible humans? Would that even be desirable, if it were at all possible?

  • Would I support the death penalty if I found out it was an effective deterrent, or would I oppose it only if I found that it didn't deter? Does deterrence matter? Why, or why not?

  • How does economics enter into such a decision? How much, whatever position I arrive at, should I consider myself obligated to actively try to ensure that the society I live in espouses that position? For what scope of "the society I live in" - how local or global?

Those are topics and questions I encounter in the process of thinking about things other than the death penalty; practically every important topic has repercussions on this one.

There's an old systems science saying that I think applies to rational discussions about Big Questions such as this one: "you can't change just one thing". You can't decide on just one belief, and as I have argued before, it serves no useful purpose to call an isolated belief "irrational". It seems more appropriate to examine the processes whereby we adjust networks of beliefs, how thoroughly we propagate evidence and argument among those networks.

There is currently something of a meta-debate on LW regarding how best to reflect this networked structure of adjusting our beliefs based on evidence and reasoning, with approaches such as TakeOnIt competing against more individual debate modeling tools, with LessWrong itself, not so much the blog but perhaps the community and its norms, having some potential to serve as such a process for arbitrating claims.

But all these prior discussions seem to take as a starting point that "you can't change just one belief". That's among the consequences of embracing uncertainty, I think.

Comment author: Rain 05 April 2010 05:42:45PM 1 point [-]

Yeah, that's why I try to avoid hot topics. Too much work.

Comment author: Morendil 05 April 2010 06:01:32PM 1 point [-]

Well, even relatively uncontroversial topics have the same entangled-with-your-entire-belief-network quality to them, but (to most people) less power to make you care.

The judicious response to that is to exercise some prudence in the things you choose to care about. If you care too much about things you have little power to influence and could easily be wrong about, you end up "mind-killed". If you care too little and about too few things except for basic survival, you end up living the kind of life where it makes little difference how rational you are.

The way it's worked out for me is that I've lived through some events which made me feel outraged, and for better or for worse the outrage made me care about some particular topics, and caring about these topics has made me want to be right about them. Not just to associate myself with the majority, or with a set of people I'd pre-determined to be "the right camp to be in", but to actually be right.

Comment author: Rain 05 April 2010 02:50:45PM *  3 points [-]

Standard response: politics is the mind-killer.

Personal response: I'm opposed to the death penalty because it costs more than putting them in prison for life due to the huge number of appeals they're allowed (vaguely recall hearing in newspapers / reports). I feel the US has become so risk-averse and egalitarian that it cannot properly implement a death penalty. This is reflected in the back-and-forth questions you ask.

I also oppose it on the grounds that it is often used as a tool of vengeance rather than justice. Nitrogen poisoning (I think that was the gas they were talking about) is a safe, highly reliable, and euphoric means of death, but the US still prefers electrocution (can take minutes), injection (can feel like the veins are burning from the inside out while the body is paralyzed), etc.

That said, I don't care enough about the topic to try and alter its use, whether through voting, polling, letters, etc, nor do I desire to put much thought into it. Best to let hot topics alone.

And after asking about Bayes, you should ask for math rather than opinions.

Comment author: beriukay 06 April 2010 09:33:01AM 0 points [-]

Yeah, my formatting of the last few sentences wasn't very great. Sorry.

Comment author: gaffa 05 April 2010 01:51:43PM 1 point [-]

Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe. Like, what kind of things follow normal distributions and why, why do power laws emerge everywhere, why scale-free networks all over the place, etc. etc.

Comment author: DanielVarga 08 April 2010 10:05:48PM *  9 points [-]

Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.

For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:

Gauss Is Not Mocked

So You Think You Have a Power Law — Well Isn't That Special?

Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line

Power-law distributions in empirical data

Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169

Another very relevant and readable paper:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
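The core fitting point those papers make can be sketched in a few lines. This is a hypothetical illustration, not from the thread: the standard maximum-likelihood estimator for the exponent of a continuous power law p(x) ∝ x^(−α) for x ≥ x_min (the one advocated in the Shalizi et al. paper linked above, in place of eyeballing a straight line on a log-log plot) is α̂ = 1 + n / Σ ln(x_i / x_min).

```python
import math
import random

def powerlaw_sample(alpha, xmin, n, rng):
    """Draw n samples from a continuous power law via inverse-transform
    sampling: x = xmin * (1 - u)^(-1/(alpha - 1)) for uniform u."""
    return [xmin * (1 - rng.random()) ** (-1.0 / (alpha - 1)) for _ in range(n)]

def alpha_mle(xs, xmin):
    """Maximum-likelihood estimate of the power-law exponent."""
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

rng = random.Random(0)
xs = powerlaw_sample(alpha=2.5, xmin=1.0, n=100_000, rng=rng)
print(alpha_mle(xs, xmin=1.0))  # close to the true exponent 2.5
```

The point of the debunking papers is that a least-squares fit to a log-log histogram has neither this estimator's consistency nor any principled goodness-of-fit test attached, which is how so many non-power-laws got published as power laws.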

Comment author: RobinZ 08 April 2010 10:41:18PM 4 points [-]

That gives a whole new meaning to Mar's Law.

Comment author: DanielVarga 08 April 2010 11:33:01PM *  2 points [-]

Thank you; I never knew this fallacy had its own name, and it has annoyed me for ages. Actually, since 2003, when I was working on one of the first online social network services (iwiw.hu). The structure of the network was contradicting most of the claims made by the then-famous popular science books on networks. Not scale-free (not even truncated power-law), not attack-sensitive, most of the edges were strong links. Looking at the claims of the original papers instead of the popular science books, the situation was not much better.

Comment author: Cyan 05 April 2010 05:36:01PM 1 point [-]

You could try "Ubiquity" by Mark Buchanan for the power law stuff, but it's been a while since I read it, so I can't vouch for it completely. (Confusingly, Amazon lists three books with that title and different subtitles, all by that author, all published around 2001-2002.)

Comment author: CronoDAS 05 April 2010 02:10:14AM *  3 points [-]

My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?

ETA: Their father is non-religious. I don't know why he's putting up with this.

Comment author: wedrifid 07 April 2010 10:21:12PM *  1 point [-]

Introduce them to really cool, socially near, atheists. In particular, provide contact with attractive opposite-gender children who are a couple of years older and are atheists.

Comment author: Amanojack 06 April 2010 04:25:59PM *  1 point [-]

Possibly introducing them to some of the content in A Human's Guide to Words, such as dissolving the question, would lead them to theological noncognitivism. The nice thing about that as opposed to direct atheism is it's more "insidious" because instead of saying, "I don't believe" the kids would end up making more subtle points, like, "What do you even mean by omnipotent?" This somehow seems a lot less alarming to people, so it might bother the parents much less, or even seem like "innocent" questioning.

Comment author: Unnamed 06 April 2010 06:03:55AM 5 points [-]

I wouldn't proselytize too directly - you want to stay on their (and their mother's) good side, and I doubt it would be very effective anyways. You're better off trying to instill good values - open-mindedness, curiosity, ability to think for oneself, and other elements of rationality & morality - rather than focusing on religion directly. Just knowing an atheist (you) and being on good terms with him could help lead them to consider atheism down the road at some point, which is another reason why it's important to maintain a good relationship. Think about the parallel case of religious relatives who interfere with parents who are raising their kids non-religiously - there are a lot of similarities between their situation and yours (even though you really are right and they just think they are) and you could run into a lot of the same problems that they do.

I haven't had the chance to try it out personally, but Dale McGowan's blog seems useful for this sort of thing, and his books might be even more useful.

Comment author: sketerpot 07 April 2010 08:38:10PM 2 points [-]

I think that's some very good advice, and I'd like to elaborate a bit. The thing that made me ditch my religion was the fact that I already had a secular, socially liberal, science-friendly worldview, and it clashed with everything they said in church. That conflict drove my de-conversion, and made it easier for me to adjust to atheism. (I was even used to the idea, from most of my favorite authors mentioning that they weren't religious. Harry Harrison, in particular, had explicitly atheistic characters as soon as his publishers would let him.)

So, yeah, subtlety is your friend here.

Comment author: NancyLebovitz 05 April 2010 10:46:49PM 2 points [-]

I'm not speaking from experience here, but that doesn't stop me from having opinions.

I don't believe this is an emergency. Are the kids' lives being affected negatively by the religion? What do they think of what they're being taught?

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

Their minds aren't a battlefield between you and religious school-- what they believe is, well not exactly their choice because people aren't very good at choosing, but more their choice than yours.

I recommend teaching them a little thoughtful cynicism, with advertisements as the subject matter.

Comment author: CronoDAS 06 April 2010 02:03:15PM *  1 point [-]

Actually, this could be an emergency if they're being taught about Hell. Are they? Is it haunting them?

I haven't seen any evidence that they're being bothered by anything.

Mostly, I just want to make it clear that, unlike a lot of other things they're learning in school, there are a lot of people who have good reasons to think the stories aren't true - to make it clear that there's a difference between "Moses led the Jews out of Egypt" and "George Washington was the first President of the United States."

Comment author: LucasSloan 05 April 2010 10:10:28PM *  0 points [-]

Speaking as someone who is seeing that sort of thing happening on the inside, I'm really not sure how you should deal with it. Even teaching traditional rationality doesn't help if religion is wrapped up in their social identity. I myself was lucky, in that I never did believe in god. I almost believe that the reason I came through sane was my IQ, although I'm sure that cannot be entirely correct. Getting them to socialize with other children who don't believe in god, or if that's not possible, children who believe in very different gods might help. I would also suggest you introduce them to fiction with strong rationality memes - Eliezer's Harry Potter fanfic [edited, see below] is the kind of thing that might appeal to children, although it has too much adult material.

Comment author: Eliezer_Yudkowsky 05 April 2010 10:59:01PM 2 points [-]

Um... Chapter 7 is not the child-friendliest chapter in the world. Teen-friendly, maybe. Not child-friendly.

Comment author: LucasSloan 06 April 2010 12:16:24AM *  0 points [-]

Ah, yes. Totally slipped my mind. Part of the problem might be that I was reading that kind of material by age 10 so I'm a bit desensitized. However, I continue to think that the overall package is generally appealing to children. Perhaps delivery of a hard copy that has been judiciously edited might work.

Comment author: gwern 07 April 2010 09:36:13PM *  0 points [-]

Part of the problem might be that I was reading that kind of material by age 10 so I'm a bit desensitized.

True story: when I was 8 or so, I loved Piers Anthony's Xanth books. So much that I went and read all of his other books.

Comment author: Alicorn 07 April 2010 10:20:18PM 0 points [-]

Even Xanth isn't harmless throughout.

Comment author: gwern 07 April 2010 10:25:11PM *  0 points [-]

Xanth's dark places are a heck of a lot more kid-friendly than, say, Bio of a Space Tyrant.

Comment author: Alicorn 07 April 2010 10:29:54PM 1 point [-]

Of course. But I can't think of a single Piers Anthony item that I'd actually recommend to a child. Or, for that matter, to an adult, but that's because Anthony's work sucks, not because it's inappropriate.

Comment author: CronoDAS 16 April 2010 08:43:39PM *  0 points [-]

Having read quite a bit of Piers Anthony's work, I noticed that it got consistently worse as he got older. I still think A Spell for Chameleon was pretty good (and so was Tarot, if you don't mind the deliberate squick-inducing scenes), but anything he wrote after, say, 1986 is probably best avoided - everything had a tendency to turn into either pure fluff or softcore pornography.

Comment author: Alicorn 16 April 2010 09:16:12PM 1 point [-]

The entire concept of Chameleon is nasty. Her backstory sets up all of the men from her village as being thrilled to take advantage of "Wynne" and universally unwilling to give "Fanchon" the time of day, while about half of them like "Dee". (Anthony is notable for being outrageously sexist towards both genders at once.) Her lifelong ambition is to sit halfway between the two extremes permanently, sacrificing the chance to ever have her above-average intellect because she wants male approval and it's conditional on being pretty (while she recognizes that being as stupid as she sometimes gets is a hazard). Bink is basically presented as a saint for putting up with the fact that she's sometimes ugly for the sake of getting "variety". It's implied that in her smart phase he values her as a conversation partner but actually touching her then would be out of the question. I haven't read the book in years, but I don't remember Chameleon having any complaints about the dubious sort of acceptance Bink offers; she just loves him because he's the protagonist and love means never having to say you want any accommodations whatsoever from your partner, apparently.

Comment author: NancyLebovitz 08 April 2010 03:07:54AM 0 points [-]

I still have some fondness for Macroscope. The gender stuff is creepy, but the depiction of an interstellar information gift culture seemed very cool at the time. I should reread it and see how it compares to how the net has developed.

Comment author: Cyan 08 April 2010 12:31:50AM 1 point [-]

I'd classify his... preoccupation... with young teenage girls paired with much older men as "inappropriate".

Comment author: CronoDAS 16 April 2010 09:00:18PM *  3 points [-]

This is one of those "stupid questions" to which the answer seems obvious to everyone but me:

What's wrong with a 16-year-old and a 30-year-old having sex?

Comment author: wedrifid 08 April 2010 02:05:15AM *  2 points [-]

Most of my aversion to that theme is (just?) cultural preference. I cannot tell whether I would object to the practice in another culture without more information about, for example, any physical or emotional trauma involved, reproductive implications, degree of physical maturity and the opportunity for the girls to self-determine their own lives. I would then have to compare the practice with 'forced schooling' from our culture to decide which is more disgusting.

Comment author: Alicorn 08 April 2010 01:24:29AM 2 points [-]

Right. And I would consider that inappropriateness sufficient to refrain from recommending the books to a child. The fact that they also suck is necessary to extend that lack of recommendation to adults. Sorry if it was unclear.

Comment author: [deleted] 05 April 2010 08:15:58PM 1 point [-]

Teach them the basics of Bayesian reasoning without any connection to religion. This will help them in more ways and will lay the foundation for later, when they naturally start questioning religion. Also, their parents won't have anything against it if you merely introduce it as a method for physics or chemistry, or with the standard medical examples.

Comment author: RobinZ 05 April 2010 11:28:23AM 3 points [-]

Dangerous situation!

How do the parents feel about science and science fiction? I believe that stuff has good effects.

Comment author: Kevin 05 April 2010 02:34:43AM 3 points [-]

One thing to do is make sure the kids understand that the Bible is just a bunch of stories. My mom teaches Reform Jewish Sunday school and makes this clear to her students. I make fun of her for cranking out little atheists.

Teaching that the Bible is a bunch of stories written by multiple humans over time is not nearly as offensive as preaching atheism. Start there. This bit of knowledge should be enough to get your young relatives thinking about religion, if they want to start thinking about it.

Comment author: Kevin 04 April 2010 10:01:01PM 4 points [-]

US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin

Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?

Comment author: Amanojack 05 April 2010 11:27:34AM 2 points [-]

The causes of "too big to fail" are:

  1. Corporate personhood laws makes it harder to punish the actual people in charge.

  2. Problems in tort law (in the US) make it difficult to sue corporations for certain kinds of damages.

  3. A large government (territorial monopoly of jurisdiction) makes it more profitable for any sufficiently large company to use the state as a bludgeon against its competitors (lobbying, bribes, friends in high places) instead of competing directly on the market.

  4. Letting companies that waste resources go bankrupt causes short-term damage to the economy, but it is healthy in the long term because it allows more efficient companies to take over the tied-up talent and resources. Politicians care more about the short term than the long term.

  5. For pharmaceutical companies there is an additional embiggening factor. Testing for FDA drug approval costs millions of dollars, which constitutes a huge barrier to entry for smaller companies. Hence the large companies can grow larger with little competition. This is amplified by 1 and 2, and 3 suggests that most of the competition among Big Pharma is over legislators and regulators, not market competition.

Disclosure: I am a "common law" libertarian (I find all monopolies counterproductive, including state governments).

Comment author: NancyLebovitz 05 April 2010 01:39:48PM 3 points [-]

I'd add trauma from the Great Depression (amplified by the Great Recession) which means that any loss of jobs sounds very bad, and (not related to the topic but a corollary) anything which creates jobs can be made to sound good.

Comment author: Zubon 04 April 2010 05:31:09PM 17 points [-]

Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.

Comment author: Emile 08 April 2010 03:38:44PM 1 point [-]

Quite depressing. Makes me even less likely to have my kids educated in the states. I wonder how bad Europe is on that count? Is it really better here? It can be hard to tell from inside; correcting for the fact that most info I get is biased one way or the other leaves me with pretty wide confidence intervals.

Comment author: timtyler 04 April 2010 07:18:29PM 0 points [-]

22/7 gives "something like" something like 3.1427 ?!? Surely it is more like some other things than that!

Comment author: RobinZ 04 April 2010 11:03:14PM *  1 point [-]

Well, yes - it's more like 3.142857 recurring. But that's fairly minor.

(Footnote: I originally thought the teachers had performed the division incorrectly, rather than that the anonymous commenter had incorrectly recounted the number, so this comment was briefly incorrect.)

Comment author: Tyrrell_McAllister 04 April 2010 06:57:24PM *  1 point [-]

It would have been even more frustrating had the protagonist not also been guessing the teacher's password. It seemed that the protagonist just had a better memory of what more authoritative teachers had said.

The protagonist was closer to being able to derive π himself, but that played no part in his argument.

Comment author: JGWeissman 04 April 2010 07:06:09PM 5 points [-]

There's no evidence that the protagonist didn't just have a better memory of what more authoritative teachers had said.

The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio.

The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7.

The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.
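The arithmetic behind that update is easy to check directly; here's a quick sketch (values computed with Python's `math.pi`, not taken from the anecdote):

```python
from math import pi

approx = 22 / 7
print(f"{approx:.6f}")   # 3.142857 -- and the "142857" block repeats forever
print(f"{pi:.6f}")       # 3.141593

# The two agree only to two decimal places, so "22/7 is good to 5 digits" fails:
print(abs(approx - pi))  # roughly 0.00126
```

Since 22/7 already diverges from π in the third decimal, no amount of memorized passwords can make the two match to five digits.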

Comment author: Tyrrell_McAllister 05 April 2010 12:30:48PM -1 points [-]

The protagonist knew that pi is defined as the ratio of a circle's circumference and diameter, and the numbers that people have memorized came from calculating that ratio.

The protagonist knew that pi is irrational, that irrational means it cannot be expressed as a ratio of integers, and that 7 and 22 are integers, and that therefore pi cannot be exactly expressed as 22/7.

These are important pieces of knowledge, and they are why I said that the protagonist was closer to being able to derive π himself.

The protagonist was willing to entertain the theory that 22/7 is a good enough approximation of pi to 5 digits, but updated when he saw that the result came out wrong.

The result only came out wrong relative to his own memorized teacher-password. Except for his memory of what the first five digits of π really were, he gave no argument that they weren't the same as the first five digits of 22/7.

Comment author: RobinZ 05 April 2010 02:23:08PM 5 points [-]

Y'know, there's something this blogger I read once wrote that seems kinda applicable here:

I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

Comment author: Tyrrell_McAllister 05 April 2010 05:32:34PM *  1 point [-]

Y'know, there's something this blogger I read once wrote that seems kinda applicable here:

I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

I did not criticize the protagonist. He acted entirely appropriately in his situation. Trying to derive digits of π (by using Archimedes's method, say) would not have been an effective way to convince his teammates under those circumstances. In some cases, such as a timed exam, going with an accurately-memorized teacher-password is the best thing to do. [ETA: Furthermore, his and our frustration at his teammates was justified.]

But the fact remains that the story was one of conflicting teacher-passwords, not of deep knowledge vs. a teacher-password. Although the protagonist possessed deeper knowledge, and although he might have been able to reconstruct Archimedes's method, he did not in fact use his deeper knowledge in the argument to make 3.1415 more probable than the first five digits of 22/7.

Again, I'm not saying that he should have had to do that. But it would have made for a better anti-teacher-password story.

Comment author: RobinZ 05 April 2010 09:06:35PM 4 points [-]

I see what you mean. I think the confusion we've had on this thread is over the loaded term "teacher's password" - yes, the question only asked for the password, but it would be less misleading to say that both the narrator and the schoolteachers had memorized the results, but the narrator did a better job of comprehending the reference material.

Comment author: Eliezer_Yudkowsky 04 April 2010 06:52:28PM 3 points [-]

AAAAAIIIIIIIIEEEEEEEE

BOOM

Comment author: Alicorn 04 April 2010 06:59:49PM 6 points [-]

Clearly, your math teacher biting powers are called for.

Comment author: CronoDAS 04 April 2010 09:24:36PM 2 points [-]

In first grade, I threw a crayon at the principal. Can I help? ;)

Comment author: JGWeissman 04 April 2010 07:08:29PM 2 points [-]

Let's not get too hasty. They still might know logarithms. ;)

Comment author: Mass_Driver 04 April 2010 06:26:18AM *  2 points [-]

Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.

I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my entire day around motivating an early bedtime, that often works, but at an unacceptably high cost; the point of going to bed early is to have more surplus time/energy, not to spend all of my time/energy on going to bed.

I am happy to test various hypotheses, but don't have a good sense of which hypotheses to promote or how to generate plausible hypotheses in this context.

Comment author: Nick_Tarleton 04 April 2010 06:26:51PM *  2 points [-]

Melatonin. Also, getting my housemates to harass me if I don't go to bed.

Comment author: gwern 07 April 2010 09:30:34PM 1 point [-]

Mass_Driver's comment is kind of funny to me, since I had addressed exactly his issue at length in my article.

Comment author: Mass_Driver 08 April 2010 03:25:39PM *  1 point [-]

Which, I couldn't help but notice, you have thoughtfully linked to in your comment. I'm new here; I haven't found that article yet.

Comment author: gwern 08 April 2010 04:38:49PM *  3 points [-]

If you're not being sarcastic, you're welcome.

If you're being sarcastic, my article is linked, in Nick_Tarleton's very first sentence; it would be odd for me to simply say 'my article' unless some referent had been defined in the previous two comments, and there is only one hyperlink in those two comments.

Comment author: Mass_Driver 08 April 2010 07:14:46PM 0 points [-]

Gwern, I apologize for the sarcasm; it wasn't called for. As I said, I'm new here, and I guess I'm not clicking "show more above" as much as I should.

However, a link still would have been helpful. As someone who had never read your article, I had no way of knowing that a link to "Melatonin" contained an extensive discussion about willpower and procrastination. It looked to me like a biological solution, i.e., a solution that was ignoring my real concerns, so I ignored it.

Having now read your article, I agree that taking a drug that predictably made you very tired in about half an hour could be one good option for fighting the urge to stay up for no reason, and I also think that the health risks of taking melatonin long-term -- especially at times when I'm already tired -- could be significant. I may give it a try if other strategies fail.

Comment author: gwern 08 April 2010 09:38:38PM 1 point [-]

I also think that the health risks of taking melatonin long-term

I strongly disagree, but I also dislike plowing through as enormous a literature as that on melatonin and effectively conducting a meta-study, since Wikipedia already covers the topic and I wouldn't get a top-level article out of such an effort, just some edits for the article (and old articles get few hits, comments, or votes, if my comments are anything to go by).

Comment author: Amanojack 04 April 2010 05:53:31PM *  1 point [-]

I've been struggling with this for years, and the only thing I've found that works when nothing else does is hard exercise. The other two things that I've found help the most:

  • Let the sun hit your eyelids first thing in the morning (to halt melatonin production)
  • F.lux, a program that auto-adjusts your monitor's light levels (and keep your room lights low at night; otherwise melatonin production will be delayed)

EDIT: Apparently keeping your room lights at a low color temperature (incandescent/halogen instead of fluorescent) is better than keeping them at low intensity:

"...we surmise that the effect of color temperature is greater than that of illuminance in an ordinary residential bedroom or similar environment where a lowering of physiological activity is desirable, and we therefore find the use of low color temperature illumination more important than the reduction of illuminance. Subjective drowsiness results also indicate that reduction of illuminance without reduction of color temperature should be avoided." —Noguchi and Sakaguchi, 1999 (note that these are commercial researchers at Matsushita, which makes low-color-temperature fluorescents)

Comment author: khafra 06 April 2010 05:15:09PM 0 points [-]

Does that imply that HIDs are safer for long drives at night than halogen headlights?

Comment author: Mass_Driver 05 April 2010 01:44:16PM *  1 point [-]

That all sounds awfully biological -- are you sure fixing monitor light levels is a solution for akrasia?

Comment author: Amanojack 05 April 2010 08:04:09PM 0 points [-]

No, the items I've given will only make you more sleepy at night than you would have been. If that's not enough, I agree it's akrasia of a sort, also known as having a super-high time preference.

Comment author: Nick_Tarleton 04 April 2010 11:53:25PM *  0 points [-]

If you use Mac OS, Nocturne lets you darken the display, lower its color temperature, etc. manually/more flexibly than F.lux.

Comment author: gwern 07 April 2010 09:28:33PM 1 point [-]

For Linux, there's Redshift. I like it because it's kinder on my eyes, though it doesn't do anything for akrasia.

Comment author: andreas 05 April 2010 12:19:28AM 0 points [-]

There is also Shades, which lets you set a tint color and which provides a slider so you can move gradually between standard and tinted mode.