Robin, what is your favorite piece of academic philosophy that argues about values?
Nicholas, our own universe may have an infinite volume, and it's only the speed of light that limits the size of the observable universe. Given that infinite universes are not considered implausible, and starlines are not considered implausible (at least as a fictional device), I find it surprising that you consider starlines that randomly connect a region of size 2^(10^20) to be implausible.
Starlines have to have an average distance of something, right? Why not 2^(10^20)?
Nicholas, suppose Eliezer's fictional universe contains a total of 2^(10^20) star systems, and each starline connects two randomly selected star systems. With a 20-hour doubling speed, the Superhappies, starting with one ship, can explore 2^(t·365·24/20) random star systems after t years. Let's say the humans are expanding at the same pace. How long will it take before humans and Superhappies meet again?
According to the birthday paradox, they will likely meet after each having explored about sqrt(2^(10^20)) = 2^(5·10^19) star systems, which will take 51...
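A quick back-of-the-envelope check of the figures above (my own sketch; the 2^(10^20) system count and 20-hour doubling time come from the comment, everything else is assumption):

```python
# Birthday-paradox estimate of when the two expanding civilizations meet.
# N = 2**(10**20) star systems; each side explores 2**(t*365*24/20)
# systems after t years, i.e. one doubling every 20 hours.
exponent_of_N = 10**20                 # N = 2**exponent_of_N
doublings_per_year = 365 * 24 / 20     # = 438 doublings per year

# A collision becomes likely once each side has explored ~sqrt(N)
# systems, i.e. 2**(exponent_of_N / 2) of them.
meeting_exponent = exponent_of_N / 2   # = 5e19
years_to_meet = meeting_exponent / doublings_per_year
print(f"{years_to_meet:.2e} years")    # on the order of 1e17 years
```

Working in exponents keeps the arithmetic tractable: 2^(10^20) itself is far too large to represent directly.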
But the tech in the story massively favors the defense, to the point that a defender who is already prepared to fracture his starline network if attacked is almost impossible to conquer (you’d need to advance faster than the defender can send warnings of your attack while maintaining perfect control over every system you’ve captured). So an armed society would have a good chance of being able to cut itself off from even massively superior aliens, while pacifists are vulnerable to surprise attacks from even fairly inferior ones.
I agree, and that's why in m...
So, what about the fact that all of humanity now knows about the supernova weapon? How is it going to survive the next few months?
In case it wasn't clear, the premise of my ending is that the Ship's Confessor really was a violent thief and drug dealer from the 21st century, but his "rescue" was only partially successful. He became more rational, but only pretended to accept what became the dominant human morality of this future, patiently biding his time his whole life for an opportunity like this.
The Ship's Confessor uses the distraction to anesthetize everyone except the pilot, whom he needs to take command of the starship and fly it. The ship stays to observe which star the Superhappy ship came from, then takes off for the nearest Babyeater world. They let the Babyeaters know what happened, and tell them to supernova the star the Superhappies came from at all costs.
When everyone wakes up, the Ship's Confessor convinces the entire crew to erase their memory of the true Alderson's Coupling Constant, ostensibly for the good of humanity. ...
Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But, the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)
I'm afraid that the Axiom of In...
To expand on my categorization of values a bit more, it seems clear to me that at least some human values do not deserve to be forever etched into the utility function of a singleton. Those caused by idiosyncratic environmental characteristics, like the taste for salt and sugar, for example. To me, these are simply accidents of history, and I wouldn't hesitate (too much) to modify them away in myself, perhaps to be replaced by more interesting and exotic tastes.
What about reproduction? It's a value that my genes programmed into me for their own purposes, so why...
Tim and Tyrrell, do you know the axiomatic derivation of expected utility theory? If you haven't read http://cepa.newschool.edu/het/essays/uncert/vnmaxioms.htm or something equivalent, please read it first.
Yes, if you change the spaces of states and choices, maybe you can encode every possible agent as a utility function, not just those satisfying certain axioms of "rationality" (which I put in quotes because I don't necessarily agree with them), but that would be to miss the entire point of expected utility theory, which is that it is supposed ...
A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
Tim, I've seen you state this before, but it's simply wrong. A utility function is not like a Turing-complete language. It imposes rather strong constraints on possible behavior.
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced b...
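A brute-force check of this claim (my own sketch, not from the original comment): since only the ordering of utilities matters for pairwise choices, it suffices to try every strict ranking of A, B, C and see whether any of them reproduces the three observed choices.

```python
from itertools import permutations

# Observed behavior: from each offered pair, the first option is chosen.
# We assume each observed choice reveals a strict preference.
observed = [("A", "B"), ("B", "C"), ("C", "A")]

consistent_rankings = []
for ranks in permutations(range(3)):
    u = dict(zip("ABC", ranks))  # one candidate utility assignment
    if all(u[chosen] > u[rejected] for chosen, rejected in observed):
        consistent_rankings.append(u)

print(consistent_rankings)  # → []  (no utility function produces this cycle)
```

The loop comes up empty because the cycle A≻B, B≻C, C≻A would require u(A) > u(B) > u(C) > u(A), which no real-valued function can satisfy.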
If the aliens' wetware (er, crystalware) is so efficient that their children are already sentient when they are still tiny relative to adults, why don't the adults have bigger brains and be much more intelligent than humans? Given that they also place a high value on science and rationality, had invented agriculture long before humans did, and haven't fought any destructive wars recently, it makes no sense that they have a lower level of technology than humans at this point.
Other than that, I think the story is not implausible. The basic lesson here is the ...
Maybe we don't mean the same thing by boredom?
I'm using Eliezer's definition: a desire not to do the same thing over and over again. For a creature with roughly human-level brain power, doing the same thing over and over again likely means it's stuck in a local optimum of some sort.
Genome equivalents which don't generate terminally valued individual identity in the minds they describe should outperform those that do.
I don't understand this. Please elaborate.
Why not just direct expected utility? Pain and pleasure are easy to find but don't work nearly as we...
We can sort the values evolution gave us into the following categories (not necessarily exhaustive). Note that only the first category of values is likely to be preserved without special effort, if Eliezer is right and our future is dominated by singleton FOOM scenarios. But many other values are likely to survive naturally in alternative futures.
Why make science a secret, instead of inventing new worlds with new science for people to explore? Have you heard of "Theorycraft"? It's science applied to the World of Warcraft, and for some, Theorycraft is as much fun as the game it's based on.
Is there something special about the science of base-level reality that makes it especially fun to explore and discover? I think the answer is yes, but only if it hasn't already been explored and then covered up again and made into a game. It's the difference between a natural and an artificial challenge.
One day we'll discover the means to quickly communicate insights from one individual to another, say by directly copying and integrating the relevant neural circuitry. Then, in order for an insight to be Fun, it will have to be novel to transhumanity, not just the person learning or discovering it. Learning something the fast, efficient way will not be Fun because there's no true effort. Pretending that the new way doesn't exist, and learning the old-fashioned way, will not be Fun because there's no true victory.
I'm not sure there are enough natural probl...
Anna Salamon wrote: Is it any safer to think ourselves about how to extend our adaptation-executer preferences than to program an AI to figure out what conclusions we would come to, if we did think a long time?
First, I don't know that "think about how to extend our adaptation-executer preferences" is the right thing to do. It's not clear why we should extend our adaptation-executer preferences, especially given the difficulties involved. I'd backtrack to "think about what we should want".
Putting that aside, the reason that I prefer we d...
Robin wrote: Having to have an answer now when it seems an likely problem is very expensive.
(I think you meant to write "unlikely" here instead of "likely".)
Robin, what is your probability that eventually humanity will evolve into a singleton (i.e., not necessarily through Eliezer's FOOM scenario)? It seems to me that competition is likely to be unstable, whereas a singleton is by definition stable. Competition can evolve into a singleton, but not vice versa. Given that negentropy increases as mass squared, most competitors have to remain in t...
An expected utility maximizer would know exactly what to do with unlimited power. Why do we have to think so hard about it? The obvious answer is that we are adaptation-executers, not utility maximizers, and we don't have an adaptation for dealing with unlimited power. We could try to extrapolate a utility function from our adaptations, but given that those adaptations deal only with a limited set of circumstances, we'll end up with an infinite set of possible utility functions for each person. What to do?
James D. Miller: But about 100 people die every...
Maybe we don't need to preserve all of the incompressible idiosyncrasies in human morality. Considering that individuals in the post-Singularity world will have many orders of magnitude more power than they do today, what really matter are the values that best scale with power. Anything that scales logarithmically for example will be lost in the noise compared to values that scale linearly. Even if we can't understand all of human morality, maybe we will be able to understand the most important parts.
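A toy illustration of the scaling point (my own sketch; the 10^30 power multiplier is an arbitrary assumption, chosen only to make the comparison vivid):

```python
import math

power = 10**30                    # hypothetical post-Singularity power level
linear_value = power              # a value that scales linearly with power
log_value = math.log2(power)      # a value that scales logarithmically (~100)

# The ratio shows how thoroughly the log-scaling value is dominated.
print(log_value / linear_value)   # ~1e-28: lost in the noise
```

At any power level remotely like this, a logarithmically scaling value contributes a vanishing fraction of the total, which is the sense in which it is "lost in the noise".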
Just throwing away parts of one's utility function seems...
This is fascinating. JW plays C in the last round, even though AA just played D in the next-to-last round. What explains that? Maybe JW's belief in his own heroic story is strong enough to make him sacrifice his self-interest?
Theoretically, of course, utility functions are invariant up to positive affine transformation, so a utility's absolute sign is not meaningful. But this is not always a good metaphor for real life.
So you're suggesting that real life has some additional structure which is not representable in ordinary game theory formalism? Can you think of an extension to game theory which can represent it? (Mathematically, not just metaphorically.)