In our club, we've decided to assume atheism (or, at minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don't feel it's worth arguing across that kind of inferential distance. We'd rather it be the 'discuss cool things' club than the 'argue with people who don't believe in evolution' club.
This perspective looks deeply insane to me.
I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and I suspect most other people wouldn't either. This perspective more or less requires anyone in a position of power to oppose the availability of birth control and to mandate breeding.
I would be about as happy with a human population of one billion as a hundred billion, not counting the number of people who'd have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.
There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.
I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we'll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like 'bringing back dead people' v. 'running more parallel copies of current people.' I'd also caution against treating future society as a monolithic Entity with Values that makes Decisions - it's very probably...
Right, but (virtually) nobody is actually proposing doing that. It's obviously stupid to try from chemical first principles. Cells might be another story. That's why we're studying neurons and glial cells to improve our computational models of them. We're pretty close to having adequate neuron models, though glia are probably still five to ten years off.
I believe there's at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can't do it, I'll update then.
Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then, no, we can't do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expression and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.
In general, when something can be either tremendously clever or a bit foolish, the prior favors the latter. Even when it comes from someone who's generally a pretty smart cookie. You could run the experiment, but I'm willing to bet on the outcome now.
It's important to remember that it isn't particularly useful for this book to be The Sequences. The Sequences are The Sequences, and the book can direct people to them. What would be more useful would be a condensed, rapid introduction to the field that tries to maximize insight-per-byte. Not something that's a de...
1: If your cousin can demonstrate that ability using somebody else's deck, under experimental conditions that I specify and he is not aware of ahead of time, I will give him a thousand dollars.
2: In the counterfactual case where he accomplishes this, that does not mean that his ability is outside the realm of science (well, probably it means the experiment was flawed, but we'll assume otherwise). A wide range of once-inexplicable phenomena are now understood by science. If your cousin's psychic powers are real, then science can study ...
If it were me, I'd split your list after reductionism into a separate ebook. Everything that's controversial or hackles-raising is in the later sequences. A (shorter) book consisting solely of the sequences on cognitive biases, rationalism, and reductionism would be much more the sort of thing somebody without prior rationalist inclinations could pick up and take something valuable away from. The later sequences have their merits, but they are absolutely counterproductive to raising the sanity waterline in this case. They'll label your book as kooky an...
Oh, and somebody get Yudkowsky an editor. I love the Sequences, but they aren't exactly short and to the point. Frankly, they ramble. Which is fine if you're just trying to get your thoughts out there, but people don't finish the majority of the books they pick up. You need something that's going to be snappy, interesting, and suited to a more typical attention span. Something maybe half the length we're looking at now. The more of it they get through, the more good you're doing.
EDIT: Oh! And the whole thing needs a full jargon palette-swap. There...
There will always be multiple centers of power.
What's at stake is, at most, the future centuries of a solar-system civilization.
No assumption that individual humans can survive even for hundreds of years, or that they would want to.
You give no reason why we should consider these as more likely than the original assumptions.
How about a cyborg whose arm unscrews? Is he not augmented? Most of a cochlear implant can be removed. Nothing about transhumanism says your augmentations have to be permanently attached to your body. You need only want to improve yourself and your abilities, which a robot suit of that caliber definitely accomplishes.
And, yes, obviously transhumanism is defined relative to historical context. If everyone's doing it, you don't need to have a word for it. That we have a word implies that transhumanists are looking ahead, and looking for things that not everyone has yet. So, no, your car doesn't make you a transhumanist, but a robotic exoskeleton might be evidence of that philosophy.
Your four criteria leave an infinite set of explanations for any phenomenon. Including, yes, George the Giant. That's why we have the idea of Occam's razor - or, more formally, Solomonoff Induction. Though I suppose, depending on the data available to the tribe, the idea of giant humans might not be dramatically more complicated than plate tectonics. It isn't like they postulated a god of earthquakes or some nonsense like that. At minimum, however, they are privileging the George the Giant hypothesis over the other equally-complicated plausible expla...
When we try to build a model of the underlying universe, what we're really talking about is trying to derive properties of a program which we are observing (and are a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).
So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less tha...
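For concreteness, the standard machinery behind the Occam/Solomonoff appeals above (these are the textbook definitions, nothing specific to my argument):

```latex
% Kolmogorov complexity of a hypothesis h: the length of the shortest
% program p (on a fixed universal Turing machine U) that outputs h.
K(h) = \min \{\, |p| \;:\; U(p) = h \,\}

% Solomonoff prior: exponentially favor shorter programs. This is the
% formal version of Occam's razor invoked in the George the Giant case.
P(h) \propto 2^{-K(h)}
```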
Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5nm x 5nm x 5nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they're just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next...
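To put rough numbers on "orders of magnitude too slow" (back-of-envelope only; the brain volume and beam rate below are assumptions I've plugged in for illustration, not instrument specs):

```python
# Back-of-envelope: scan time for a whole brain at 5 nm voxels.
# All constants are rough illustrative assumptions.

BRAIN_VOLUME_M3 = 1.2e-3   # ~1.2 liters, approximate human brain volume
VOXEL_EDGE_M = 5e-9        # 5 nm target resolution
VOXEL_RATE_HZ = 1e7        # assumed single-beam SEM throughput, voxels/sec

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3
years = voxels / VOXEL_RATE_HZ / (3600 * 24 * 365)
print(f"{voxels:.1e} voxels, ~{years:.1e} years with a single beam")
# ~9.6e21 voxels => ~3e7 years, which is why massively parallel
# multi-beam instruments matter so much here.
```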
Additionally, reality and virtual reality can get a lot fuzzier than that. If AR glasses become popular, and a protocol exists to swap information between them to allow more seamless AR content integration, you could grab all the feeds coming in from a given location, reconstruct them into a virtual environment, and insert yourself into that environment, which would update with the real world in real time. People wearing glasses could see you as though you were there, and vice versa. If you rented a telepresence robot, it would prevent people from walki...
If you're talking about people frozen after four-plus hours of room-temperature ischemia, I'd agree with you that the odds are not good. However, somebody with a standby team, perfused before ischemic clotting can set in and vitrified quickly, has a very good chance in my book. We've done SEM imaging of optimally vitrified dead tissue, and the structural preservation is extremely good. You can go in and count the pores on a dendrite. There simply isn't much information lost immediately after death, especially if you get the head in ice water quickly. ...
The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected.
Well, then there's your answer to the question 'what is friendship good for' - whatever other value you place on friendship that makes you neurotypical. I was just trying to point out that that line of reasoning was silly.
...Okay, but the reason why rationality has a special ability to help yo
Well, there's no reason to think you'd be completely isolated from top-level reality. Internet access is very probable. Likely the ability to rent physical bodies. Make phone calls. That sort of thing. You could still get involved in most of the ways you do now: talk to people about it, get a job and donate money to various causes, sign contracts, make legal arrangements to keep yourself safe.
...With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend.
I want meaning, and this requires having access to reality. I'll think about it.
Does it? You can have other people in the simulation with you. People find a lot of meaning in companionship, even digitally mediated. People don't think a conversation with your mother is meaningless because it happens over VOIP. You could have lots of places to explore. Works of art. Things to learn. All meaningful things. You could play with the laws of physics. Find out what it feels like to turn gravity off one day and drift out of your apartment window.
If you w...
Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo...
-shrug- So don't leave your brain to science. I figure if somebody is prepared to let their brain decompose on a table while first-year medical students poke at it, you might as well try to save their life. Provided, of course, the laws wherever you are permit you to put the results down if they're horrible. Worst case, they're back where they started.
...Depends on your definition of 'you.' Mine is pretty broad. The way I see it, my only causal link to myself of yesterday is that I remember being him. I can't prove we're made of the same matter. Under quantum mechanics, that isn't even a coherent concept. So, if I believe that I didn't die in the night, then I must accept that that's a form of survival.
Uploaded copies of you are still 'you' in the sense that the you of tomorrow is you. I can talk about myself tomorrow, and believe that he's me (and his existence guarantees my survival), even though ...
You are overwhelmingly likely not to wake up in a body, depending on the details of your instructions to Alcor. Scanning a frozen brain is vastly cheaper and technologically easier than trying to repair every cell in your body. You will almost certainly wake up as a computer program running on a server somewhere.
This is not a bad thing. Your computer program can be plugged into software body models in convincing virtual environments, permitting normal human activities (companionship, art, fun, sex, etc.), plus some activities not normally possible for humans. It'll likely be possible to rent mechanical bodies for interacting with the physical world.
There's no reason to experiment on cryo patients. Lots of people donate their brains to science. Grab somebody who isn't expecting to be resurrected, and test your technology on them. Worst case, you wake up somebody who doesn't want to be alive, and they kill themselves.
Number two is very unlikely. We're basically talking brain damage, and I've never heard of a case of brain damage, no matter how severe, doing that.
As for number three, that shambling horror would not be you in a meaningful sense. You'd just be dead, which is the default case. Al...
Living forever isn't quite impossible. If we ever develop acausal computing, or a way to beat the first law of thermodynamics (AND the universe turns out to be spatially infinite), then it's possible that a sufficiently powerful mind could construct a mathematical system containing representations of all our minds that it could formally prove would keep us existent and value-fulfilled forever, and then just... run it.
Not very likely, though. In the meantime, more life is definitely better than less.
If you're revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.
That said, whole brain emulation ought to be experimentally feasible in, what, fifteen years? At a consumer price point in 40? (Assuming the general trend of Moore's law holds.) That's little enough time that I think the probability of such a dystopian future is not incredibly large. Especially since Alcor...
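The arithmetic behind that guess, assuming compute cost halves every ~2 years (the doubling period is an assumption, but it's what I mean by "the general trend"):

```latex
% Cost of a fixed amount of compute under a constant Moore's-law trend
% with doubling period T:
C(t) = C_0 \cdot 2^{-t/T}

% With T \approx 2 years, the 25 years between "experimentally feasible"
% and "consumer price point" corresponds to a cost reduction of
2^{25/2} \approx 5800\times
```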
If cryonics is not performed extremely quickly, ischemic clotting can seriously inhibit cortical circulation, preventing good perfusion with cryoprotectants, and causing partial information-theoretic death. Being cryopreserved within a matter of minutes is probably necessary, barring a way to quickly improve circulation.
Not quite. It actually replaces it with the problem of maximizing people's expected reported life satisfaction. If you wanted to choose to try heroin, this system would be able to look ahead, see that that choice would probably reduce your long-term life satisfaction drastically (by more than the annoyance at the intervention), and choose to intervene and stop you.
I'm not convinced 'what's best for people' with no asterisk is a coherent problem description in the first place.
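To make the intervention rule above concrete, here's a toy sketch; predict_satisfaction is a hypothetical oracle, and building it is of course the entire hard part:

```python
# Toy sketch of the look-ahead intervention rule described above.
# predict_satisfaction(person, choice) is a hypothetical predictive model
# of expected long-term reported life satisfaction; nothing here is a
# real implementation, just the shape of the decision.

def should_intervene(person, choice, predict_satisfaction):
    if_allowed = predict_satisfaction(person, choice)
    # The blocked branch already includes the annoyance of being overridden.
    if_blocked = predict_satisfaction(person, None)
    return if_blocked > if_allowed

def toy_predictor(person, choice):
    # Stand-in numbers for the heroin example.
    return 2.0 if choice == "heroin" else 7.5

print(should_intervene("you", "heroin", toy_predictor))  # True: it intervenes
```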
By bounded, I simply meant that all reported utilities are normalized to a universal range before being summed. Put another way, every person has a finite, equal fraction of the machine's utility to distribute among possible future universes. This is entirely to avoid utility monsters. It's basically a vote, and they can split it up however they like.
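Roughly what I mean by 'normalized to a universal range,' as a toy sketch (the data layout here is invented for illustration):

```python
# Each person's reported utilities over candidate futures are rescaled so
# that every person distributes exactly one "vote" among the options.
# Reporting huge raw numbers buys no extra influence: no utility monsters.

def normalize(report: dict) -> dict:
    """Rescale one person's reported utilities to sum to 1."""
    lo, hi = min(report.values()), max(report.values())
    shifted = {k: (v - lo) / (hi - lo) if hi > lo else 1.0
               for k, v in report.items()}
    total = sum(shifted.values())
    return {k: v / total for k, v in shifted.items()}

def aggregate(reports: list) -> dict:
    """Sum everyone's normalized votes over the candidate futures."""
    futures = reports[0].keys()
    return {f: sum(normalize(r)[f] for r in reports) for f in futures}

# A would-be utility monster reporting utilities in the billions gets
# exactly the same total influence as anyone else:
print(aggregate([
    {"future_a": 10, "future_b": 0},             # ordinary person
    {"future_a": 0, "future_b": 1_000_000_000},  # utility monster
]))  # -> {'future_a': 1.0, 'future_b': 1.0}
```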
Also, the reflexive consistency criteria should probably be applied even to people who don't exist yet. We don't want plans to rely on creating new people, then turning them into happy monsters, even i...
I can think of an infinite utility scenario. Say the AI figures out a way to run arbitrarily powerful computations in constant time. Say its utility function is over the survival and happiness of humans. Say it runs an infinite loop (in constant time), consisting of a formal system containing implementations of human minds, which it can prove will have some minimum happiness, forever. Thus, it can make predictions about its utility a thousand years from now just as accurately as ones about a billion years from now, or n years, for any finite n. Summing the future utility of the choice to turn on the computer, from zero to infinity, would give an infinite result. Contrived, I know, but the point stands.
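In symbols, just formalizing the paragraph above:

```latex
% If every year t contributes at least a provable minimum utility
% u_t \ge u_{min} > 0, forever, the total is a divergent series:
\sum_{t=0}^{\infty} u_t \;\ge\; \sum_{t=0}^{\infty} u_{\min} = \infty
```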
If we can extract utility in a purer fashion, I think we should. At the bare minimum, it would be much more run-time efficient. That said, trying to do so opens up a whole can of worms of really hard problems. This proposal, provided you're careful about how you set it up, pretty much dodges all of that, as far as I can tell. Which means we could implement it faster, should that be necessary. I mean, yes, AGI is still a very hard problem, but I think this reduces the F part of FAI to a manageable level, even given the impoverished understanding we have ...
Reflexively Consistent Bounded Utility Maximizer?
Hrm. Doesn't exactly roll off the tongue, does it? Let's just call it a Reflexive Utility Maximizer (RUM), and call it a day. People have raised a few troubling points that I'd like to think more about before anyone takes anything too seriously, though. There may be a better way to do this, although I think something like this could be workable as a fallback plan.
Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).