Then he could give a guest lecture, and that'd be pretty cool.
In our club, we've decided to assume atheism (or, at minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don't feel it's worth arguing across that kind of inferential distance. We'd rather it be the 'discuss cool things' club than the 'argue with people who don't believe in evolution' club.
This perspective looks deeply insane to me.
I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and I suspect neither would most other people. This perspective more or less requires anyone in a position of power to oppose birth control availability and to mandate breeding.
I would be about as happy with a human population of one billion as with a hundred billion, not counting the number of people who'd have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.
There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.
I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we'll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like 'bringing back dead people' v. 'running more parallel copies of current people.' I'd also caution against treating future society as a monolithic Entity with Values that makes Decisions - it's very probably still going to be capitalist. I expect the deciding factor regarding whether or not cryopatients are revived to be whether or not Alcor can pay for the revival while remaining solvent.
Also, I'm not at all certain about your value calculation there. Creating new people is much less valuable than preserving existing ones. It would be wrong to round up and exterminate a billion people in order to ensure that one billion and one babies are born.
Right, but (virtually) nobody is actually proposing doing that. It's obviously stupid to try from chemical first principles. Cells might be another story. That's why we're studying neurons and glial cells to improve our computational models of them. We're pretty close to having adequate neuron models, though glia are probably still five to ten years off.
I believe there's at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can't do it, I'll update then.
Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then, no, we can't do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expression and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.
I propose that we continue to call them koans, on the grounds that changing the name involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.
So far, I'm twenty pages in, and getting close to being done with the basic epistemology stuff.
Lottery winners have different problems. Mostly that sharp changes in wealth are socially disruptive, and that lottery players are not the most fiscally responsible people on Earth. It's a recipe for failure.
Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).