All of Dolores1984's Comments + Replies

Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).

Then he could give a guest lecture, and that'd be pretty cool.

In our club, we've decided to assume atheism (or, minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don't feel it's worth arguing over that kind of inferential distance. We'd rather it be the 'discuss cool things' club than the 'argue with people who don't believe in evolution' club.

0palladias
D'you mean you've found the topic of religion to be mindkilling, so all discussions in your group need to work within the majority framework of atheism/deism to be productive, or that you restrict your membership?

This perspective looks deeply insane to me.

I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and, I suspect, neither would most other people. This perspective more or less requires anyone in a position of power to oppose birth control availability and to mandate breeding.

I would be about as happy with a human population of one billion as a hundred billion, not counting the number of people who'd have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.

0jefftk
It's extremely hard to get away from practical considerations here, and it tends to be hard for people to generalize ethics to things as far removed from practicality as killing a million adults to replace them with a billion babies.

It would? While mandatory breeding (via birth control denial or other measures) would make for a lot of people, their lives would be much worse. The reason a person in power opposing birth control and mandating breeding sounds horrible is the same as why I would oppose it: it would suck. No one wants to be forced to have kids.

I also care much more about the total number of people to ever exist (again, weighted by how good their lives are) than the total number to exist at once. Dramatically increasing the number of people alive now, even if you did it in a way that didn't affect average happiness, would probably just make us burn through our current stock of resources faster and not lead to more long-term total people.

Now this sounds deeply insane. (Not as an insult! It just seems horribly scale insensitive.)

There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.

I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we'll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like 'bringing back dead people' v. 'running more parallel copies of current people.' I'd also caution against treating future society as a monolithic Entity with Values that makes Decisions - it's very probably... (read more)

0jefftk
I agree, but I don't think that cuts to the point. The process of rounding up and killing a billion people, the sadness of people left behind, the skill loss, and the change of the age distribution, would all have large negative effects, and a billion and one babies would be a heck of a baby boom. While practical issues mean that killing people is just about never the right thing to do, I don't agree that "creating new people is much less valuable than preserving old ones". See my response to Nisan.

Right, but (virtually) nobody is actually proposing doing that. It's obviously stupid to try from chemical first principles. Cells might be another story. That's why we're studying neurons and glial cells to improve our computational models of them. We're pretty close to having adequate neuron models, though glia are probably still five to ten years off.

I believe there's at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can't do it, I'll update then.

0jefftk
You might find this earlier discussion useful on how far we've gotten with emulating C elegans: http://lesswrong.com/lw/88g/whole_brain_emulation_looking_at_progress_on_c/

Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then, no, we can't do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expressions and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.

3David_Gerard
As you'll see if you read his text, he's responding to proposals to emulate a brain without understanding how it all works, and is noting just how fine you'd need to actually go to do that. I've heard the case made at length, but not of, e.g., a C. elegans that's learnt something, been frozen and shows it still remembers it after it's unfrozen (to name one obvious experiment that, last time this precise Myers article was discussed, apparently no-one had ever done) or something of similar evidentiary value. Experiment beats arguing why you don't need an experiment. Edit: Not the last time this Myers article was discussed, but the discussion of kalla724's "what on earth" neuroscientist's opinion on cryonics practice.

I propose that we continue to call them koans, on the grounds that changing involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.

4lsparrish
There is a cost to doing nothing as well. Calling them koans potentially has the following effects:
1. Makes people think that rationality is Zen.
2. Makes people think Zen is rational.
3. Irritates people who know/care more about Zen than average.
4. Signals disrespect of specialized knowledge.
5. Encourages a norm of misusing/inflating terms beyond their technical use.
The question is whether it is more costly to make the change or not. How costly is the change? Are the costs long-term or short-term? (The costs of not making the change are mostly long-term.) Also relevant: Apart from avoiding the above costs, are there benefits to changing it to something else? (For example, a better term could make the articles more interesting and intuitive to beginners than "koan" does.)

So far, I'm twenty pages in, and getting close to being done with the basic epistemology stuff.

Lottery winners have different problems. Mostly that sharp changes in money are socially disruptive, and that lottery players are not the most fiscally responsible people on Earth. It's a recipe for failure.

In general, when something can be either tremendously clever, or a bit foolish, the prior tends to the latter. Even with someone who's generally a pretty smart cookie. You could run the experiment, but I'm willing to bet on the outcome now.

It's important to remember that it isn't particularly useful for this book to be The Sequences. The Sequences are The Sequences, and the book can direct people to them. What would be more useful would be a condensed, rapid introduction to the field that tries to maximize insight-per-byte. Not something that's a de... (read more)

1Epiphany
I'm currently writing a summary of each sequence as I read them. I am doing this because it helps me to remember what I read. What is going to result from my doing this is a Cliff's notes version of the sequences. If you were going to do something similar anyway, I might as well just post these notes when I am done to save you the work. Would that serve the purpose you were thinking of? Or is your idea significantly different?
-1handoflixue
Suggested title: The Tao of Bayes. Ideally it should not be significantly longer than "The Tao of Pooh". I'd be half-tempted to try my hand at it myself...

You could plug a baby's nervous system into the output of a radium decay random number generator. It'd probably disagree (disregarding how crazy it would be) that its observations were best described by causal graphs.

It does not. Epiphenomenal consciousness could be real in the same sense that the spaceship that vanished over the event horizon still exists. It's Occam's Razor that knocks down that one.

1: If your cousin can demonstrate that ability using somebody else's deck, under experimental conditions that I specify and he is not aware of ahead of time, I will give him a thousand dollars.

2: In the counter-factual case where he accomplishes this, that does not mean that his ability is outside the realm of science (well, probably it means the experiment was flawed, but we'll assume otherwise). There have been a wide range of formerly inexplicable phenomena which are now understood by science. If your cousin's psychic powers are real, then science can study ... (read more)

Oh, and somebody get Yudkowsky an editor. I love the sequences, but they aren't exactly short and to the point. Frankly, they ramble. Which is fine if you're just trying to get your thoughts out there, but people don't finish the majority of the books they pick up. You need something that's going to be snappy, interesting, and cater to a more typical attention span. Something maybe half the length we're looking at now. The more of it they get through, the more good you're doing.

EDIT: Oh! And the whole thing needs a full jargon palette-swap. There... (read more)

2DaFranker
Regarding the jargon, I agree with wedrifid that LW-specific jargon is actually being defined in the sequences, and from what I've heard and experienced this is extremely helpful in setting down a common language for us to discuss these matters.

However, there is some jargon that could and probably should be done away with: the computer science stuff. Not all sequences/articles have it, but when it's there it's usually several levels of inference away from laypeople. The CS/programming examples, comparisons and metaphors are fun for someone like me, but it's an accepted matter among IT people that things like the XKCD comic on a random function that always returns 4 will not help get the point across to non-IT people.

I'm sure that has been mentioned before, but it's worth making sure that it's looked over and that while doing it you remember that when writing educative material, most people severely overshoot the level that they're aiming for, and end up writing a text that's perfect for undergrads when they were targeting a middle school audience or somesuch.

Personally, I'd leave in most of the random intercultural references (like the anime references, for instance) since I suspect they'd still reach a good portion of the audience and wouldn't have negative impact, but that'd be up for discussion.

This also gives me an idea, but I'll make a separate comment for it.
4NancyLebovitz
I'm not sure whether what looks like rambling is actually an effective method of easing people into the ideas so that the ideas are easier to accept, rather than just being inefficient. Is there any way to find out?
9wedrifid
For the most part the sequences define said jargon, rather than using it.

If it were me, I'd split your list after reductionism into a separate ebook. Everything that's controversial or hackles-raising is in the later sequences. A (shorter) book consisting solely of the sequences on cognitive biases, rationalism, and reductionism could be much more the kind of thing somebody without previous rationalist inclinations can pick up and take something valuable away from. The later sequences have their merits, but they are absolutely counterproductive to raising the sanity waterline in this case. They'll label your book as kooky an... (read more)

5wedrifid
That one is taken.
1amcknight
Quantum mechanics and Metaethics are what initially drew me to LessWrong. Without them, the Sequences aren't as amazingly impressive, interesting, and downright bold. As solid as the other content is, I don't think the Sequences would be as good without these somewhat more speculative parts. This content might even be what really gets people talking about the book.
5cata
I agree completely with this comment (assuming that the ebook is aimed at people who aren't already familiar with LW.) Regarding the title, you should aim for describing what the writing is about, not where the writing came from. Unfortunately I can personally only generate boring titles like "Essays On Thinking Straight".

Yup, this is planned. It may be that SI publishes the full Sequences thing, and CFAR publishes the cut-down version (with a new introduction by Eliezer, or something).

* There will always be multiple centers of power
* What's at stake is, at most, the future centuries of a solar-system civilization
* No assumption that individual humans can survive even for hundreds of years, or that they would want to

You give no reason why we should consider these as more likely than the original assumptions.

Sure. I think we just have different definitions of the term. Not much to be gained here.

How about a cyborg whose arm unscrews? Is he not augmented? Most of a cochlear implant can be removed. Nothing about transhumanism says your augmentations have to be permanently attached to your body. You need only want to improve yourself and your abilities, which a robot suit of that caliber definitely accomplishes.

And, yes, obviously transhumanism is defined relative to historical context. If everyone's doing it, you don't need to have a word for it. That we have a word implies that transhumanists are looking ahead, and looking for things that not everyone has yet. So, no, your car doesn't make you a transhumanist, but a robotic exoskeleton might be evidence of that philosophy.

3knb
It depends. If a person loses an arm and gets a mechanical prosthesis to restore normal functioning, that isn't transhumanist. If they get a prosthesis because they want to be super strong (or whatever), that is transhumanist. Transhumanism isn't about any technology, it specifically refers to augmentation of humans themselves.
0Nectanebo
This conversation sounds a little bit to me like the conversation in disputing definitions. Taboo transhumanism or something, perhaps? I think that these superheroes count as significant positive change at least, one of the things NancyLebovitz described in the title post.

I think the suit definitely counts as human augmentation. Plus, he designs his augmentations himself. Captain America just used the technology of some guy who then promptly proceeded to die, making the process unrepeatable for some reason. Stark is constantly refining his stuff.

5knb
Saying the suit makes Stark a transhuman is like saying my car makes me a transhuman. One of the characters even flies away with one of Stark's suits in the second movie, so it isn't really a part of him in any sense. Yes Iron Man's technology progresses, but so does Batman's. Ok, that might make Iron Man a better or more interesting character, but Tony Stark is not actually an augmented person in the movies. (Again, except for his fusion device thing, but that's the equivalent of a pacemaker, it restores normal mobility but doesn't augment his abilities).

The obvious counter-example is Iron Man, especially in the films.

4knb
More in the comics, I would say. In the films he only has one self-modification: the fusion device in his chest, and that is more of a medical device required to keep him alive than an actual transhumanist augmentation. In the comics, Stark has to continually modify his biology to keep up with the enhancements to his armor/fight more powerful villains. Actually Captain America is perhaps a better example. He becomes a super soldier not by accident, but by volunteering for an experimental human-enhancement procedure.

Simply to replicate one of the 10,000 neuron brain cells involved in the rat experiment took the processing capacity usually found in a single laptop.

Good lord, what sort of neuron model are they running? There has got to be a way to optimize that.
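For a sense of scale, here's a minimal sketch of a much simpler point-neuron model (leaky integrate-and-fire). This is not the model the project actually used, which was presumably a detailed multi-compartment biophysical simulation; the parameter values below are illustrative only. The point is that a simplified neuron costs only a few arithmetic operations per timestep, so 10,000 of them are trivial for one laptop.

```python
import numpy as np

# Minimal sketch: one leaky integrate-and-fire (LIF) update applied to a whole
# population of neurons at once. Parameters are illustrative, not taken from
# the rat-cortical-column simulation being discussed.

def lif_step(v, input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
             v_thresh=-0.050, v_reset=-0.065, r_m=1e8):
    """Advance membrane potentials v (volts) by one timestep dt (seconds)."""
    dv = (-(v - v_rest) + r_m * input_current) * (dt / tau)
    v = v + dv
    spiked = v >= v_thresh      # neurons that crossed threshold
    v[spiked] = v_reset         # reset them
    return v, spiked

# 10,000 neurons cost a handful of vectorized operations per step --
# nothing like "one laptop per neuron".
v = np.full(10_000, -0.065)
for _ in range(1000):
    v, spiked = lif_step(v, input_current=np.random.uniform(0, 3e-10, size=v.shape))
```

Whether such a simplified model is adequate is exactly the open scientific question, but it illustrates how much headroom there is between "biophysically exhaustive" and "computationally cheap".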

Your four criteria leave an infinite set of explanations for any phenomenon. Including, yes, George the Giant. That's why we have the idea of Occam's razor - or, more formally, Solomonoff Induction. Though I suppose, depending on the data available to the tribe, the idea of giant humans might not be dramatically more complicated than plate tectonics. It isn't like they postulated a god of earthquakes or some nonsense like that. At minimum, however, they are privileging the George the Giant hypotheses over the other equally-complicated plausible expla... (read more)
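For readers who want the "more formally" cashed out: Solomonoff induction formalizes Occam's razor by weighting each hypothesis by the length of the shortest program that encodes it. The standard form of the universal prior over observation strings $x$ is

$$ m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|}, $$

where $U$ is a universal prefix Turing machine and $|p|$ is the length of program $p$ in bits; equivalently, a single hypothesis $h$ gets prior weight roughly $2^{-K(h)}$, with $K(h)$ its Kolmogorov complexity.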

When we try to build a model of the underlying universe, what we're really talking about is trying to derive properties of a program which we are observing (and a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing Machines).

So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total K-complexity less tha... (read more)

Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5nm x 5nm x 5nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they're just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next... (read more)
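A rough back-of-the-envelope on why "orders of magnitude too slow" is plausible. The numbers here are my own assumptions, not from the comment: roughly 1.4 liters of brain tissue, and an optimistic single-beam SEM throughput of ~10^7 voxels per second.

```python
# Back-of-the-envelope check on why scan speed is the bottleneck.
# Assumed figures: ~1.4 L of tissue, ~10^7 voxels/second per beam.
brain_volume_nm3 = 1.4e-3 * (1e9) ** 3   # 1.4 liters expressed in cubic nanometers
voxel_nm3 = 5 * 5 * 5                    # one 5 nm x 5 nm x 5 nm voxel
voxels = brain_volume_nm3 / voxel_nm3    # ~1.1e22 voxels
seconds = voxels / 1e7                   # ~1.1e15 s of beam time
print(f"{voxels:.2e} voxels, {seconds / 3.15e7:.1e} beam-years at 10 Mvoxel/s")
```

Under those assumptions you get on the order of 10^22 voxels and tens of millions of beam-years, which is why massively parallel multi-beam instruments (or equivalent parallelism) are the thing to watch.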

Additionally, reality and virtual reality can get a lot fuzzier than that. If AR glasses become popular, and a protocol exists to swap information between them to allow more seamless AR content integration, you could grab all the feeds coming in from a given location, reconstruct them into a virtual environment, and insert yourself into that environment, which would update with the real world in real time. People wearing glasses could see you as though you were there, and vice versa. If you rented a telepresence robot, it would prevent people from walki... (read more)

If you're talking about people frozen after four plus hours of room temperature ischemia, I'd agree with you that the odds are not good. However, somebody with a standby team, perfused before ischemic clotting can set in and vitrified quickly, has a very good chance in my book. We've done SEM imaging of optimally vitrified dead tissue, and the structural preservation is extremely good. You can go in and count the pores on a dendrite. There simply isn't much information lost immediately after death, especially if you get the head in ice water quickly. ... (read more)

The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected.

Well, then there's your answer to the question 'what is friendship good for' - whatever other value you place on friendship that makes you neurotypical. I was just trying to point out that that line of reasoning was silly.

Okay, but the reason why rationality has a special ability to help yo

... (read more)

Well, there's no reason to think you'd be completely isolated from top level reality. Internet access is very probable. Likely the ability to rent physical bodies. Make phone calls. That sort of thing. You could still get involved in most of the ways you do now. You could talk to people about it, get a job and donate money to various causes. Sign contracts, make legal arrangements to keep yourself safe. That sort of thing.

With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend.

... (read more)
2Epiphany
Hmm. I hadn't thought very much about blends of reality and virtual reality like that. I've encountered that idea but hadn't really thought about it. You took one example way too far. That wasn't intended as an essay on my views of friendship. The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected. Okay, but the reason why rationality has a special ability to help you get more of what you want is because it puts you in touch with reality. Only when you're in touch with reality can you understand it enough to make reality do things you want. In a simulation, you don't need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3. In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?

I want meaning, this requires having access to reality. I'll think about it.

Does it? You can have other people in the simulation with you. People find a lot of meaning in companionship, even digitally mediated. People don't think a conversation with your mother is meaningless because it happens over VOIP. You could have lots of places to explore. Works of art. Things to learn. All meaningful things. You could play with the laws of physics. Find out what it feels like to turn gravity off one day and drift out of your apartment window.

If you w... (read more)

1Epiphany
Why is reality important to me? Hmm. Because without access to reality, you always have to wonder what's happening around you. Wouldn't there come a point where you went HOLY CRAP someone could be sneaking up behind me right now and I'd never know. Do you trust the outside world enough not to worry about that? I don't. I'd eventually spill coffee on my computer or something and it would dawn on me "What if they spill coffee on my brain?" I'd want to speak to the outside world. We'd probably be able to access them on the internet or some such. Things would be happening there. I would know about them. Political problems, disasters. Things I couldn't get involved in. And if not, then I'd be left to wonder. What's going on in the outside world? Are things okay? Imagine this: Imagine being cut off from the news. Not knowing what's going on in the world. Imagine realizing that you are asleep. Not knowing whether there's a burglar in your house, whether it's on fire. Not being able to wake up. Imagine your friends all have the same problem. You have no access to reality, so there's no way you can help them. If something affects them from the outside world, you can give them a hug. A virtual hug. But both of you know that there's nothing you can do. With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend. If all the problems are happening in a world that neither of you has access to, if you're stuck inside a great big game where nothing can hurt you for real, what basis is there for friendship? What would companionship be good for? You'll be like a couple of children - helpless and living in a fantasy. Why are you learning rationality if you don't see value in influencing reality?

Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo...

-shrug- so don't leave your brain to science. I figure if somebody is prepared to let their brain decompose on a table while first year medical students poke at it, you might as well try to save their life. Provided, of course, the laws wherever you are permit you to put the results down if they're horrible. Worst case, they're back where they started.

... (read more)

Depends on your definition of 'you.' Mine are pretty broad. The way I see it, my only causal link to myself of yesterday is that I remember being him. I can't prove we're made of the same matter. Under quantum mechanics, that isn't even a coherent concept. So, if I believe that I didn't die in the night, then I must accept that that's a form of survival.

Uploaded copies of you are still 'you' in the sense that the you of tomorrow is you. I can talk about myself tomorrow, and believe that he's me (and his existence guarantees my survival), even though ... (read more)

Remember: you can always take random recently dead guys who donated their bodies to science, vitrify their brains, and experiment on them. And this'll be after years of animal studies and such.

You are overwhelmingly likely not to wake up in a body, depending on the details of your instructions to Alcor. Scanning a frozen brain is exponentially cheaper and technologically easier than trying to repair every cell in your body. You will almost certainly wake up as a computer program running on a server somewhere.

This is not a bad thing. Your computer program can be plugged into software body models in convincing virtual environments, permitting normal human activities (companionship, art, fun, sex, etc.), plus some activities not normally possible for humans. It'll likely be possible to rent mechanical bodies for interacting with the physical world.

-3CAE_Jones
It is if you want to not die, rather than be copied. How likely would it be, assuming that politics and funding weren't an issue, that we could grow a new body, prevent the brain from developing, yet keep it alive to the point that an existing brain could be inserted? I'm not necessarily concerned with the details of getting a brain transplant to work smoothly in general, just the replacement body. It doesn't seem like it should be difficult in theory; I'd be more worried about the resources. I'm also curious as to what's stopping us from keeping brains alive even if the body can no longer function. I'm not well researched in this area, but if it is a matter of keeping chemical resources flowing in and waste flowing out, then our current technology should be capable of as much. At that point, all we'd need is to develop artificial i/o for brains (which seems slightly more difficult, but not so difficult that it couldn't happen within a few decades). But I've probably overlooked something obvious and well known and am completely confused. :( I don't like the idea of being "revived" as an upload, though. An upload would be nice to have (It'd certainly make it easier to examine stored data, if only a little), but I still see an upload as a copy rather than preserving the original. And, being the original, that isn't the most appealing outcome to me.

There's no reason to experiment on cryo patients. Lots of people donate their brains to science. Grab somebody who isn't expecting to be resurrected, and test your technology on them. Worst case, you wake up somebody who doesn't want to be alive, and they kill themselves.

Number two is very unlikely. We're basically talking brain damage, and I've never heard of a case of brain damage, no matter how severe, doing that.

As for number three, that shambling horror would not be you in a meaningful sense. You'd just be dead, which is the default case. Al... (read more)

1Epiphany
There's no way not to. It will be a new technology. Somebody has to get reanimated first. Even if we freeze 100 mice to test on, or monkeys, reviving humans will be different. Doing something for the first time is, by its very nature, an experiment. Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo... People experience this every day. It's called chemical depression. Even if you don't currently see a way for preservation or revival technology to cause this condition, it exists, it's possible that more than one mechanism may exist to trigger it, and that these technologies may have that as an accidental side-effect. Uh... no, because I'd be experiencing life, I would just be without what makes me me. That would be horror, not non-existence. So it is not death. Is it now? Most people don't believe in the right to die. In a world where we had figured out how to reanimate preserved corpses, do you think that they'll believe in the right to die? They'll probably automatically save and revive everyone.

Really? I wouldn't put odds of revival for best-case practices any lower than maybe 10%. How on earth do you have such a high confidence that WBE won't be perfected in the next couple of hundred years?

0mwengler
I put the odds that we will have nanobots in our bloodstream killing cancer cells and regulating our chemistry to avoid a lot of metabolic problems, repair injuries, and so on, at a pretty high number. I put the odds that we will figure out how to put a living human into some sort of suspended animation and bring them back into regular animation at some sort of reasonable odds. I put the odds that if we did our best effort to freeze a living person now without damage that we would be able to eventually revive them at maybe 10%. The odds that we will be able to revive a person frozen or otherwise preserved after they are legally dead, that's getting down towards time-machine to the past odds, since I think you are freezing after important parts of the information are lost. Conditioned on having the technical ability to revive the frozen, that might raise the odds of eventually being revived towards 10%. There are a lot of things that might keep revival from happening other than it not being possible technically.

Living forever isn't quite impossible. If we ever develop acausal computing, or a way to beat the first law of thermodynamics (AND the universe turns out to be spatially infinite), then it's possible that a sufficiently powerful mind could construct a mathematical system containing representations of all our minds that it could formally prove would keep us existent and value-fulfilled forever, and then just... run it.

Not very likely, though. In the meantime, more life is definitely better than less.

1Epiphany
Let me ask you this. Somebody makes a copy of your mind. They turn it on. Do you see what it sees? Someone touches the new instance of you. Do you feel it? When you die, do you inhabit it? Or are you dead?

If you're revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.

That said, whole brain emulation ought to be experimentally feasible, in what, fifteen years? At a consumer price point in 40? (Assuming the general trend of Moore's law stays constant). That's little enough time that I think the probability of such a dystopian future is not incredibly large. Especially since Alcor... (read more)
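The implicit arithmetic behind the gap between "experimentally feasible" and "consumer price point", assuming cost per computation halves roughly every two years (my assumption, standing in for "the general trend of Moore's law"):

$$ \frac{\text{cost at first feasibility}}{\text{cost 25 years later}} \;\approx\; 2^{25/2} \;\approx\; 6 \times 10^{3}, $$

i.e., the same emulation workload becomes three to four orders of magnitude cheaper over the intervening 25 years, which is roughly the kind of drop that turns a lab demonstration into something consumers can buy.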

0A1987dM
Maybe, but scanning a vitrified brain with such a high resolution that a copy would feel more or less like the same person might take a bit longer.

Sure, there's some ambiguity there, but over adequately large sample sizes, trends become evident. Peer reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.

2[anonymous]
That is a general defense of the concept of statistical analysis. It doesn't have anything to do with my point. It's pretty damn slow about correcting for pervasive biases in the researcher population, though. There's a reason we talk about science advancing funeral-by-funeral.

Because it correlates with intelligence and seems indicative of deeper trends in animal neurology. Probably not a signpost that carries over to arbitrary robots, though.

7[anonymous]
The problem with that is that, for any being that can't clearly and unambiguously report its experiences of mirror self-recognition to us (nonhuman animals generally -- there are claims of language use, but those would be considered controversial, to put it mildly, if used as evidence here) we have to guess whether or not the animal recognized itself based on its behaviors and their apparent relevance to the matter. It's necessarily an act of interpretation. Humans frequently mistake other humans for simpler, less-reflective beings than is actually the case due to differences of habit, perception and behavior -- simply because they don't react in expected ways based on the same stimulus. Human children have been subjected to the mirror test and passed or failed based on whether they tried to remove a sticker from their own faces. It should not be difficult to list alternative reasons for why a child wouldn't remove a sticker from their face. I find myself wondering if these researchers remember ever having been children before...

If cryonics is not performed extremely quickly, ischemic clotting can seriously inhibit cortical circulation, preventing good perfusion with cryoprotectants, and causing partial information-theoretic death. Being cryopreserved within a matter of minutes is probably necessary, barring a way to quickly improve circulation.

Your idea of provincialism is provincial. The idea of shipping tinned apes around the solar system is the true failure of vision here, nevermind the bag check procedures.

2Decius
How quickly do you think humans will give up commuting?

Not quite. It actually replaces it with the problem of maximizing people's expected reported life satisfaction. If you wanted to choose to try heroin, this system would be able to look ahead, see that that choice will probably drastically reduce your long-term life satisfaction (more than the annoyance at the intervention), and choose to intervene and stop you.

I'm not convinced 'what's best for people' with no asterisk is a coherent problem description in the first place.

0TheOtherDave
Sure, I accept the correction. And, sure, I'm not convinced of that either.

By bounded, I simply meant that all reported utilities are normalized to a universal range before being summed. Put another way, every person has a finite, equal fraction of the machine's utility to distribute among possible future universes. This is entirely to avoid utility monsters. It's basically a vote, and they can split it up however they like.

Also, the reflexive consistency criterion should probably be applied even to people who don't exist yet. We don't want plans to rely on creating new people, then turning them into happy monsters, even i... (read more)
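A minimal sketch of the normalization idea described above (the structure and names are mine, not a spec): each person's reported utilities over candidate worlds are rescaled to a unit voting budget, so an agent reporting astronomically large raw utilities still casts exactly one vote.

```python
from typing import Dict, List

def normalize(raw_utilities: Dict[str, float]) -> Dict[str, float]:
    """Rescale one person's raw utilities over worlds into a unit vote budget."""
    lo, hi = min(raw_utilities.values()), max(raw_utilities.values())
    if hi == lo:  # indifferent between all worlds: spread the vote evenly
        return {w: 1.0 / len(raw_utilities) for w in raw_utilities}
    shifted = {w: u - lo for w, u in raw_utilities.items()}
    total = sum(shifted.values())
    return {w: u / total for w, u in shifted.items()}

def choose_world(everyone: List[Dict[str, float]]) -> str:
    """Sum each person's normalized vote and return the winning world."""
    scores: Dict[str, float] = {}
    for person in everyone:
        for world, share in normalize(person).items():
            scores[world] = scores.get(world, 0.0) + share
    return max(scores, key=scores.get)

# A "utility monster" reporting huge raw numbers still only casts one vote.
people = [
    {"A": 10.0, "B": 0.0},
    {"A": 0.0, "B": 1.0},
    {"A": 0.0, "B": 1e12},   # monster-sized raw utilities, same unit budget
]
print(choose_world(people))  # -> "B"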

2Mitchell_Porter
This matters more for initial conditions. A mature "FAI" might be like a cross between an operating system, a decision theory, and a meme, that's present wherever sufficiently advanced cognition occurs; more like a pervasive culture than a centralized agent. Everyone would have a bit of BAUM in their own thought process.

I can think of an infinite utility scenario. Say the AI figures out a way to run arbitrarily powerful computations in constant time. Say its utility function is over survival and happiness of humans. Say it runs an infinite loop (in constant time), consisting of a formal system containing implementations of human minds, which it can prove will have some minimum happiness, forever. Thus, it can make predictions about its utility a thousand years from now just as accurately as ones about a billion years from now, or n, where n is any finite number of years. Summing the future utility of the choice to turn on the computer, from zero to infinity, would be an infinite result. Contrived, I know, but the point stands.
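Formally, the trouble is just that a guaranteed positive utility floor makes the undiscounted series diverge:

$$ \sum_{t=0}^{\infty} u_t = \infty \quad \text{whenever } u_t \ge u_{\min} > 0 \text{ for every } t, $$

so an agent summing undiscounted future utility assigns the same infinite value to every plan that secures such a floor and can no longer rank them.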

If we can extract utility in a purer fashion, I think we should. At the bare minimum, it would be much more run-time efficient. That said, trying to do so opens up a whole can of worms of really hard problems. This proposal, provided you're careful about how you set it up, pretty much dodges all of that, as far as I can tell. Which means we could implement it faster, should that be necessary. I mean, yes, AGI is still a very hard problem, but I think this reduces the F part of FAI to a manageable level, even given the impoverished understanding we have ... (read more)

2TheOtherDave
Well, it replaces it with a more manageable problem, anyway. More specifically, it replaces the question "what's best for people?" with the question "what would people choose, given a choice?" Of course, if I'm concerned that those questions might have different answers, I might be reluctant to replace the former with the latter.

Reflexively Consistent Bounded Utility Maximizer?

Hrm. Doesn't exactly roll off the tongue, does it? Let's just call it a Reflexive Utility Maximizer (RUM), and call it a day. People have raised a few troubling points that I'd like to think more about before anyone takes anything too seriously, though. There may be a better way to do this, although I think something like this could be workable as a fallback plan.

-2Mitchell_Porter
Let me review the features of the algorithm:
* The FAI maximizes overall utility.
* It obtains a value for the overall utility of a possible world by adding the personal utilities of everyone in the world. But there is a bound. It's unclear to me whether the bound applies directly to personal utilities - so that a personal utility exceeding the bound is reduced to the bound for the purposes of subsequent calculation - or whether the bound applies to the sum of personal utilities - so that if the overall utility of a possible world exceeds the bound, it is reduced to the bound for the purposes of decision-making (comparison between worlds).
* If one of the people whose personal utilities gets summed is a future continuation of an existing person (someone who exists at the time the FAI gets going), then the present-day person gets to say whether that is a future self of which they would approve.
The last part is the most underspecified aspect of the algorithm: how the approval-judgement is obtained, what form it takes, and how it affects the rest of the decision-making calculation. Is the FAI only to consider scenarios where future continuants of existing people are approved continuants, with any scenario containing an unapproved continuant just ruled out apriori? Or are there degrees of approval? I think I will call my version (which probably deviates from your conception somewhere) a "Bounded Approved Utility Maximizer". It's still a dumb name, but it will have to do until we work our way to a greater level of clarity.

Note the reflexive consistency criterion. That'd only happen if everyone predictably looked at the happy monster and said 'yep, that's me, that agent speaks for me.'

-1Mitchell_Porter
OK... I am provisionally adopting your scheme as a concrete scenario for how a FAI might decide. You need to give this decision procedure a name.