http://biointelligence-explosion.com/

- Site put together by David Pearce

The content and choice of domain name should be of interest.


Despite this witches' brew of new technologies, a conceptual gulf remains in the futurist community between those who imagine human destiny, if any, lies in digital computers and hypothetical artificial consciousness; and in contrast radical bioconservatives who believe that our posthuman successors will also be our supersentient descendants at their neural networked core - not the digital zombies of symbolic AI run on classical serial computers.

Digital creatures need not be "zombies" - any more than human beings are - and they certainly don't need to run on "classical serial computers".

There is a gulf much like the one David describes - but the "bioconservative" position seems unbelievable to me - the future will be engineered.

Random question that just occurred to me: would you be fine if an exact copy was made of you (ignore quantum mechanics for now), and the old you was killed off?

If you asked me afterwards, I'd hardly say "no".

Me? I suppose so - if I could be really convinced the process was reliable. Make two of me and I might need less convincing.

I don't know. The question of self is a hard one. I would not, because I would like my consciousness - as in the one that I control (a little recursive, but you get the point) - to stay alive, and because that other me is a distinct set of atoms that my neurons don't control. So I would say no.

[This comment is no longer endorsed by its author]
[anonymous]

There might not be such a thing as a "distinct set of atoms" at the fundamental level, and even if there is, the atoms and molecules in the constellation that constitutes you are turned over all the time. In short, the you of 5 seconds from now does not consist of the same set of atoms as the present you. Does that make you think that the you of 5 seconds from now is not really you?

In short, the you of 5 seconds from now does not consist of the same set of atoms as the present you. Does that make you think that the you of 5 seconds from now is not really you?

The guy five seconds in the future is me. The guy from five seconds ago... nah, he was kind of a dick.

Burrzz

Could I come back at, say, 21 with the knowledge/wisdom I have now?

Metaphorically, the Biointelligence Explosion represents an "event horizon" beyond which humans cannot model or understand the future.

As usual, the idea that we cannot model or understand the future is bunk. Wolfram goes on about this too - with his "computational irreducibility". Popper had much the same idea - in "The Poverty of Historicism". What is it about the unknowable future that makes it seem so attractive?

What is it about the unknowable future that makes it seem so attractive?

There are a variety of different issues going on here. One is that there is a long history of very inaccurate predictions about the future, so people are reacting against that. Another is that predicting the future with any accuracy is really hard. If the thesis were restricted to "predicting the future is so difficult that the vast majority of it is a waste of time" then it would look more reasonable. I suspect that when some people make this sort of assertion they mean something closer to this.

If the thesis were restricted to "predicting the future is so difficult that the vast majority of it is a waste of time" then it would look more reasonable.

Well, the brain is constantly predicting the future. It has to understand the future consequences of its possible actions - so that it can choose between them. Prediction is the foundation of all decision making. Predicting the future seems rather fundamental and commonplace to me - and I would not normally call it "a waste of time".

Ok. How about "predicting the future to any substantial level beyond the next few years is so difficult that the vast majority of it is a waste of time"?

(I disagree with both versions of this thesis, but this seems more reasonable. Therefore, it seems likely to me that people mean something much closer to this.)

Also note the conflation between two types of singularity, even though only one type (the intelligence explosion) is in the name! Isn't the whole reason to use the term "intelligence explosion" to distinguish one's view from the event-horizon one?

It is best not to use the term "intelligence explosion" for some hypothetical future event in the first place. That is severely messed up terminology.

Many thanks, Dr Manhattan. A scholarly version plus a bibliography (PDF: http://www.biointelligence-explosion.com/biointelligence-explosion.pdf) will appear in Springer's forthcoming Singularity volume (cf. http://singularityhypothesis.blogspot.com/p/central-questions.html) published later this year. Here's a question for LessWrong members. I've been asked to co-author the Introduction. One thing I worry about is that casual readers of the contributors' essays won't appreciate just how radically the Kurzweilian and SIAI conceptions of the Singularity are at odds. Is there anywhere in print or elsewhere where Ray Kurzweil and Eliezer Yudkowsky, say, directly engage each other's arguments in debate? Also, are there any critical but non-obvious points beyond the historical background you think should be included in the introductory overview?

If you are going to make a point about the different notions of a Singularity, it may make sense to use I. J. Good as the prototype of the intelligence-explosion Singularity, since he's the original proponent. That said, I'm not aware of anywhere where Kurzweil and Eliezer directly engage each other.