Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure humanity's survival and flourishing in the short and long term. He strongly emphasizes the importance of x-risk reduction.

At 30:20 in the talk:

I'll say a little bit about the single one [x-risk] on this list that I worry about the most, which is (...) Unfriendly Artificial Intelligence. (...) One thing I think is clear: that we really don't know [about the impact of foom-able AI]. (...) My feeling is if we really don't know, probably we should put at least a little bit of thought into thinking about it (...) with a few awesome exceptions [points out the FHI] there is way too little attention given to this.

A very nice touch a bit later on when he says he worries about this also as a father, which reinforces the point that x-risk isn't just something academic, but would have an actual, real impact on his actual family's actual well-being. It's easy to banish x-risk discussions to some academic sphere of armchair theorycrafting, and not realize that if e.g. the planet explodes, that encompasses your house as well. Even your comfy chair!

Visions of it swam sickeningly through his nauseated mind. There was no way his imagination could feel the impact of the whole Earth having gone, it was too big. He prodded his feelings by thinking that his parents and his sister had gone. No reaction. He thought of all the people he had been close to. No reaction. Then he thought of a complete stranger he had been standing behind in the queue at the supermarket before and felt a sudden stab - the supermarket was gone, everything in it was gone. Nelson's Column had gone! Nelson's Column had gone and there would be no outcry, because there was no one left to make an outcry. From now on Nelson's Column only existed in his mind. England only existed in his mind - his mind, stuck here in this dank smelly steel-lined spaceship. A wave of claustrophobia closed in on him.

England no longer existed. He'd got that - somehow he'd got it. He tried again. America, he thought, has gone. He couldn't grasp it. He decided to start smaller again. New York has gone. No reaction. He'd never seriously believed it existed anyway. The dollar, he thought, has sunk for ever. Slight tremor there. Every Bogart movie has been wiped, he said to himself, and that gave him a nasty knock. McDonald's, he thought. There is no longer any such thing as a McDonald's hamburger.

He passed out.

It's from 'The Hitchhiker's Guide to the Galaxy'. There, I saved you a google.

I'm a bit confused about the prior he uses to assign probabilities over the distance to the nearest extraterrestrial life. Although I agree that a log-flat prior is a good idea for this problem, it is important to acknowledge that it is biased towards the unconstrained large scales. Since there is a minimum length scale by construction (the size of the Earth or so), it would seem fairer if he imposed a large-scale cutoff as well (at the radius of the observable universe, say). This way we could no longer claim that extraterrestrial life is most likely to be found beyond the edge of our observable universe, but we could possibly still rule out our own galaxy.
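To make the bias concrete, here is a minimal sketch of the cutoff argument, with illustrative length scales I picked myself (an Earth-sized lower cutoff, a Milky Way scale, and an observable-universe upper cutoff; none of these numbers are from the talk):

```python
import math

# Illustrative length scales in metres (my assumptions, not Tegmark's figures):
R_MIN = 1e7      # lower cutoff: roughly the size of the Earth
R_GALAXY = 1e21  # roughly the size of the Milky Way
R_MAX = 1e26     # upper cutoff: roughly the radius of the observable universe

def log_flat_mass(lo, hi, a=R_MIN, b=R_MAX):
    """Probability that the nearest life lies between lo and hi
    under a log-flat prior truncated to [a, b]."""
    return (math.log(hi) - math.log(lo)) / (math.log(b) - math.log(a))

print(log_flat_mass(R_MIN, R_GALAXY))   # ~0.74 of the prior mass within the galaxy
print(log_flat_mass(R_GALAXY, R_MAX))   # ~0.26 of the prior mass beyond it
```

Without the upper cutoff the normalization diverges and the prior mass piles up at arbitrarily large scales, which is exactly the bias described above.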

Aside from that, an excellent (and entertaining) talk by Tegmark.

He is concerned that AIs might not be conscious. Interestingly, this is the exact opposite of Eliezer, who fears that they might be (though I may be misremembering). I think Tegmark is mainly talking about UFAIs that replace us (rather than FAIs that protect us) - so basically he's saying he'd value a conscious clippy, but not an unconscious one.

Does he define "conscious"?

No. Elsewhere he has said "I believe that consciousness is the way information feels when being processed", but in this talk he seems to retreat a little. He describes a positive singularity with p-zombie AIs/robots that have perception and appear conscious, but aren't "aware" of the world around them. He doesn't clarify how perception differs from awareness, and doesn't mention introspection at all.

So... basically he doesn't know what he is talking about?

Neither does anyone who is talking about consciousness...