Comments

Liron

Thanks for this post.

I'd love to have a regular (weekly/monthly/quarterly) post that's just "here's what we're focusing on at MIRI these days".

I respect and value MIRI's leadership on the complex topic of building understanding and coordination around AI.

I spend a lot of time doing AI social media, and I try to promote the best recommendations I know to others. Whatever thoughts MIRI has would be helpful.

Given that I think about this less often and less capably than you folks do, it seems like there's a low-hanging-fruit opportunity for people like me to stay more in sync with MIRI. My show (Doom Debates) isn't affiliated with MIRI, but as long as I continue to have no particular disagreements with MIRI, I'd like to make sure I'm pulling in the same direction as you all.

Answer by Liron

I’ve heard MIRI has some big content projects in the works, maybe a book.

FWIW I think having a regular stream of lower-effort content that a somewhat mainstream audience consumes would help to bolster MIRI’s position as a thought leader when they release the bigger works.

Liron

I'd ask: If one day your God stopped existing, would anything have any kind of observable change?

Seems like a meaningless concept: a node in the causal model of reality that doesn't have any power to constrain expectation. The person likes it because knowing the node exists in their own belief network brings them emotional reward.

Liron

> When an agent is goal-oriented, they want to become more goal-oriented, and maximize the goal-orientedness of the universe with respect to their own goal

Because expected value tells us that the more resources you control, the more robustly you can maximize your probability of success in the face of whatever may come at you, and the higher your maximum possible utility is (if your utility function doesn't have an easy-to-hit max score).

“Maximizing goal-orientedness of the universe” was how I phrased the prediction that conquering resources involves having them aligned to your goal / aligned agents helping you control them.

Liron

> > goal-orientedness is a convergent attractor in the space of self-modifying intelligences
>
> This also requires a citation, or at the very least some reasoning; I'm not aware of any theorems that show goal-orientedness is a convergent attractor, but I'd be happy to learn more.

Ok here's my reasoning:

When an agent is goal-oriented, they want to become more goal-oriented, and maximize the goal-orientedness of the universe with respect to their own goal. So if we diagram the evolution of the universe's goal-orientedness, it has the shape of an attractor.

There are plenty of entry paths where some intelligence-improving process spits out a goal-oriented general intelligence (like biological evolution did), but no exit path where a universe whose smartest agent is super goal-oriented ever leads to that no longer being the case.

Liron

I'm happy to have that kind of debate.

My position is "goal-directedness is an attractor state that is incredibly dangerous and uncontrollable if it's somewhat beyond human-level in the near future".

The form of those arguments seems to be "technically it doesn't have to be." But realistically it will be. Not sure how much more there will be to say.

Liron

Thanks. Sure, I’m always happy to update on new arguments and evidence. The most likely way I see myself updating is realizing that the gap between current AIs and human intelligence is actually much larger than it currently seems, e.g. 50+ years, as Robin seems to think. In that case, AI alignment research has a larger chance of working.

I also might lower my P(doom) if international governments start treating this like the emergency it is and do their best to coordinate on a pause. Though unfortunately, even that probably only buys a few years of time.

Finally, I can imagine somehow updating that alignment is easier than it seems, or less of a problem to begin with. But the fact that all the arguments I’ve heard on that front seem very weak and misguided to me makes that unlikely.

Liron

Thanks for your comments. I don’t get how nuclear and biosafety represent models of success. Humanity rose to meet those challenges not quite adequately, and half the reason society hasn’t collapsed from e.g. a thermonuclear weapon going off, either intentionally or accidentally, is pure luck. All it takes to topple humanity is something like nukes but a little harder to coordinate on (or much harder).

Liron

Here's a better transcript hopefully: https://share.descript.com/view/yfASo1J11e0

I updated the link in the post.

Liron

Thanks, I’ll look into that. Maybe try the transcript generated by YouTube?
