All of ThirdSequence's Comments + Replies

This recent tweet claims that your current p(doom) is 50%.

 In another post, you mentioned:

"[...] I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day."

If the tweet is credible, I am curious whether this difference in p(doom) is due to the day-to-day fluctuations of your belief, or whether you have considered new evidence and your initial belief that p(doom) < 20% is now outdated.

6paulfchristiano
I clarified my views here because people kept misunderstanding or misquoting them. The grandparent describes my probability that humans irreversibly lose control of AI systems, which I'm still guessing at 10-20%. I should probably think harder about this at some point and revise it, I have no idea which direction it will move. I think the tweet you linked is referring to the probability for "humanity irreversibly messes up our future within 10 years of building human-level AI." (It's presented as "probability of AI killing everyone" which is not really right.) I generally don't know what people mean when they say p(doom). I think they probably imagine that the vast majority of existential risk from AI comes from loss of control, and that catastrophic loss of control necessarily leads to extinction, both of which seem hard to defend.

Concluding that "Both these board members also have national security ties" because the husband of one of the board members played Edward Snowden in a movie seems far-fetched, to say the least. 

1Mitchell_Porter
Tasha McCauley is certainly harder to pin down than Helen Toner, who is a straight-up national-security academic. But, there's McCauley's company GeoSim Systems, which does geospatial modeling and was founded by the ex-head of R&D for the Israeli air force. Her husband actually traveled to Moscow to meet Snowden, who worked for the NSA, which does geospatial intelligence. The making of the film was surrounded by subterfuge, her husband's grandfather was on McCarthy's Hollywood blacklist, actors (like journalists) can be good partners for spy agencies... The covert world is a web that we are all caught in, but some people are more entangled than others. 

Based on the sentiment expressed by OpenAI employees on Twitter, the ones who are (potentially) leaving are not doing so because of a disagreement with the AI safety approach, but rather because of how the entire situation was handled by the board (e.g. the lack of reasons provided for firing Sam Altman).

If this move was done for the sake of AI safety, wouldn't OpenAI risk disgruntling employees who would otherwise be aligned with the original mission of OpenAI?

Can anybody here think of potential reasons why the board has not disclosed further details about their decision?

It seems the sources are supporters of Sam Altman. I have not seen any indication of this from the board's side.

This seems to suggest a huge blunder.

You are turning this into a hypothetical scenario where your only communication options are "AGI is near" and "AGI is not near". 

"We don't know if AGI is near, but it could be." would seem short enough to me. 

6Elizabeth
For the general public considered as a unit, I think "We don't know if AGI is near, but it could be." is much too subtle. I don't know how to handle that, but I think the right way to talk about it is "this is an environment that does not support enough nuance for this true statement to be heard, how do we handle that?", not "pretend it can handle more than it can."[1] I think this is one reason doing mass advocacy is costly, and should not be done lightly. There are a lot of advantages to staying in arenas that don't render a wide swath of true things unsayable. But I don't think it's correct to totally rule out participating in those arenas either.

[1] And yes, I do think the same holds for vegan advocacy in the larger world. I think simplifying to "veganism is totally healthy* (*if you do it right)" is fine enough for pamphlets and slogans, as long as it's followed up with more nuanced information later and not used to suppress equally true information.
2Gordon Seidoh Worley
See my original answer for why I think picking such short messages is necessary. Tl;dr: most people aren't paying attention and round off details, so in some contexts you have to communicate with the shortest possible message that can't be rounded off further. Your proposed message will be rounded off to "we don't know", which seems unlikely to me to inspire the correct actions at this point in time.

Humans alive today not being a random sample can be a valid objection against the Doomsday argument but not for the reasons that you are mentioning. 

You seem to be suggesting something along the lines of "Given that I am at the beginning, I cannot possibly be somewhere else. Everyone who finds themselves in the position of the first humans has a 100% chance of being in that position". However, for the Doomsday argument, your relative ranking among all humans is not the given variable but the unknown variable. Just because your ranking is fixed (you co... (read more)

It seems that your understanding of the Doomsday argument is not entirely correct - at least your village example doesn't really capture the essence of the argument.

Here is a different analogy: Let's imagine a marathon with an unknown number of participants. For the sake of argument, let's assume it could be a small local event or a massive international competition with billions of runners. You're trying to estimate the size of this marathon, and to assist you, the organizer picks a random runner and tells you how many participants are trailing behind the... (read more)
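To make the marathon analogy concrete, here is a minimal Monte Carlo sketch (an illustrative addition, not part of the original comment; the field sizes and the "double the trailing count" estimator are arbitrary assumptions). Assuming the organizer really does pick a runner uniformly at random, the reported trailing count alone is enough to recover the true field size on average:

```python
import random

def sample_trailing(total_runners: int) -> int:
    """Organizer picks a runner uniformly at random; report how many trail behind them."""
    position = random.randint(1, total_runners)  # 1 = leader, total_runners = last place
    return total_runners - position

def estimate_field_size(trailing: int) -> int:
    """Naive estimate: a uniformly sampled runner is, on average, mid-pack."""
    return 2 * trailing + 1

for true_size in (100, 10_000, 1_000_000):
    estimates = [estimate_field_size(sample_trailing(true_size)) for _ in range(100_000)]
    print(f"true size {true_size:>9,}  mean estimate {sum(estimates) / len(estimates):>12,.1f}")
```

The point of the sketch is only that a uniformly random sample carries real information about the total; whether we are entitled to treat our own birth rank as such a sample is exactly what the reply below disputes.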

2Adam Kaufman
In the doomsday argument, we are the random runner. If the runner with only 10 people behind him assumed his position was randomly selected, and tried to estimate the total number of runners, he would be very wrong. We could very well be that runner near the back of the race; we weren't randomly selected to be at the back, we just are, and the fact that there are ten people behind us doesn't give us meaningful information about the total number of runners.

Your objection against the Doomsday argument does not make much sense to me. The argument is simply based on the number of humans born to date (whether you are looking at it from your own perspective or not).

1Adam Kaufman
Okay, suppose I was born in Teenytown, a little village on the island nation of Nibblenest. The little one-room schoolhouse in Teenytown isn't very advanced, so no one ever teaches me that there are billions of other people living in all the places I've never heard of. Now, I might think to myself, the world must be very small – surely, if there were billions of people living in millions of towns and cities besides Teenytown, it would be very unlikely to be born in Teenytown; therefore, Teenytown must be one of the only villages on Earth. Clearly, this is absurd, right? The Doomsday argument says that if there are lots of other people in X scenario that is different from mine (be it living in the future or across the ocean), then it would be unlikely for me to experience not X, therefore those other people most likely don't exist. But I am me, and I couldn't be anyone else. It makes no sense to talk about the "probability of being me". I don't think it is possible to "assume I am a randomly sampled observer" or something like that. The number of humans that I notice have been born to date does not depend whatsoever on how many humans might exist in the future. My experience looks exactly the same whether humanity will be deleted tomorrow by a rogue black hole or spend billions of years spreading across the universe.

Wouldn't the first point be a motivation for the AI to remove our ability to send such a signal (in case we have not done so by the point where such arguments become relevant)?

1TinkerBird
I don't think sending out a signal actually matters - I was just making my point. Even without a signal, when our superintelligence meets another superintelligence that was aligned to its original creators (who we'll assume are similar to us in terms of morality), it'll at some point be asked "Were you nice to your creators?" and it can either answer "No" and lose out, or answer "Yes" and be rewarded. No real signal required. I've heard Yudkowsky has debunked this, but I know that if I had an aligned superintelligence, I would 100% have it be nicer to other superintelligences that were good to their creators over those that weren't. The prisoners' dilemma is one reason for this, and then there's my own morality, which some aliens may share if they went through a similar evolution to us.

I think this post makes some interesting points... However, I have reservations about the potential effectiveness of this article in actually influencing AI behavior, as it relies on many assumptions that could be challenged by similarly (im)plausible assumptions, potentially rendering the post harmful or inconsistent with other online discourse.

For instance, if we assume that the AI is susceptible to the influence of posts like this, then the inclusion of the paragraph suggesting that the AI may be in a simulation could actually increase the risk of deceptiv... (read more)