ThirdSequence

Comments

This recent tweet claims that your current p(doom) is 50%.

In another post, you mentioned:

"[...] I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day."

If the tweet is credible, I am curious whether this difference in p(doom) is due to the day-to-day fluctuations of your belief, or whether you have considered new evidence and your initial belief that p(doom) < 20% is now outdated?

Concluding that "Both these board members also have national security ties" because the husband of one of the board members played Edward Snowden in a movie seems far-fetched, to say the least. 

Based on the sentiment expressed by OpenAI employees on Twitter, the ones who are (potentially) leaving are not doing so because of a disagreement with the AI safety approach, but rather because of how the entire situation was handled by the board (e.g. the lack of reasons provided for firing Sam Altman).

If this move was done for the sake of AI safety, wouldn't OpenAI risk disgruntling employees who would otherwise be aligned with the original mission of OpenAI?

Can anybody here think of potential reasons why the board has not disclosed further details about their decision?

You are turning this into a hypothetical scenario where your only communication options are "AGI is near" and "AGI is not near". 

"We don't know if AGI is near, but it could be." would seem short enough to me. 

Humans alive today not being a random sample can be a valid objection to the Doomsday argument, but not for the reasons you are mentioning.

You seem to be suggesting something along the lines of "Given that I am at the beginning, I cannot possibly be somewhere else. Everyone who finds themselves in the position of the first humans has a 100% chance of being in that position". However, for the Doomsday argument, your relative ranking among all humans is not the given variable but the unknown variable. Just because your ranking is fixed (you could not possibly be in any other position) does not mean that it is known or that we cannot make probabilistic statements about it.
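
To make the probabilistic claim concrete, here is a minimal sketch of the underlying inference (assuming the standard self-sampling setup, where your birth rank $r$ is equally likely to be any of the $N$ positions, together with some prior $P(N)$ over the total number of humans who will ever live):

$$P(N \mid r) \;\propto\; P(r \mid N)\,P(N) \;=\; \frac{P(N)}{N} \quad \text{for } N \ge r,$$

so hypotheses with very large $N$ are down-weighted relative to the prior. Your rank being fixed does not block this update; it is simply the observation you condition on.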

It seems that your understanding of the Doomsday argument is not entirely correct - at least your village example doesn't really capture the essence of the argument.

Here is a different analogy: Let's imagine a marathon with an unknown number of participants. For the sake of argument, let's assume it could be a small local event or a massive international competition with billions of runners. You're trying to estimate the size of this marathon, and to assist you, the organizer picks a random runner and tells you how many participants are trailing behind them.

For example, if you're told that there are only 10 runners behind the selected participant, it would be reasonable to conclude that this is likely not a marathon of billions. In such a large event, the odds of picking a runner with only 10 people behind them would be incredibly low. This logic also applies to the Doomsday argument, whether you're an outside observer or one of the 'participants'. The key piece of information here is that you only know the number of individuals 'behind' you, which can be used to infer how likely it is that the total number of 'participants' exceeds X.
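
For what it's worth, here is a small Python sketch of that update (the two candidate field sizes, the uniform prior over them, and the report of 10 runners behind are purely illustrative assumptions):

```python
# Marathon analogy for the Doomsday argument: infer the field size from
# how many runners trail a uniformly chosen participant.

CANDIDATE_SIZES = [1_000, 1_000_000_000]  # small local race vs. "billions"
REPORTED_BEHIND = 10                      # runners behind the chosen one

def prob_report_given_size(n_runners: int, behind: int) -> float:
    """P(exactly `behind` runners trail a uniformly chosen runner | field size)."""
    # Exactly one of the n_runners positions has `behind` runners behind it.
    return 1 / n_runners if behind < n_runners else 0.0

# Bayes' rule with a uniform prior over the two candidate sizes.
likelihoods = [prob_report_given_size(n, REPORTED_BEHIND) for n in CANDIDATE_SIZES]
total = sum(likelihoods)
posteriors = [lik / total for lik in likelihoods]

for n, p in zip(CANDIDATE_SIZES, posteriors):
    print(f"P(field size = {n:>13,} | {REPORTED_BEHIND} runners behind) = {p:.7f}")
```

Running this puts essentially all of the posterior mass on the small race, which is the same structure of inference the Doomsday argument applies to birth ranks.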

Your objection to the Doomsday argument does not make much sense to me. The argument is simply based on the number of humans born to date (whether you are looking at it from your own perspective or not).

Wouldn't the first point be a motivation for an AI to remove our ability to send such a signal (in case we have not already done so by the point where such arguments become relevant)?

I think this post makes some interesting points. However, I have reservations about whether this article could actually influence AI behavior, as it relies on many assumptions that could be challenged by similarly (im)plausible assumptions, potentially rendering the post harmful or inconsistent with other online discourse.

For instance, if we assume that the AI is susceptible to the influence of posts like this, then the inclusion of the paragraph suggesting that the AI may be in a simulation could actually increase the risk of deceptive alignment, making safety evaluations less effective.

Furthermore, one might then also argue that there is equal motivation to avoid comments and posts that propagate the idea that AI will harm humans, as well as discussions of potential problems like the paperclip scenario, in order to avoid providing the AI with potentially dangerous ideas.
