harfe

harfe

I think you are broadly right.

So we're automatically giving ca. higher probability – even before applying the length penalty.

But note that under the Solomonoff prior, you will get another penalty for these programs with DEADCODE. So with this consideration, the weight changes from (for normal ) to (normal plus DEADCODE versions of ), which is not a huge change.

For your case of "uniform probability until " I think you are right about exponential decay.
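The "not a huge change" point can be illustrated with a toy calculation. This is my own construction, not the thread's: assume binary programs weighted 2^-length as under a Solomonoff-style prior, and exactly one dead-code variant per padding length k (append an END-marker bit, then k zeros).

```python
# Toy illustration (my own construction, not from the original thread):
# programs are bit strings weighted 2^-length, as under a Solomonoff-style
# prior. Each program p of length n has one dead-code variant per k >= 1,
# of length n + 1 (marker bit) + k (zeros of dead code).

def padded_weight(n, max_k=60):
    """Weight of p (length n) plus all its dead-code-padded variants."""
    base = 2.0 ** -n
    extra = sum(2.0 ** -(n + 1 + k) for k in range(1, max_k))
    return base + extra

n = 10
print(padded_weight(n) / 2.0 ** -n)  # -> approx 1.5
```

The padded variants form a geometric series summing to half the original weight, so the total weight only grows by a constant factor of about 1.5 – a change of this flavor, not an exponential blow-up.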

harfe

I have doubts that the claim about "theoretically optimal" applies to this case.

Now, you have not provided a precise notion of optimality, so the example below might not apply if you come up with another notion of optimality, assume that voters collude with each other, use a certain decision theory, or make other assumptions. There is also the complication that the optimal strategy for each player depends on the strategies of the other players; a typical choice in these cases is to look at Nash equilibria.

Consider three charities A,B,C and two players X,Y who can donate $100 each. Player X has utilities , , for the charities A,B,C. Player Y has utilities , , for the charities A,B,C.

The optimal (as in most overall utility) outcome would be to give everything to charity B. This would require that both players donate everything to charity B. However, this is not a Nash-equilibrium, as player X has an incentive to defect by giving to A instead of B and getting more utility.

This specific issue is like the prisoner's dilemma and could be addressed with other assumptions or decision theories.
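The defection argument can be checked numerically. A minimal sketch with hypothetical utilities (the specific numbers in the original comment were lost, so these are stand-ins with the same structure: B maximizes total utility, but X strictly prefers A):

```python
# Hypothetical per-dollar utilities (stand-ins; the original numbers were
# lost) for players X and Y over charities A, B, C.
u = {
    "X": {"A": 3, "B": 2, "C": 0},
    "Y": {"A": 0, "B": 2, "C": 3},
}
BUDGET = 100  # each player donates $100 to a single charity

def payoff(player, sx, sy):
    """Utility `player` derives when X donates to sx and Y donates to sy."""
    return BUDGET * (u[player][sx] + u[player][sy])

# Total utility is maximized when both donate everything to B ...
totals = {(sx, sy): payoff("X", sx, sy) + payoff("Y", sx, sy)
          for sx in "ABC" for sy in "ABC"}
print(max(totals, key=totals.get))  # -> ('B', 'B')

# ... but (B, B) is not a Nash equilibrium: X gains by deviating to A.
print(payoff("X", "A", "B") > payoff("X", "B", "B"))  # -> True
```

With these numbers, each dollar to B produces 4 units of total utility versus 3 for A or C, yet X's unilateral switch from B to A raises X's own payoff from 400 to 500 – exactly the defection incentive described above.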

The difference between this scenario and the claims in the literature might be that funding public goods is not the same as donating to charity, or that here the players cannot decide to keep the funds for themselves. But I am not sure about the precise reasons.

Now, I do not have an alternative distribution mechanism ready, so please do not interpret this argument as serious criticism of the overall initiative.

harfe

There is also Project Quine, a newer attempt to build a self-replicating 3D printer.

harfe

This was already referenced here: https://www.lesswrong.com/posts/MW6tivBkwSe9amdCw/ai-existential-risk-probabilities-are-too-unreliable-to

I think it would be better to comment there instead of here.

harfe

One thing I find positive about SSI is their intent to not have products before superintelligence (note that I am not arguing here that the whole endeavor is net-positive). Not building intermediate products lessens the impact on race dynamics. I think it would be preferable if all the other AGI labs had a similar policy (funnily, while typing this comment, I got a notification about Claude 3.5 Sonnet... ). The policy not to have any product can also give them cover to focus on safety research that is relevant for superintelligence, instead of doing some shallow control of the output of LLMs.

To reduce bad impacts from SSI, it would be desirable that SSI also

  • have a clearly stated policy to not publish their capabilities insights,
  • take security sufficiently seriously to be able to defend against nation-state actors that try to steal their insights.

harfe

It does not appear paywalled to me. The link that @mesaoptimizer posted is an archive, not the original bloomberg.com article.

harfe

I haven't watched it yet, but there is also a recent technical discussion/podcast episode about AIXI and related topics with Marcus Hutter: https://www.youtube.com/watch?v=7TgOwMW_rnk

harfe

> It suffices to show that the Smith lotteries that the above result establishes are the only lotteries that can be part of maximal lottery-lotteries are also subject to the partition-of-unity condition.

I fail to understand this sentence. Here are some questions about this sentence:

  • what are Smith lotteries? Ctrl+F only finds "lottery-Smith lottery-lotteries" – do you mean these? Or do you mean lotteries that are Smith?

  • which result do you mean by "above result"?

  • What does it mean for a lottery to be part of maximal lottery-lotteries?

  • does "also subject to the partition-of-unity condition" refer to the Smith lotteries or to the lotteries that are part of maximal lottery-lotteries? (It also feels like there is a word missing somewhere.)

  • Why would this suffice?

  • Is this part also supposed to imply the existence of maximal lottery-lotteries? If so, why?

Answer by harfe

A lot of the probabilities we talk about are probabilities we expect to change with evidence. If we flip a coin, our p(heads) changes after we observe the result of the flipped coin. My p(rain today) changes after I look into the sky and see clouds. In my view, there is nothing special in that regard for your p(doom). Uncertainty is in the mind, not in reality.

However, how you expect your p(doom) to change depending on facts or observation is useful information and it can be useful to convey that information. Some options that come to mind:

  1. describe a model: If your p(doom) estimate is the result of a model consisting of other variables, just describing this model is useful information about your state of knowledge, even if that model is only approximate. This seems to come closest to your actual situation.

  2. describe your probability distribution of your p(doom) in 1 year (or another time frame): You could say that you think there is a 25% chance that your p(doom) in 1 year is between 10% and 30%. Or give other information about that distribution. Note: by conservation of expected evidence, your current p(doom) should be the mean of your distribution over your p(doom) in 1 year.

  3. describe your probability distribution of your p(doom) after a hypothetical month of working on a better p(doom) estimate: You could say that if you were to work hard for a month on investigating p(doom), you think there is a 25% chance that your p(doom) after that month is between 10% and 30%. This is similar to 2., but imo a bit more informative. Again, your current p(doom) should be the mean of your distribution over your p(doom) after that hypothetical month, even if you don't actually do the investigation.
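The consistency requirement in 2. and 3. – your current probability equals the mean of your distribution over your future probability – can be verified in a toy Bayesian model. This stand-in model is my choice, not from the answer: a Beta(2,5) belief about an event's probability, updated on one observation.

```python
from fractions import Fraction as F

# Toy check that the current probability equals the expected future
# probability. Stand-in model (my choice, not from the answer): a
# Beta(2,5) belief about an event's probability, updated on one draw.
a, b = F(2), F(5)
p_now = a / (a + b)                # current probability: 2/7

p_if_yes = (a + 1) / (a + b + 1)   # posterior mean if the event occurs
p_if_no = a / (a + b + 1)          # posterior mean if it does not

# Expected future probability, weighted by how likely each outcome is:
expected_future = p_now * p_if_yes + (1 - p_now) * p_if_no
print(expected_future == p_now)  # -> True
```

The update can move the probability up (to 3/8) or down (to 1/4), but the probability-weighted average of those destinations is exactly the starting 2/7 – which is why a p(doom) that you expect to drift in a known direction is inconsistent.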
