Haiku

Co-founder of AI-Plans and volunteer with PauseAI.

The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.

Comments

Haiku20

The reasoning you gave sounds sensible, but it doesn't comport with observations. Only questions with a small number of predictors (e.g. n<10) appear to have significant problems with misaligned incentives, and even then, those issues come up a small minority of the time.

I believe that is because the culture on Metaculus of predicting one's true beliefs tends to override any other incentives downstream of being interested enough in the concept to have an opinion.

Time can be a factor, but not as much for long-shot conditionals or long-time-horizon questions. The time investment to predict on a question you don't expect to update regularly can be on the order of one minute.

Some forecasters aim to maximize baseline score, and some aim to maximize peer score. That influences each forecaster's decision about whether to predict, but it doesn't seem to have a significant impact on the aggregate. Maximizing peer score incentivizes forecasters to stay away from questions where they strongly agree with the community. (That choice doesn't affect the community prediction in those cases.) Maximizing baseline score incentivizes forecasters to stay away from questions on which they would predict with high uncertainty, which slightly selects for people who at least believe they have some insight.
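
For intuition, here is a minimal sketch (not Metaculus's actual implementation) of why those incentives fall out of the two scores. It assumes simplified log-score forms for a binary question: baseline score roughly 100 × log2(2p), and peer score roughly 100 × (ln p minus the other forecasters' mean ln p); the exact constants and clipping Metaculus uses may differ.

```python
import math

# Assumed, simplified forms of the two scores for a binary question.
# "p" is the probability the forecaster assigned to the outcome that occurred.

def expected_baseline_score(my_p: float, true_p: float) -> float:
    """Expected baseline score if the event truly occurs with probability true_p."""
    return 100 * (true_p * math.log2(2 * my_p)
                  + (1 - true_p) * math.log2(2 * (1 - my_p)))

def expected_peer_score(my_p: float, community_p: float, true_p: float) -> float:
    """Expected peer score against a community that predicts community_p."""
    score_if_yes = 100 * (math.log(my_p) - math.log(community_p))
    score_if_no = 100 * (math.log(1 - my_p) - math.log(1 - community_p))
    return true_p * score_if_yes + (1 - true_p) * score_if_no

# Predicting 50% on a genuine coin flip: expected baseline score is 0,
# so a baseline-maximizer gains nothing from high-uncertainty questions.
print(expected_baseline_score(0.5, 0.5))   # 0.0

# Agreeing exactly with the community: expected peer score is 0,
# so a peer-maximizer gains nothing from questions where they already agree.
print(expected_peer_score(0.8, 0.8, 0.8))  # 0.0

# A genuine edge pays off under both scores.
print(expected_baseline_score(0.8, 0.8))   # ~27.8
print(expected_peer_score(0.8, 0.5, 0.8))  # ~19.3
```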

Questions that would resolve in 100 years or only if something crazy happens have essentially no relationship with scoring, so with no external incentives in any direction, people do what they want on those questions, which is almost always to predict their true beliefs.

Haiku10

Metaculus does not have this problem, since it is not a market and there is no cost to make a prediction. I expect long-shot conditionals on Metaculus to be more meaningful, then, since everyone is incentivized to predict their true beliefs.

Haiku32

Not building a superintelligence at all is best. This whole exchange started with Sam Altman apparently failing to notice that governments exist and can break markets (and scientists) out of negative-sum games.

Haiku48

That requires interpretation, which can introduce unintended editorializing. If you spotted the intent, the rest of the audience can as well. (And if the audience is confused about intent, the original recipients may have been as well.)

I personally would include these sorts of notes about typos if I were writing my own thoughts about the original content, or if I were sharing a piece of it for a specific purpose. I take the intent of this post to be more a form of accessible archiving.

Haiku30

I used to be a creationist, and I have put some thought into this stumbling block. I came to the conclusion that it isn't worth leaving out analogies to evolution, because the style of argument that would work best for most creationists is completely different to begin with. Creationism is correlated with religious conservatism, and most religious conservatives outright deny that human extinction is a possibility.

The Compendium isn't meant for that audience, because it explicitly presents a worldview, and religious conservatives tend to strongly resist shifts to their worldviews or the adoption of new worldviews (more so than others already do). I think it is best left to other orgs to make arguments about AI Risk that are specifically friendly to religious conservatism. (This isn't entirely hypothetical. PauseAI US has recently begun to make inroads with religious organizations.)

Haiku66

I don't find any use for the concept of fuzzy truth, primarily because I don't believe that such a thing meaningfully exists. The fact that I can communicate poorly does not imply that the environment itself is not a very specific way. To better grasp the specific way that things actually are, I should communicate less poorly. Everything is the way that it is, without a moment of regard for what tools (including language) we may use to grasp at it.

(In the case of quantum fluctuations, the very specific way that things are involves precise probabilistic states. The reality of superposition does not negate the above.)

Haiku143

I am not well-read on this topic (or at all read, really), but it struck me as bizarre that a post about epistemology would begin by discussing natural language. This seems to me like trying to grasp the most fundamental laws of physics by first observing the immune systems of birds and the turbulence around their wings.

The relationship between natural language and epistemology is more anthropological* than it is information-theoretic. It is possible to construct models that accurately represent features of the cosmos without making use of any language at all, and as you encounter in the "fuzzy logic" concept, human dependence on natural language is often an impediment to gaining accurate information.

Of course, natural language grants us many efficiencies that make it extremely useful in ancestral human contexts (as well as most modern ones). And given that we are humans, to perform error correction on our models, we have to model our own minds and the process of examination and modelling itself as part of the overall system we are examining and modelling. But the goal of that recursive modelling is to reduce the noise and error caused by the fuzziness of natural language and other human-specific* limitations, so that we can make accurate and specific predictions about stuff.

*The rise of AI language models means natural language is no longer a purely human phenomenon. It has also had the side effect of solving the symbol grounding problem by constructing accurate representations of natural language using giant vectors that map inputs to abstract concepts, map abstract concepts to each other, and map all of that to testable outputs. This seems to be congruent with what humans do as well. Here again, formalization and precise measurement in order to discover the actual binary truth values that really do exist in the environment is significantly more useful than accepting the limitations of fuzziness.
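
As a toy illustration of the "giant vectors" picture (made-up 3-dimensional vectors, not any real model's embeddings): concepts can be represented as points in a shared space, where geometric relationships between the vectors stand in for relationships between the concepts themselves.

```python
import numpy as np

# Made-up toy vectors for illustration only; real models learn
# thousands of dimensions from data.
embeddings = {
    "dog":  np.array([0.9, 0.1, 0.0]),
    "wolf": np.array([0.8, 0.2, 0.1]),
    "car":  np.array([0.0, 0.1, 0.95]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two concept vectors (1.0 = same direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["wolf"]))  # high: related concepts
print(cosine_similarity(embeddings["dog"], embeddings["car"]))   # low: unrelated concepts
```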

Haiku113

Is this an accurate summary of your suggestions?

Realistic actions an AI Safety researcher can take to save the world:

  • ✅ Pray for a global revolution
  • ✅ Pray for an alien invasion
  • ❌ Talk to your representative
Answer by Haiku20

In my spare time, I am working in AI Safety field building and advocacy.

I'm preparing for an AI bust in the same way that I am preparing for success in halting AI progress intentionally: by continuing to invest in retirement and my personal relationships. That's my hedge against doom.

Haiku32

I think this sort of categorization and exploration of lab-level safety concepts is very valuable for the minority of worlds in which safety starts to be a priority at frontier AI labs.
