Wiki-Tags in Need of Work

Someone is well-calibrated if the things they predict with X% chance of happening in fact occur X% of the time. Importantly, calibration is not the same as accuracy. Calibration is about accurately assessing how good your predictions are, not about making good predictions. Person A, whose predictions are marginally better than chance (60% of them come true when choosing from two options) and who is precisely 60% confident in their choices, is perfectly calibrated. In contrast, Person B, who is 99% confident in their predictions and right 90% of the time, is more accurate than Person A but less well-calibrated... (read more)
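
To make the comparison concrete, here is a minimal sketch (Python, with made-up numbers chosen to match the example above) of one way to check calibration: group predictions by stated confidence and compare each group's stated confidence with the fraction that actually came true.

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: iterable of (stated_confidence, came_true) pairs.

    Groups predictions by stated confidence and prints, for each group,
    the stated confidence, the observed frequency of coming true, and
    the gap between them (the calibration error for that group).
    """
    groups = defaultdict(list)
    for confidence, came_true in predictions:
        groups[confidence].append(came_true)
    for confidence in sorted(groups):
        outcomes = groups[confidence]
        observed = sum(outcomes) / len(outcomes)
        gap = abs(confidence - observed)
        print(f"stated {confidence:.0%}  observed {observed:.0%}  gap {gap:.0%}")

# Made-up data matching the example: Person A is 60% confident and right
# 6 times out of 10; Person B is 99% confident and right 9 times out of 10.
person_a = [(0.60, True)] * 6 + [(0.60, False)] * 4
person_b = [(0.99, True)] * 9 + [(0.99, False)] * 1

calibration_report(person_a)  # stated 60%  observed 60%  gap 0%
calibration_report(person_b)  # stated 99%  observed 90%  gap 9%
```

On these toy numbers, Person A shows no gap between stated and observed confidence (perfectly calibrated despite modest accuracy), while Person B shows a 9-point gap (overconfident despite higher accuracy).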

Transparency and interpretability refer to the extent to which the decision processes and inner workings of AI and machine learning systems can be understood by humans or other outside observers... (read more)

AI Risk is the analysis of the risks associated with building powerful AI systems... (read more)

The Rationality tag is for posts about how to think in ways that more reliably lead you to true beliefs and to decisions that achieve your goals.

Recent Tag & Wiki Activity

Value drift refers to the idea that over time, the values or goals of a person or an AI system can change, often in ways that weren’t originally intended. For humans, this might happen as life experiences, personal growth, or external influences cause someone's beliefs to evolve. For AI, it could occur if the system starts to interpret its goals differently as it learns and interacts with the world.

AI Racing

AI Racing is when multiple actors (corporations or governments) are each incentivized to make capabilities advances, even if it would be better for everyone if neither actor were advancing capabilities.

AI Safety Cases

A safety case is a structured argument showing that a system is acceptably safe for a specific use in a specific environment. Safety cases typically include:

  • A description of the system's operational context
  • Identification of potential hazards and their consequences
  • A description of the risk controls that mitigate the hazards
  • An account of any residual risk

Assorted Projects 

Assorted Projects to help people improve their epistemics, and individuals to contact:

  • Calibration – see how justified your own confidence is
  • Pastcasting – forecasting past results so you get instant feedback
  • Historical Base rates – much thinking requires knowing roughly how often things have happened in the past (cf. Our World In Data)
  • Prizes for summarization (by Nonlinear) – making it easier to get a basic understanding of EA/LessWrong topics
  • Displaying estimates – the Squiggle team seem focused on the first step of this

Potential projects:

  • Displaying estimates
    • The Squiggle team seem focused on the first step of this
      • Ozzie Gooen
    • Nathan Young is interested in it

Note:

  • It is hard to build epistemic infrastructure among rationalists, because anyone who is capable of doing it can work on AI safety, and most do.

Sometimes people dismiss Newcomb's problem because of the physical impossibility of a being like Omega. However, Newcomb's problem does not actually depend on the possibility of Omega in order to be relevant. Similar issues arise if we imagine a skilled human psychologist who can predict other people's actions with 65% accuracy in similar situations.


I think this page would be more useful if it linked to the individual sequences it lists.

As far as I've seen, there is no page that links to all sequences in order, which would be useful for working through them systematically.

Clinical Trials

Clinical trials are often required before a new drug or medical device will be approved for commercial sale. In some cases, they can be expensive, long, and risky, representing the most costly part of new drug development.

Reminder to self: they are no longer in beta, remove that message
The footnote icon is the [*] on the right.
  • You can use #, ##, ### at the beginning of a line to insert Heading Level 1, 2, 3
  • > at the beginning of a paragraph makes it a quote block
  • >! makes for a spoiler tag on a paragraph
  • Three dashes will insert a horizontal rule
  • Adding ~~ before and after text (~~text~~) will make it crossed out