Huge kudos to you for putting this together, Habryka!
Besides Superintelligence, the latest "major" publication on the subject is Yudkowsky's "Intelligence Explosion Microeconomics." There are also a few articles related to the topic at AI Impacts.
In descending order of importance:
- Instrumental rationality.
- How to actually practice becoming more rational.
- Rationality via cluster thinking and intelligent imitation.
If there is an objective morality, but we don't care about it, is it relevant in any way?
I think Peter Singer wrote a paper arguing "no," but I can't find it at the moment.
Marblestone et al., "Physical principles for scalable neural recording."
wow everyone is so squinty
It was so bright out! The photo has my eyes completely closed, unfortunately. :)
Is the idea to get as many people as possible to sign this? Or do we want to avoid the image of a giant LW puppy jumping up and down while barking loudly, when the matter finally starts getting attention from serious people?
After the first few pages of signatories, I recognize very few of the names, so my guess is that LW signers will just get drowned in the much larger population of people who support the basic content of the research priorities document, which means there's not much downside to lots of LWers signing the open letter.
Context: Elon Musk thinks there's an issue in the 5-7 year timeframe (probably due to talking to Demis Hassabis at DeepMind, I would guess). By that standard I'm also less afraid of AI than Elon Musk is, but as Rob Bensinger will shortly be fond of saying, this conflates AGI danger with AGI imminence (a very, very common conflation).
Awesome!
I've been dying for something like this after I zoomed through all the questions in the CFAR calibration app.
Notes so far:
* The highest-available confidence is 99%, so the lowest-available confidence should be 1% rather than 0%. Or even better, you could add 99.9% and 0.1% as additional options.
* So far I've come across one question that was blank: it just said "Category: jewelry" and had no other text. Somehow the answer was Ernest Hemingway.
* Would be great to be able to sign up for an account so I could track my calibration across multiple sessions.