AI Alignment Intro Materials
• Applied to Doing Nothing Utility Function by k64 2mo ago
• Applied to AI Alignment and the Quest for Artificial Wisdom by Myspy 4mo ago
• Applied to UC Berkeley course on LLMs and ML Safety by Ruby 4mo ago
• Applied to So you want to work on technical AI safety by gw 5mo ago
• Applied to Talk: AI safety fieldbuilding at MATS by Ryan Kidd 5mo ago
• Applied to Podcast interview series featuring Dr. Peter Park by jacobhaimes 8mo ago
• Applied to INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park by jacobhaimes 8mo ago
• Applied to INTERVIEW: StakeOut.AI w/ Dr. Peter Park by jacobhaimes 9mo ago
• Applied to A starter guide for evals by Marius Hobbhahn 10mo ago
• Applied to Hackathon and Staying Up-to-Date in AI by jacobhaimes 10mo ago
• Applied to Interview: Applications w/ Alice Rigg by jacobhaimes 1y ago
• Applied to Into AI Safety: Episode 3 by jacobhaimes 1y ago
• Applied to Into AI Safety Episodes 1 & 2 by jacobhaimes 1y ago
plex v1.4.0, Nov 5th 2023 GMT (+51/-26)

• Stampy's AI Safety Info (extensive interactive FAQ)
• Scott Alexander's Superintelligence FAQ
• The MIRI Intelligence Explosion FAQ
• The Stampy.AI wiki project
• The AGI Safety Fundamentals courses
• Superintelligence (book)
• Applied to Into AI Safety - Episode 0 by jacobhaimes 1y ago
• Applied to Documenting Journey Into AI Safety by jacobhaimes 1y ago
• Applied to Apply to a small iteration of MLAB to be run in Oxford by RP 1y ago