Note: we'll be meeting IRL this week at Nathan's (@nburn42#4388) place, indoors (though possibly outdoors as well).

Reading:

AI Safety (Week 3, AI Threat Modeling)

This week we will look at AI "threat models," which, succinctly defined, are a "combination of a development model that says how we get AGI and a risk model that says how AGI leads to existential catastrophe" (Rohin Shah). You will come to understand what people mean by "hard takeoff" and "soft takeoff", and you will familiarize yourself with "inner misalignment" and "outer misalignment".

- "Hard Takeoff" by Eliezer Yudkowsky: https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff


- "What Failure Looks Like" by Paul Christiano: https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like


- "Deceptive Alignment" by Hubinger and others: https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment

 

Location: 11841 Wagner St., Culver City

Note the following:
1) Things start winding down around 11PM.
2) Parking: use street parking; do not park in the driveway of the house. Street parking is free and doesn't require any permits.
3) Please take a rapid COVID test the night before the meetup.

Schedule:
6:30PM - 7:00PM - Gathering/Chatting
7:00PM - 7:20PM - Surprise of the Week
7:20PM - 7:30PM - Announcements
7:30PM - 9:00PM - Group Discussion
9:00PM - ??? - Hanging Out

Contact: The best way to contact me (or anybody else who is attending the meetup) is through our Discord. Feel free to message Vishal or Nathan directly. Invitation link: https://discord.gg/TaYjsvN
