All of Daniel Uebele's Comments + Replies

I'll put up a paper saying "ACX Meetup" so you know who to wander over to.

We'll meet at the library and then head over to the Pho place for lunch.

Is the goal of all this persuasion to get people to fire off a letter like the one above?

trevor:
The goal of the explanation is to give people a fair chance of understanding AI risk. You can either give someone a fair chance to model the world correctly, or you can fail to give them that fair chance. More fairness is better.  I could tell from the post that Omid did not feel confident in their ability to give someone a fair chance at understanding AI risk.

Thanks, I've fired off a version of this to my three representatives.  I'm going to pass out copies to my friends and family.

I think I can express Sam's point using logical argumentation.

1) Your internal motivational structure is aimed at X.

2) Y is a prerequisite for X.

3) Therefore, you ought to do Y in order to achieve X. (This is the only sense in which "ought" means anything.)

First-time post! I signed up just to say this. If there's a problem with the formulation I just described, I'd love to know what it is. I've been confused for years about why this seems so difficult for some people, or why Sam can't put it in these terms.
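
For readers who like the argument spelled out formally, here is a minimal Lean sketch of the three-step formulation above, assuming a goal-relative reading of "ought". All names in it (Agent, Action, Goal, aimsAt, prerequisiteFor, Ought) are hypothetical labels introduced for illustration; none come from the original comment or from Sam's writing.

```lean
-- A minimal sketch (hypothetical names throughout) of the three-step argument:
-- "ought" is read as goal-relative, so premises (1) and (2) give (3) by definition.
section InstrumentalOught

variable {Agent Action Goal : Type}

-- (1) the agent's motivational structure is aimed at some goal X
variable (aimsAt : Agent → Goal → Prop)
-- (2) doing Y is a prerequisite for achieving X
variable (prerequisiteFor : Action → Goal → Prop)

/-- The only sense of "ought" the argument allows: `a` ought to do `y` iff
    there is a goal that `a` aims at for which `y` is a prerequisite. -/
def Ought (a : Agent) (y : Action) : Prop :=
  ∃ x : Goal, aimsAt a x ∧ prerequisiteFor y x

/-- (3) follows immediately from (1) and (2) once "ought" is read this way. -/
theorem ought_of_aim_and_prerequisite (a : Agent) (y : Action) (x : Goal)
    (h1 : aimsAt a x) (h2 : prerequisiteFor y x) :
    Ought aimsAt prerequisiteFor a y :=
  Exists.intro x (And.intro h1 h2)

end InstrumentalOught
```

The inference itself is trivial; the substantive step is the definition of `Ought`, which is where any objection to the formulation would have to land.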