Comments

Yaacov

Interested in theory. I wouldn't move cities to join a baugruppe, but if I ended up in the same city as one, I would like to live there.

Yaacov

This study was trying to figure out what would happen if everyone in the US followed the same diet. That's probably not that useful for individual decision-making. Even if lots and lots of people became vegan, we wouldn't stop using grazing land; we would just raise fewer grain-fed animals.

Also, this analysis doesn't seem to consider animal suffering, which I personally find important.

Yaacov

Destroying the robot greatly diminishes its future ability to shoot, but it would also greatly diminish its future ability to see blue. The robot doesn't prefer 'shooting blue' to 'not shooting blue'; it prefers 'seeing blue and shooting' to 'seeing blue and not shooting'.

So the original poster was right.

Edit: I'm wrong; see below.

Yaacov

Hi LW! My name is Yaacov. I've been lurking here for maybe 6 months, but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic yet, but I'll be learning more quickly.

Specific questions:

What can I do to reduce existential risk, especially the risk posed by AI? I don't have an income yet. What are the best investments I can make now in my future ability to reduce existential risk?