Comment author: morganism 14 August 2016 11:16:37PM 0 points

Being vegan isn’t as good for humanity as you think

"But the vegan diet stood out because it was the only diet that used no perennial cropland at all, and, as a result, would waste the chance to produce a lot of food."

http://qz.com/749443/being-vegan-isnt-as-environmentally-friendly-as-you-think/

Comment author: Yaacov 15 August 2016 12:16:42AM *  3 points

This study was trying to figure out what would happen if everyone in the US followed the same diet. That's probably not very useful for individual decision making. Even if lots and lots of people became vegan, we wouldn't stop using grazing land; we would just raise fewer grain-fed animals.

Also, this analysis doesn't seem to consider animal suffering, which I personally find important.

In response to comment by [deleted] on The Blue-Minimizing Robot
Comment author: Gurkenglas 03 December 2013 01:30:49AM 3 points

At best, you could say that it prefers state [I SEE BLUE AND I SHOOT] to state [I SEE BLUE AND I DON'T SHOOT]. But that's all.

No; placing a blue-tinted mirror in front of it will make it shoot itself, even though that greatly diminishes its future ability to shoot. Generally, a generic program really can't be assigned any nontrivial utility function.

Comment author: Yaacov 31 January 2016 01:53:17AM *  0 points

Destroying the robot greatly diminishes its future ability to shoot, but it would also greatly diminish its future ability to see blue. The robot doesn't prefer 'shooting blue' to 'not shooting blue'; it prefers 'seeing blue and shooting' to 'seeing blue and not shooting'.

So the original poster was right.

Edit: I'm wrong, see below

Comment author: Yaacov 26 July 2015 04:57:04AM *  13 points

Hi LW! My name is Yaacov. I've been lurking here for maybe 6 months, but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now, but I expect to learn quickly.

Specific questions:

What can I do to reduce existential risk, especially the risk posed by AI? I don't have an income yet. What are the best investments I can make now in my future ability to reduce existential risk?