All of Stefan Stoyanov's Comments + Replies

Saying that a system could possibly kill everyone is a strong claim, but you do not provide any details on why this would be the case.

At the same time, you say you are against censorship, but would apparently support it if it would prevent the worst-case scenario.

I guess everyone reading will have their own ideas (a race war, proliferation of cheaply made biological weapons, mass tax avoidance, etc.), but can you please elaborate and provide more details on why 10-20%?

paulfchristiano
I just state my view rather than arguing for it; it's a common discussion topic on LW and on my blog. For some articles where people make the case in a self-contained way, see Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover or AGI safety from first principles.

I'm saying that I will try to help people get AI to do what they want. I mostly think that's good, both now and in the future. There will certainly be some things people want their AI to do that I'll dislike, but I don't think "no one can control AI" is very helpful for avoiding that, and it comes with other major costs (even today).

(Compared to the recent batch of SSC commenters, I'm also probably less worried about the "censorship" that is happening today; its current extent seems overstated, and I think people are overly pessimistic about its likely future. Overall, I think this is far less of an issue than other limits on free speech right now that could more appropriately be described as "censorship.")