Vladimir_Nesov comments on AI Risk and Opportunity: A Strategic Analysis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I want to make it bigger, as much as I can. It doesn't matter how small a chance of winning there is, as long as our actions improve it. Giving up doesn't seem like a strategy that leads to winning. The strategy of navigating the WBE transition (or some more speculative intelligence improvement tool) is a more complicated question, and I don't see in what way the background catastrophic risk matters for it.
This also came up in a previous discussion we had: it's necessary to distinguish between the risk within a given interval of years and the eventual risk (i.e., the risk of never building a FAI). The same action can make the immediate risk worse but the probability of eventually winning higher. I think encouraging an open effort to research metaethics through decision theory is like that; also, better acceptance of the problem might be leveraged to offset the hypothetical increase in UFAI risk.
Yes, if we're talking about the overall chance of winning, but I was talking about the chance of winning through a specific scenario (directly building FAI). If the chance of that is tiny, why did your cost/benefit analysis of the proposed course of action (encouraging open FAI research) focus entirely on it? Shouldn't we be thinking more about how the proposal affects other ways of winning? ETA: To spell it out, encouraging open FAI research decreases the probability that we win via the WBE race or via intelligence amplification, by increasing the probability that UFAI happens first.
Nobody is saying "let's give up". If we don't encourage open FAI research, we can still push for a positive Singularity in other ways, some of which I've posted about recently in discussion.
What do you mean? What aren't you seeing?
Yes, of course. I am talking about the probability of eventually winning.
(Another thread of this conversation is here.)
I see; I'm guessing you view the "second round" (post-WBE/human intelligence improvement) as not being similarly unlikely to eventually win. I agree that if the first round (working on FAI now, pre-WBE) has only a tiny chance of winning, while the second has a non-tiny chance (taking into account the probability of no catastrophe before the second round, and of its being dominated by a FAI project rather than a random AGI), then it's better to sacrifice the first round to make the second round healthier. But I also see only a tiny chance of winning the second round, mostly because of the increasing UFAI risk and the difficulty of winning a race that grants you the advantages of the second round rather than producing a UFAI really fast.
Near/Far. Long-term effects aren't predictable and shouldn't be traded for more predictable short-term losses. In my experience this fails the Predictable Retrospective Stupidity test: even when you try to factor in structural uncertainty, you still end up getting burned. And even if you still want to make such a tradeoff, you should halt all research until you've reached agreement, or a natural stopping point, with Wei Dai or others who have reservations. Stop, melt, catch fire; don't destroy the world.
(Disclaimer: This comment is fueled by a strong emotional reaction due to contingent personal details that might or might not upon further reflection deserve to be treated as substantial evidence for the policy I recommend.)
Just to make clear what specific idea this is about: Wei points out that researching FAI might increase UFAI risk, and suggests that therefore FAI shouldn't be researched. My reply is to the effect that while FAI research might increase UFAI risk within any given number of years, it also decreases the risk of never solving FAI (which IIRC I put at something like 95% if we research it pre-WBE, and 97% if we don't).
When I analyzed this problem previously, my reasoning matched Nesov's here.