One possible answer to the argument "attempting to build FAI based on Eliezer's ideas seems infeasible and increases the risk of UFAI without helping much to increase the probability of a good outcome, and therefore we should try to achieve a positive Singularity by other means" is that it's too early to decide this. Even if our best current estimate is that trying to build such an FAI increases risk, there is still a reasonable chance that this estimate will turn out to be wrong after further investigation. Therefore, the counter-argument goes, we ought to mount a serious investigation into the feasibility and safety of Eliezer's design (as well as other possible FAI approaches), before deciding to either move forward or give up.
(I've been given to understand that this is a standard belief within SI, except possibly for Eliezer, which makes me wonder why nobody gave this counter-argument in response to my post linked above. ETA: Carl Shulman did subsequently give me a version of this argument here.)
This answer makes sense to me, except for the concern that even seriously investigating the feasibility of FAI is risky if the team doing so isn't fully rational. For example, they may be overconfident about their abilities and thereby overestimate the feasibility and safety, or fall prey to the sunk cost fallacy once they have developed a lot of FAI-relevant theory in the attempt to study feasibility, or become too attached to their status and identity as FAI researchers, or some team members may disagree with a consensus of "give up", leave to form their own AGI teams, and take the dangerous knowledge they've developed with them.
So the question comes down to: how rational is such an FAI feasibility team likely to be, and is that enough for the benefits to exceed the costs? I don't have a lot of good ideas about how to answer this, but the question seems really important to bring up. I'm hoping this post will prompt SI people to tell us their thoughts, and maybe other LWers have ideas they can share.
Is it just me, or is it crazy that Eliezer and Carl have thought of all of these things but never written them down anywhere? If Eliezer and Carl are unwilling or unable to write down their ideas, then the rest of us have no choice but to try to do strategy work ourselves, even if we have to retrace a lot of their steps. The alternative is for us to go through the Singularity with only two or three people having thought deeply about how best to make it turn out well. It's hard to imagine getting a good outcome while the world is simultaneously that crazy.
I guess my suggestion to you is that if you agree with me that we need a vibrant community of talented people studying and openly debating the best strategy for achieving a positive Singularity, then MIRI ought to be putting more effort into this goal. If it runs into problems like Eliezer and Carl being too slow to write down their ideas, then it should make a greater effort to solve those problems or work around them, for example by encouraging independent outside work, holding workshops to attract more attention to strategic problems, or trying to convince specific individuals to turn their attention to strategy.
While I look forward to talking to Eliezer and you, I do have a concern, namely that I find Eliezer to be much better (either more natively talented, or more practiced, probably both) than I am at making arguments in real time, while I tend to be better able to hold my own in offline formats like email/blog discussions where I can take my time to figure out what points I want to make. So keep that in mind if the chat ends up being really one-sided.
You're preaching to the choir, here...
And you might be underestimating how many different things I tried in order to encourage various experts to write things up at a faster pace.
As for why we aren't spending more resources on strategy work, I refer you to all my previous links and points about that in this thread. Perhaps there are specific parts of my case that you don't find compelling?