Why don't SIAI researchers definitively solve some difficult open problem in mathematics, programming, or engineering as proof of their abilities?
Yes, it would take time that could otherwise be spent on AI-related philosophy, but it would unambiguously demonstrate SIAI's competence.
You mean, like decision theory? Both Timeless Decision Theory (which Eliezer developed) and Updateless Decision Theory (developed mostly by folks who are now SI Research Associates) represent groundbreaking work in the field, and both are currently being written up for publication, I believe.
I intended "Leveling Up in Rationality" to communicate this:
But some people seem to have read it and heard this instead:
This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:
So, I have a few questions: