mathematicians who worked on FrontierMath might not have contributed to this if they had known about the funding and exclusive access.
I'm a mathematician who contributed to FrontierMath.
Speaking just for myself: from the beginning it was clear, reading between the lines, that the project had an industry sponsor, with OAI being an obvious guess. I judged the project as having a less favorable safety/capabilities tradeoff than my other research, to the point where I drafted, but did not ultimately send, an email bowing out. In hindsight, I thin...
So boundaries enable cooperation by protecting the BATNA.
Would you say there is a boundary between a cell and its mitochondria?
In the limit of perfect cooperation, the BATNA becomes minus infinity and the boundary dissolves.
Thanks all for responding! The meetup will be this Thursday; any other Ithaca locals, DM for details!
This is an obvious point, but: Any goal is likely to include some variance minimization as a subgoal, if only because of the possibility that another entity (rival AI, nation state, company) with different goals could take over the world. If an AI has the means to take over the world, then it probably takes seriously the scenario that a rival takes over the world. Could it prevent that scenario without taking over itself?
money is probably much less valuable after AGI than before, indeed practically worthless.
I think this overstates the case against money. Humans will always value services provided by other humans, and these will still be scarce after AGI. Services provided by humans will grow in value (as measured by utility to humans) if AGI makes everything else cheap. It seems plausible that money (in some form) will still be the human-to-human medium of exchange, so it will still have value after AGI.
If Alice and Bob are talking to each other as they deliberate
I think this is a typo; it should say "compete" instead of "deliberate".
I worry about persuasion becoming so powerful that it blocks deliberation: How can Alice know whether Bob (or his delegated AI) is deliberating in good faith or trying to manipulate her?
In this scenario, small high-trust communities can still deliberate, but mutual mistrust prevents them from communicating their insights to the rest of the world.
I think this is possible and it doesn’t require AI. It only requires a certain kind of "infectious Turing machine" described below.
Following Gwern’s comment, let’s first consider the easier problem of writing a program on a small portion of a Turing machine’s tape which draws a large smiley face on the rest of the tape. This is easy even with the *worst case* initialization of the rest of the tape. Our actual problem, by contrast, is not solvable in the worst case, as pointed out by Richard_Kennaway.
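To see why the easier problem is easy, here's a minimal sketch (names and the tape representation are my own illustration, not from the original discussion): because the program's writes are unconditional, the adversarially chosen initial contents of the rest of the tape simply don't matter.

```python
# A toy "tape machine": the program region writes a fixed smiley pattern
# over the rest of the tape, ignoring whatever was there before.
# SMILEY, draw_smiley, and the list-as-tape encoding are illustrative choices.

SMILEY = [
    "..XX....XX..",
    "..XX....XX..",
    "............",
    "X..........X",
    ".X........X.",
    "..XXXXXXXX..",
]

def draw_smiley(tape, start):
    """Overwrite tape cells from `start` onward with the flattened smiley.

    Every write is unconditional, so a worst-case (adversarial)
    initialization of the tape beyond `start` cannot affect the result.
    """
    flat = "".join(SMILEY)
    for i, ch in enumerate(flat):
        tape[start + i] = ch
    return tape

# Worst-case initialization: an adversary fills the tape with junk.
tape = ["?"] * 100
draw_smiley(tape, 10)
```

The point of the sketch is that robustness here comes for free: the program never reads the untrusted cells, so no initialization can interfere with it, which is exactly what fails in the harder problem where the rest of the tape can act on the program region.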
What makes our problem harder is errors caused by the r...
Suppose I hand you a circuit C that I sampled uniformly at random from the set of all depth-n reversible circuits satisfying P. What is a reason to believe that there exists a short heuristic argument for the fact that this particular C satisfies P?