All of Lionel Levine's Comments + Replies

Suppose I hand you a circuit C that I sampled uniformly at random from the set of all depth-n reversible circuits satisfying P. What is a reason to believe that there exists a short heuristic argument for the fact that this particular C satisfies P?

2Eric Neyman
We've done some experiments with small reversible circuits. Empirically, a small circuit generated in the way you suggest has very obvious structure that makes it satisfy P (i.e., it is immediately evident from looking at the circuit that P holds). This leaves open the question of whether the same is true as the circuits get large. Our reasons for believing that it is are mostly based on the same "no-coincidence" intuition highlighted by Gowers: a naive heuristic estimate suggests that if there is no special structure in the circuit, the probability that it satisfies P is doubly exponentially small. So if C does satisfy P, it is probably because of some special structure.
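The flavor of such an experiment can be sketched in a few lines, with the caveat that everything here is a stand-in: the gate set, the depth, and above all the property P (the real P is not specified in this thread, so the toy below uses "the circuit fixes the all-zeros string"). The point is only to illustrate the naive heuristic estimate: for a "structureless" circuit, which should scramble the all-zeros input to a pseudo-random n-bit string, P would hold with probability about 2^-n.

```python
import random

def random_gate(n, rng):
    """One random reversible gate on n wires (toy gate set: NOT, CNOT, Toffoli)."""
    kind = rng.choice(["NOT", "CNOT", "TOFFOLI"])
    if kind == "NOT":
        return ("NOT", rng.randrange(n))
    if kind == "CNOT":
        c, t = rng.sample(range(n), 2)
        return ("CNOT", c, t)
    c1, c2, t = rng.sample(range(n), 3)
    return ("TOFFOLI", c1, c2, t)

def apply_circuit(gates, bits):
    """Apply the circuit (a list of gates) to a tuple of bits."""
    bits = list(bits)
    for g in gates:
        if g[0] == "NOT":
            bits[g[1]] ^= 1
        elif g[0] == "CNOT":
            bits[g[2]] ^= bits[g[1]]
        else:
            _, c1, c2, t = g
            bits[t] ^= bits[c1] & bits[c2]
    return tuple(bits)

def satisfies_P(gates, n):
    # Toy stand-in for P: the circuit maps the all-zeros string to itself.
    zeros = (0,) * n
    return apply_circuit(gates, zeros) == zeros

rng = random.Random(0)
n, depth, trials = 6, 40, 20000
freq = sum(satisfies_P([random_gate(n, rng) for _ in range(depth)], n)
           for _ in range(trials)) / trials
print(f"empirical frequency: {freq:.4f}; naive estimate for a structureless circuit: {2**-n:.4f}")
```

Sampling uniformly from circuits *conditioned* on P (as in the original question) is harder than this rejection-style check, but the same heuristic baseline is what makes a satisfying circuit look like a "coincidence" absent structure.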

mathematicians who worked on FrontierMath might not have contributed to this if they had known about the funding and exclusive access.

 

I'm a mathematician who contributed to FrontierMath. 

Speaking just for myself: From the beginning it was clear reading between the lines that the project had an industry sponsor, with OAI being an obvious guess. I judged the project as having a less favorable safety/capabilities tradeoff than my other research, to the point where I drafted, but did not ultimately send, an email bowing out. In hindsight, I thin... (read more)

So, boundaries enable cooperation by protecting the BATNA.

Would you say there is a boundary between cell and mitochondria?

In the limit of perfect cooperation, the BATNA becomes minus infinity and the boundary dissolves.

Thanks, all, for responding! The meetup will be this Thursday; any other Ithaca locals, DM for details!

This is an obvious point, but: Any goal is likely to include some variance minimization as a subgoal, if only because of the possibility that another entity (rival AI, nation state, company) with different goals could take over the world.  If an AI has the means to take over the world, then it probably takes seriously the scenario that a rival takes over the world. Could it prevent that scenario without taking over itself?

3Stuart_Armstrong
This is a variant of my old question:

* There is a button at your table. If you press it, it will give you absolute power. Do you press it?

Most people answer no. Followed by:

* Hitler is sitting at the same table, and is looking at the button. Now do you press it?

money is probably much less valuable after AGI than before, indeed practically worthless.

I think this overstates the case against money. Humans will always value services provided by other humans, and these will still be scarce after AGI. Services provided by humans will grow in value (as measured by utility to humans) if AGI makes everything else cheap.  It seems plausible that money (in some form) will still be the human-to-human medium of exchange, so it will still have value after AGI.

2Daniel Kokotajlo
It does not make the case against money at all; it just states the conclusion. If you want to hear the case against money, well, I guess I can write a post about it sometime. So far I haven't really argued at all, just stated things. I've been surprised by how many people disagree (I thought it was obvious). To the specific argument you make: Yeah, sure, that's one factor. Ultimately a minor one, in my opinion; it doesn't change the overall conclusion.

If Alice and Bob are talking to each other as they deliberate

I think this is a typo; it should say "compete" instead of "deliberate".

I worry about persuasion becoming so powerful that it blocks deliberation: How can Alice know whether Bob (or his delegated AI) is deliberating in good faith or trying to manipulate her?

In this scenario, small high-trust communities can still deliberate, but mutual mistrust prevents them from communicating their insights to the rest of the world.

2paulfchristiano
I meant "while they deliberate," as in the deliberation involves them talking to work out their differences or learn from each other. But of course the concern is that this in itself introduces an opportunity for competition even if they had otherwise decoupled deliberation, and indeed the line between competition and deliberation doesn't seem crisp for groups.

I think this is possible and it doesn’t require AI. It only requires a certain kind of "infectious Turing machine" described below. 

Following Gwern’s comment, let’s first consider the easier problem of writing a program on a small portion of a Turing machine’s tape that draws a large smiley face on the rest of the tape. This is easy even with a *worst-case* initialization of the rest of the tape. Our problem, by contrast, is not solvable in the worst case, as Richard_Kennaway pointed out.
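Why the easier problem is easy under worst-case initialization: the program can live entirely in the finite control, with each state unconditionally writing one symbol and moving right, never branching on what it reads. A minimal sketch, with "draw a large smiley face" replaced by writing a fixed string (the tape encoding and pattern here are toy assumptions, not anyone's actual construction):

```python
# Toy one-way-infinite tape; the "program" is the finite control below.
PATTERN = list("SMILEY")  # stand-in for "draw a large smiley face"

def run(tape, start=0):
    """Run a machine with one state per pattern symbol. Each transition
    writes unconditionally and moves right; since no transition reads
    the scanned symbol, the output is the same for ANY (worst-case)
    initialization of the tape."""
    tape = list(tape)
    head, state = start, 0
    while state < len(PATTERN):
        if head == len(tape):
            tape.append("_")          # extend the tape on demand
        tape[head] = PATTERN[state]   # write, ignoring the scanned symbol
        head += 1                     # move right
        state += 1                    # advance; halt after the last write
    return "".join(tape)

print(run("XXXXXXXXXX"))  # → "SMILEYXXXX", regardless of the adversarial X's
```

The harder problem in the post differs exactly where this sketch is trivial: once the surrounding tape (or environment) can corrupt the program region itself, the write-only strategy fails and some error-correcting, self-repairing structure is needed.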

What makes our problem harder is errors caused by the r... (read more)