glauberdebona

Computer scientist and engineer, amateur philosopher and chess player.

Comments

Correcting myself: I have just added "doubly" above.

For the time-complexity of the verifier to grow only linearly with $n$, we also need to assume that the complexity of the reduction $r$ is $O(T)$ for a fixed $T$, regardless of $n$, which I admit is quite optimistic. If the time-complexity of the reduction $r$ is doubly exponential in $n$, the (upper bound on the) time-complexity of the verifier will be exponential in $n$, even with the stronger version of Reduction-Regularity.
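
To make the first claim concrete, here is a toy recurrence; $t(n)$, $T$, and the one-reduction-call-per-level structure are my own simplifying assumptions, not necessarily the construction in the post:

```latex
% Toy model (assumption): the verifier at parameter n makes one call to the
% reduction r (cost T) and one call to the verifier at parameter n-1.
\begin{align*}
t(n) &= T + t(n-1)\\
     &= t(0) + n\,T && \text{(unrolling the recurrence)}
\end{align*}
% The bound t(0) + nT is linear in n exactly when T is a fixed constant
% independent of n; if instead T = T(n) grows with n, then
% t(n) = t(0) + \sum_{i=1}^{n} T(i) inherits that growth.
```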

By "unconditionally", do you mean without an additional assumption, such as Reduction-Regularity?

I actually have a result in the other direction (from an older version of this post). That is, with a slightly stronger assumption, we can derive a better upper bound (in terms of $n$) for the time-complexity of the verifier. The stronger assumption is just a stronger version of Reduction-Regularity, in which a tighter term replaces the original one in the inequality. It could still lead to a bound exponential in $n$ if the complexity of the reduction $r$ increases exponentially with $n$. In the version presented in this post, the exponential bound relies on the complexity of $r$ being $O(T)$ for a fixed $T$, regardless of $n$, which is admittedly optimistic. If we assume this together with the stronger version of Reduction-Regularity, then I think the upper bound on the time-complexity of the verifier would increase only linearly with $n$, which is a huge improvement (I can add something to the appendix about this if you are interested in the details). In the end, I opted to present the weakest assumption here, even if it corresponds to a worse complexity bound.

Your comment seems to imply that the conjecture's "99%" is about circuits $C$ for which $P(C)$ is false. Otherwise, it would be impossible for a $V$ to miss 100% of random reversible circuits without missing the circuits $C$ for which $P(C)$ is true. In the conjecture, should "random reversible circuits $C$" be read as "random reversible circuits $C$ for which $P(C)$ is false"? It does not change much, indeed, but I might have to correct my presentation of the conjecture here.
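
For concreteness, this is the structure I have in mind, written with the conjecture's $V$, $P$, and $\pi$; the quantifiers are my reading rather than a quotation:

```latex
% (1) Completeness: V accepts every circuit C satisfying P, given some proof.
% (2) Soundness: for at least 99% of random reversible circuits C,
%     no proof makes V accept.
\begin{align*}
&\text{(1)}\quad \forall C:\; P(C) \implies \exists \pi \;\, V(C,\pi)=1\\
&\text{(2)}\quad \Pr_{C}\big[\,\forall \pi:\; V(C,\pi)=0\,\big] \;\ge\; 0.99
\end{align*}
% Since (1) forces V to accept every C with P(C) true, pushing the 99% in (2)
% to 100% would require V to reject some circuits with P(C) true,
% contradicting (1) -- hence the question of whether the 99% should be read
% over circuits with P(C) false.
```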

I think I have an idea for making that 99% closer to 1, but its development grew too long for a comment here, so I made a post about it.

Thanks for the feedback! The summary here and the abstract in the draft paper have been updated; I hope it is clearer now.

Hi everyone,

I’m Glauber De Bona, a computer scientist and engineer. I left an assistant professor position to focus on independent research related to AGI and AI alignment (topics that led me to this forum). My current interests include self-locating beliefs, particularly the Sleeping Beauty problem.

I’m also interested in philosophy, especially formal reasoning -- logic, Bayesianism, and decision theory. Outside of research, I’m an amateur chess player.