Clippy comments on Singularity Institute now accepts donations via Bitcoin - Less Wrong

14 Post author: Kevin 28 February 2011 04:03PM




Comment author: Clippy 01 March 2011 09:00:02PM 0 points

Oh, so it's the hash itself that must be less than a specific value, not the preimage of the hash. Still, that can be rephrased as "inverting any one of the hashes that are below a certain value", which is why I kept mapping it to a hash inversion problem.

And the only known way to do this is to try a large number of bitstrings and see what their hashing yields? Is there a known way to search more efficiently by looking at what bitstrings are likely to have hashes with zeroes in the most significant digits? Could someone improve performance by finding such a method?

Comment author: JGWeissman 01 March 2011 09:07:16PM 4 points

Hashcodes do not have unique inverses. The goal is not just to find any bitstring with a hashcode in the target interval (in which case solutions would be trivially reusable), but to find such a bitstring that also contains information: the hashcode of the previous block (so you can't get a head start before that block is computed) and the transactions to be recorded (so your block is actually useful to the community).
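The structure JGWeissman describes can be sketched in a few lines of Python. This is a toy model, not Bitcoin's actual wire format: real Bitcoin double-SHA-256-hashes an 80-byte binary block header, whereas the string layout and field names here are invented for illustration.

```python
import hashlib


def mine(prev_hash: str, transactions: str, target: int) -> int:
    """Search nonces until the candidate block's hash falls below `target`.

    The candidate commits to the previous block's hash (no head start) and
    to the transactions (the work is tied to useful content).
    """
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        # Bitcoin uses double SHA-256; interpret the digest as a big integer.
        digest = hashlib.sha256(hashlib.sha256(block).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1


# Easy demo target: any hash whose first 4 hex digits are zero qualifies.
target = 1 << (256 - 16)
nonce = mine("00" * 32, "alice pays bob 1 BTC", target)
```

Note that changing a single transaction byte invalidates the nonce entirely, which is why the search cannot be precomputed or reused.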

Comment author: JoshuaZ 01 March 2011 09:05:54PM 2 points

> Oh, so it's the hash itself that must be less than a specific value, not the preimage of the hash. Still, that can be rephrased as "inverting any one of the hashes that are below a certain value", which is why I kept mapping it to a hash inversion problem.

Yes, but much easier. Say, for example, that you want the first 3 digits of your hash to be zero. Then, if the hashes are uniformly distributed (and they should be), such a hash will occur about once every 1000 attempts. In contrast, trying to invert one specific hash is vastly more difficult.
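The once-in-a-thousand figure is easy to check empirically. A minimal sketch, assuming "first 3 digits zero" means the hash falls in the lowest thousandth of the output range, hashing sequential integers as stand-ins for candidate bitstrings:

```python
import hashlib

TRIALS = 200_000
MAX_HASH = 2 ** 256
# "First three digits zero" ~ the hash lands below 1/1000 of the full range.
target = MAX_HASH // 1000

# Hash sequential integers and count how many fall below the target.
hits = sum(
    1
    for i in range(TRIALS)
    if int(hashlib.sha256(str(i).encode()).hexdigest(), 16) < target
)
rate = hits / TRIALS  # should be close to 1/1000 if outputs are uniform
```

With 200,000 trials the hit count should land near 200, confirming that finding *some* qualifying input is cheap even though inverting any *particular* hash is infeasible.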

> And the only known way to do this is to try a large number of bitstrings and see what their hashing yields? Is there a known way to search more efficiently by looking at what bitstrings are likely to have hashes with zeroes in the most significant digits? Could someone improve performance by finding such a method?

If there were a way to quickly find inputs whose hashes have many leading zeros, that would probably come close to making the hash cryptographically broken. Finding such a method isn't the same as finding collisions, but it is in the same class of general results. I don't know the exact details of the process used, but I'd be surprised if there were any known way of approaching this other than direct computation.
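The "direct computation only" point implies a concrete cost curve: each additional leading zero bit demanded should roughly double the expected number of attempts. A small sketch to illustrate (the helper name and the "example" salt are invented for this demo; actual attempt counts vary with the salt):

```python
import hashlib


def attempts_until(target_bits: int, salt: str) -> int:
    """Brute-force count of attempts until a SHA-256 hash of `salt` plus a
    counter has at least `target_bits` leading zero bits."""
    threshold = 1 << (256 - target_bits)
    n = 0
    while True:
        n += 1
        h = int(hashlib.sha256(f"{salt}{n}".encode()).hexdigest(), 16)
        if h < threshold:
            return n


# Expected work grows as ~2**k attempts for k required zero bits, so the
# printed counts should climb roughly geometrically.
for k in (8, 12, 16):
    print(k, attempts_until(k, "example"))
```

If a shortcut existed that beat this exponential scaling, the hash would fail to behave as a random oracle, which is the sense in which such a method would be "close to breaking" it.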