jimrandomh comments on General Bitcoin discussion thread (May 2011) - Less Wrong

5 Post author: Kaj_Sotala 20 May 2011 02:03PM


Comment author: jimrandomh 26 May 2011 02:47:38PM 0 points [-]

I don't think there's a well-defined conversion rate. The main issue is that flops are a measure of floating-point arithmetic performance, but SHA256 hashing is mostly bitwise operations that aren't captured in that metric.

However, you can still figure out how much hashing a supercomputer can do, if you can find out how many CPUs it has and what type they are, and how many GPUs it has and what type they are. The same parts are typically used in both supercomputers and desktops, so you should be able to find benchmarks, and the way they're arranged doesn't matter much. (This is a big difference between mining and the tasks supercomputers normally perform; most of the expense of a supercomputer is the I/O backplane, which will go mostly unused.) I'm pretty sure supercomputers will end up losing badly in hashes per dollar.
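The estimate jimrandomh describes is just a part-count-times-benchmark sum; here is a minimal sketch of that arithmetic. The part names and per-part hash rates below are made-up placeholders, not real benchmark figures.

```python
# Back-of-envelope estimate of a cluster's SHA-256 hash rate from its
# part list. Rates are hypothetical placeholders in megahashes/second.
PART_HASH_RATES_MHS = {
    "cpu_x": 20.0,    # assumed per-CPU rate
    "gpu_y": 300.0,   # assumed per-GPU rate
}

def cluster_hash_rate_mhs(parts):
    """Sum per-part benchmark rates over the machine's inventory.

    `parts` maps a part name to how many of that part the machine has.
    The arrangement (interconnect, I/O backplane) is ignored, since
    hashing is embarrassingly parallel and doesn't touch it.
    """
    return sum(PART_HASH_RATES_MHS[name] * count
               for name, count in parts.items())

# A machine with 10,000 of the assumed CPUs and 500 of the assumed GPUs:
rate = cluster_hash_rate_mhs({"cpu_x": 10_000, "gpu_y": 500})
```

The point of ignoring the arrangement is exactly the comment's: the expensive interconnect contributes nothing to an embarrassingly parallel workload, so a simple sum over parts is about as good as the estimate gets.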

Comment author: SilasBarta 26 May 2011 03:00:00PM *  0 points [-]

All true, but I was thinking about a measure that abstracts away from the parallelism/serialness tradeoff. Obviously, supercomputers aren't going to be optimized for ultra-parallelizable tasks the way mining rigs are, and I want a measure that doesn't penalize them for this.

And you don't have to guess about supercomputers being less cost-efficient at hashing -- that's the whole reason that amateurs like me, without any experience building one, can put together a cluster that's hugely ROR-competitive with existing rentable computing services (a theme often noted on the Bitcoin forums).

Still, there are a number of necessary operations at the assembly/machine level to perform a flop, and presumably many of the same operations are used when computing a hash. At the very least, you have to move around memory, add values, etc. There should be some level of commensurability in that respect, right?

Comment author: jimrandomh 26 May 2011 03:20:06PM *  0 points [-]

Still, there are a number of necessary operations at the assembly/machine level to perform a flop, and presumably many of the same operations are used when computing a hash. At the very least, you have to move around memory, add values, etc. There should be some level of commensurability in that respect, right?

Unfortunately, there isn't; in most architectures, the integer and bitwise operations that SHA256 uses and the floating-point operations that FLOPS measure aren't even using the same silicon, except for some common parts that set up the operations but don't limit the rate at which they're done. A typical CPU will do both types of operations, just not with the same transistors, and not with any predictable ratio between the two performance numbers. A GPU will typically be specialized towards one or the other, which is why AMD does so much better than nVidia at mining. An FPGA or ASIC won't do floating point at all.
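To make concrete what kind of operations these are, here is a sketch of the bitwise primitives inside a SHA-256 round (following the published algorithm): everything is 32-bit rotation, XOR, and AND, with no floating point anywhere.

```python
# The core bitwise primitives of a SHA-256 round. Note there is no
# floating-point operation in sight -- just 32-bit integer logic.
MASK32 = 0xFFFFFFFF

def rotr(x, n):
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK32

def ch(x, y, z):
    """SHA-256 'choose': bits of y where x is 1, bits of z where x is 0."""
    return ((x & y) ^ ((~x) & z)) & MASK32

def big_sigma0(x):
    """One of SHA-256's mixing functions: three rotations XORed together."""
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22)
```

A floating-point unit contributes nothing to any of these, which is why a machine's FLOPS rating carries essentially no information about its hash rate.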

Comment author: SilasBarta 26 May 2011 03:53:19PM 0 points [-]

But certainly all of these components can do floating point arithmetic, even if it requires special programming. People could use computers to add decimals before floating-point specialized subsystems existed. And you wouldn't say that an abacus can't handle floating point arithmetic "because it has no mechanism to split the beads".

Comment author: jimrandomh 26 May 2011 04:35:28PM *  0 points [-]

In this case, the emulation would be going the other way - using floating point to emulate integer arithmetic. This can probably be done, but it'd be dramatically less efficient than native integer arithmetic. (Note that "arithmetic" in this case means mainly bitwise rotation, AND, OR, XOR, and 32-bit addition.)
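To illustrate roughly how bad that emulation gets, here is a hypothetical sketch that computes a 32-bit XOR using only floating-point add, multiply, divide, and floor. The per-iteration operation count is a rough assumption, but the shape of the result is the point: one native XOR instruction becomes hundreds of float operations.

```python
import math

def float_xor32(a, b):
    """XOR two 32-bit integers using only floating-point arithmetic.

    Bits are peeled off one at a time by halving-and-flooring, and
    XOR of two {0,1} bits is computed arithmetically as a + b - 2ab.
    Returns (result, rough count of float operations used).
    """
    a, b = float(a), float(b)
    result, place, ops = 0.0, 1.0, 0
    for _ in range(32):
        bit_a = a - 2.0 * math.floor(a / 2.0)   # low bit of a
        bit_b = b - 2.0 * math.floor(b / 2.0)   # low bit of b
        result += (bit_a + bit_b - 2.0 * bit_a * bit_b) * place
        a = math.floor(a / 2.0)                 # shift a right one bit
        b = math.floor(b / 2.0)                 # shift b right one bit
        place *= 2.0
        ops += 13   # rough count of float ops in this iteration (assumed)
    return int(result), ops
```

All values stay below 2^53, so binary64 floats represent them exactly and the result is bit-correct; it just takes a few hundred float operations to do what the integer unit does in one.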