Well, now you f*in' tell me.

When I saw the picture, I assumed she was the woman you described in one of your Bayesian conspiracy stories that you post here. But then, she was in a pink jumpsuit, and had, I think, blond hair.

@Daniel_Franke: I was just describing a sufficient, not a necessary condition. I'm sure you can ethically get away with less. My point was just that, once you can make models that detailed, you needn't be prevented from using them altogether, because you wouldn't necessarily have to kill them (i.e. give them information-theoretic death) at any point.

@Tim_Tyler:

The main problem with death is that valuable things get lost. Once people are digital, this problem tends to go away - since you can relatively easily scan their brains - and preserve anything of genuine value. In summary, I don't see why this issue would be much of a problem.

I was going to say something similar, myself. All you have to do is constrain the FAI so that it's free to create any person-level models it wants, as long as it also reserves enough computational resources to preserve a copy so that the model citizen can later be re-instantiated in their virtual world, without any subjective feeling of discontinuity.

However, that still doesn't obviate the question. Since the FAI has limited resources, it still has to know which models it must reserve preservation space for, in order to decide whether the greater utility of running the model justifies the additional resources it requires. Then again, it could just run the model at an accelerated rate, so that the person lives out a full, normal life in their simulated universe, in which case they end up irreversibly dead in their own world anyway.

Khyre: Setting or clearing a bit register regardless of what was there before is a one-bit irreversible operation (the other two one-bit input, one-bit output functions are constant 1 and constant 0).

Face-palm. I can't believe I missed that. Thanks for the correction :-)
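To spell out the correction for myself, here's a quick sketch (my own toy code, nothing more) enumerating the four one-bit-in, one-bit-out functions and checking which are invertible; only the constant set/clear operations come out irreversible:

```python
# Enumerate all functions f: {0, 1} -> {0, 1} and check which are reversible.
# A one-bit function is reversible iff it is a bijection on {0, 1}.
functions = {
    "identity":           lambda b: b,
    "inversion (NOT)":    lambda b: 1 - b,
    "constant 0 (clear)": lambda b: 0,
    "constant 1 (set)":   lambda b: 1,
}

for name, f in functions.items():
    outputs = {f(0), f(1)}
    reversible = len(outputs) == 2   # both outputs occur => bijective
    print(f"{name:18s} reversible: {reversible}")
```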

Anyway, with that in mind, Landauer's principle has the strange implication that resetting anything to a known state, in such a way that the previous state can't be retrieved, necessarily releases heat, and the more information that state conveyed to the observer, the more heat must be released. Okay, end threadjack...
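(One last bit of threadjack, to put that claim in symbols: as I understand the generalized form of Landauer's bound, resetting a register whose prior contents carry $H$ bits of information to the observer must release at least

$$ Q \;\ge\; k_B T \ln 2 \cdot H $$

of heat, so erasing a completely unknown bit ($H = 1$) costs at least $k_B T \ln 2$, while resetting an already-known state ($H = 0$) need release nothing in principle.)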

I'm going to nitpick (mainly because of how much reading I've been doing about thermodynamics and information theory since your engines of cognition post):

Human neurons ... dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature. ... it ought to be possible to run a brain at a million times the speed without ... invoking reversible computing or quantum computing.

I think you mean neurons dissipate a million times the thermodynamic minimum for an irreversible one-bit operation at room temperature, though perhaps it was clear from the next sentence that you were talking about irreversible operations. A reversible operation can be made arbitrarily close to dissipating zero heat.

Even then, a million might be a low estimate. By Landauer's principle, a one-bit irreversible operation requires only kT ln 2 ≈ 2.9e-21 J at 25 °C. Does the brain use more than 2.9e-15 J per synaptic operation?
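A quick sanity check of that arithmetic (a throwaway script; the only input is the CODATA value of Boltzmann's constant):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # 25 degrees C in kelvin

landauer = k_B * T * math.log(2)   # minimum energy to erase one bit
print(f"kT ln 2 at 25 C:      {landauer:.2e} J")        # ~2.85e-21 J
print(f"A million times that: {landauer * 1e6:.2e} J")  # ~2.85e-15 J
```

So the brain would have to spend more than roughly 3e-15 J per synaptic operation for "a million times" to be an underestimate.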

Also, how can a truly one-bit digital operation be irreversible? The only such operations that both input and output one bit are the identity and inversion gates, both of which are reversible.

I know, I know, tangential to your point...

Nick_Tarleton: I think you're going a bit too far there. Stability and control theory had by that time been rigorously and scientifically studied, dating back to Watt's flyball governor in the 18th century (which regulated shaft speed by letting its weighted balls swing outward as the shaft sped up, closing a valve and throttling the engine), and probably even before that with the incubator (which used heat to move a valve that let in just the right amount of cooling air). Then, all throughout the 19th century, engineers attacked the problem of "hunting" on trains, where the vehicles would sway from side to side with unsettling violence at speed. Bicycles, a fairly recent invention then, had to tackle the rotational stability problem, somewhat similar (as many bicycle design constraints are) to what aircraft deal with.

Certainly, many inventors grasped at straws in an attempt to replicate the birds' functionality, but the idea that they considered the stability implications of the beak isn't too outlandish.

@Scott_Aaronson: Previously, you said the problem is solved with certainty after O(1) queries (which you had to say, to satisfy the objection). Now you're saying that after O(1) queries it's merely a "high probability". Haven't you changed which claim you're defending?

Second, how can the required number of queries not depend on the problem size?

Finally, isn't your example a special case of exactly the situation Eliezer_Yudkowsky describes in this post? In it, he pointed out that the "worst case" corresponds to an adversary who knows your algorithm. But if you specifically exclude that possibility, then a deterministic algorithm is just as good as the random one, because it would have the same correlation with a randomly chosen string. (It's just like the lockpicking problem: guessing all the sequences in order is no worse than randomly picking guesses and crossing them off your list.) The apparent success of randomness is again due to "acting so crazy that a superintelligent opponent can't predict you".

Which is why I summarize Eliezer_Yudkowsky's position as: "Randomness is like poison. Yes, it can benefit you, but only if you use it on others."
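To illustrate the lockpicking point, here's a toy simulation (my own sketch, with a made-up candidate-space size): against a uniformly random combination, trying candidates in a fixed order and trying them in a random order without repeats take the same number of guesses on average.

```python
import random

N_COMBOS = 100       # size of the candidate space (arbitrary toy value)
TRIALS = 100_000

def guesses_in_order(secret):
    """Try 0, 1, 2, ... until the secret is found."""
    for count, guess in enumerate(range(N_COMBOS), start=1):
        if guess == secret:
            return count

def guesses_random_no_repeat(secret):
    """Try candidates in a random order, crossing each one off the list."""
    for count, guess in enumerate(random.sample(range(N_COMBOS), N_COMBOS), start=1):
        if guess == secret:
            return count

total_ordered = total_random = 0
for _ in range(TRIALS):
    secret = random.randrange(N_COMBOS)   # no adversary: the secret is uniform
    total_ordered += guesses_in_order(secret)
    total_random += guesses_random_no_repeat(secret)

# Both averages converge to (N_COMBOS + 1) / 2 = 50.5
print("fixed order:  ", total_ordered / TRIALS)
print("random order: ", total_random / TRIALS)
```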

Could Scott_Aaronson, or anyone who knows what he's talking about, please tell me the name of the n/4 left/right bits problem he's referring to, or otherwise give me a reference for it? His explanation doesn't seem to make sense: the deterministic algorithm needs to examine 1 + n/4 bits only in the worst case, so you can't compare that to the average-case performance of the randomized algorithm. (The average case for the deterministic algorithm would, it seems, be n/8 + 1.) Furthermore, I don't understand how the randomized method could average out to a size-independent constant number of queries.

Is the randomized algorithm one that uses a quantum computer or something?

Someone please tell me if I understand this post correctly. Here is my attempt to summarize it:

"The two textbook results are results specifically about the worst case. But you only encounter the worst case when the environment can extract the maximum amount of knowledge it can about your 'experts', and exploits this knowledge to worsen your results. For this case (and nearby similar ones) only, randomizing your algorithm helps, but only because it destroys the ability of this 'adversary' to learn about your experts. If you instead average over all cases, the non-random algorithm works better."

Is that the argument?
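Assuming I do have it right, here's a toy simulation of my own (a simple multiplicative-weights learner with two stubborn experts and a made-up learning rate, not the textbook algorithm from the post) showing the difference the adversary makes: when the environment picks each label to be whatever the learner is most likely to get wrong, the deterministic version errs every round, while the randomized version errs only about half the time, roughly matching the best expert.

```python
import math
import random

T = 10_000     # number of rounds (arbitrary)
ETA = 0.05     # learning rate for the multiplicative-weights learner (arbitrary)

def run(randomized):
    # Two stubborn experts: expert 0 always predicts 0, expert 1 always predicts 1.
    expert_mistakes = [0, 0]
    learner_mistakes = 0
    for _ in range(T):
        # Each expert's weight shrinks exponentially with its past mistakes.
        w = [math.exp(-ETA * m) for m in expert_mistakes]
        p1 = w[1] / (w[0] + w[1])            # probability mass on predicting 1

        if randomized:
            prediction = 1 if random.random() < p1 else 0
        else:
            prediction = 1 if p1 > 0.5 else 0    # deterministic weighted majority

        # The adversary knows the algorithm and the current weights (but not the
        # coin flip) and picks the label the learner is most likely to get wrong.
        likely_prediction = 1 if p1 > 0.5 else 0
        label = 1 - likely_prediction

        learner_mistakes += (prediction != label)
        expert_mistakes[0] += (0 != label)
        expert_mistakes[1] += (1 != label)

    return learner_mistakes, min(expert_mistakes)

for randomized in (False, True):
    mistakes, best_expert = run(randomized)
    kind = "randomized" if randomized else "deterministic"
    print(f"{kind:13s} learner: {mistakes} mistakes vs best expert's {best_expert}")
```

The deterministic learner's every-round failure here is exactly the "adversary who knows your algorithm" effect; feed both versions uniformly random labels instead and the gap disappears.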
