
Comment author: Thomas 07 August 2017 08:09:16AM *  1 point

Here is a problem to think about.

Comment author: gjm 08 August 2017 02:46:16PM *  2 points

I wrote a little program to attack this question for a smaller number of primes. The results don't encourage me to think that there's a "principled" answer in general. I ran it for (as it happens) the first 1002 primes, up to 7933. The visibility counts fluctuate wildly; it looks as if there may be a tendency for "typical" visibility counts to decrease, but the largest is 256 for the 943rd prime (7451) which not coincidentally has a large gap before it and smallish gaps immediately after.

It seems plausible that the winner with a billion primes might be the 981,765,348th prime, which is preceded by a gap of length 38 (the largest in the first billion primes), but I don't know and I wouldn't bet on it. With 1200 primes you might think the winner would be at position 1183, after the first ever gap of size 14 -- but in fact that gap is immediately followed by another of size 12, and the prime after that does better even though it can't see its next-but-one neighbour, and both are handily beaten by the 943rd prime which sees lots of others above as well as below.

It still feels to me as if any solution to this is going to involve more brute force than insight. Thomas, would you like to tell us whether you know of a solution that doesn't involve a lot of calculation? (Since presumably any solution will at least need to know something about all the first billion primes, maybe I need to be less vague. If the solution looked like "the winning prime is the prime p_i for which p_{i+1}-p_{i-1} is greatest" or something similarly simple, I would not consider it to be mostly brute force.)
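For concreteness, a minimal sketch of this kind of brute-force check (not necessarily the program described above) might look like the following Python, assuming tower n stands at x = n with height p_n, the n-th prime, and that collinear tops block the view:

    import math

    def first_primes(n):
        # Sieve up to an overestimate of the n-th prime:
        # p_n < n * (ln n + ln ln n) holds for n >= 6.
        limit = 15 if n < 6 else int(n * (math.log(n) + math.log(math.log(n)))) + 1
        sieve = bytearray([1]) * (limit + 1)
        sieve[0] = sieve[1] = 0
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
        return [i for i, flag in enumerate(sieve) if flag][:n]

    def visibility_counts(n):
        p = first_primes(n)  # tower k (1-based) sits at x = k with height p[k-1]
        counts = [0] * n
        for i in range(n):
            steepest = float("-inf")  # steepest sightline slope seen so far from tower i
            for j in range(i + 1, n):
                slope = (p[j] - p[i]) / (j - i)
                if slope > steepest:  # top of j is strictly above every intermediate top
                    counts[i] += 1
                    counts[j] += 1  # visibility is symmetric
                steepest = max(steepest, slope)
        return counts

    counts = visibility_counts(1002)
    best = max(range(len(counts)), key=counts.__getitem__)
    print(best + 1, counts[best])  # the comment above reports the 943rd prime, with 256

The inner scan keeps the steepest sightline slope seen so far from tower i, so a later tower is visible exactly when its top rises strictly above that line; this makes the whole count O(n^2) rather than O(n^3). Swapping > for >= would treat collinear tops as visible rather than blocked, which is the ambiguity discussed in the comments below.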

Comment author: Thomas 07 August 2017 01:04:55PM 0 points

Yes, exactly so.

There is another small ambiguity here. Towers 2, 3, and 4 (heights 3, 5, and 7) have collinear tops. But this is the only such case, and it is not important for the solution.

Comment author: gjm 08 August 2017 01:37:58PM 1 point

This is not the only case. For instance, the tops of the towers at heights 11, 17, 23 (the 5th, 7th, and 9th primes) are collinear: both height differences are 6 and both pairs are 2 positions apart, so both slopes are 3.

Even if it turns out not to be relevant to the solution, the question should specify what happens in such cases.

Comment author: Thomas 07 August 2017 08:01:34PM 1 point

Well. If someone (Omega, or someone like him) asked me to choose between 1000 years compressed into the next hour and just 100 uncompressed years lived in real time from now on... I am not sure what I would tell him.

Comment author: gjm 07 August 2017 09:25:17PM 0 points

Again, the existence of other people complicates this (as it does so many other things). If I'm offered this deal right now and choose to have 1000 years of subjective experience compressed into the next hour and then die, then e.g. I never get to see my daughter grow up, I leave my wife a widow and my child an orphan, I never see any of my friends again, etc. It would be nice to have a thousand years of experiences, but it's far from clear that the benefits outweigh the costs.

This doesn't seem to apply in the case of, e.g., a whole civilization choosing whether or not to go digital, and it would apply differently if this sort of decision were commonplace.

Comment author: Sandi 31 July 2017 08:25:23PM *  2 points

What would be the physical/neurological mechanism powering ego depletion, assuming it existed? What stops us from doing hard mental work all the time? Is it even imaginable to, say, study every waking hour for a long period of time, without ever having an evening of YouTube videos to relax? I'm not asking what the psychology of willpower is, but rather whether there's a neurology of willpower.

And beyond ego depletion, there's a very popular model of willpower in which the brain is seen as a battery, drained when hard work is being done and recharged when relaxing. I see this as a deceptive intuition pump: it's easy to imagine, yet it doesn't explain much. What is this energy that's being used up, physically?

Surely it isn't actual physical energy (in terms of calories), since I recall that the brain's energy consumption isn't significantly increased while studying. In addition, physical energy is abundant nowadays because food is plentiful. If a lack of physical energy were the issue, we could just keep going by eating more sugar.

The reason we can't work out for 12 hours straight is physiologically understood. Admittedly, I don't understand it very well myself, but I'm sure an expert could point to muscles being strained, energy stores being depleted, and so on. (Perhaps I would understand the mental analogue better if I understood this.) I'm looking for a similar mechanism in the brain.

To better explain what I'm talking about, and what kind of answer would satisfy me, I'll give you a couple of fake explanations.

  • Hard mental work sees higher electrical activity in the brain. If this is kept up for too long, neurons would get physically damaged due to their sensitivity. To prevent damage, brains evolved a feeling of tiredness when the brain is overused.
  • There is a resource (e.g. dopamine) that is literally depleted during taxing brain operation and regenerated when resting.
  • There could also be a higher-level explanation. The inspiration for this came from an old text by Yudkowsky. (I didn't seriously look at those explanations as an answer to my problem, because of reasons.) I won't cite the source, since I think that post was supposed to be deleted. This excerpt gives a good intuitive picture:

My energy deficit is the result of a false negative-reinforcement signal, not actual damage to the hardware for willpower; I do have the neurological ability to overcome procrastination by expending mental energy. I don't dare. If you've read the history of my life, you know how badly I've been hurt by my parents asking me to push myself. I'm afraid to push myself. It's a lesson that has been etched into me with acid. And yes, I'm good enough at self-alteration to rip out that part of my personality, disable the fear, but I don't dare do that either. The fear exists for a reason. It's the result of a great deal of extremely unpleasant experience. Would you disable your fear of heights so that you could walk off a cliff? I can alter my behavior patterns by expending willpower - once. Put a gun to my head, and tell me to do or die, and I can do. Once.

Let me speculate on the answer.

1) There is no neurological limitation. The hardware could, theoretically, run demanding operations indefinitely. But theories like ego depletion are deceptive memes that spread throughout culture, and so we came to accept a nonexistent limitation. Our belief in the myth is so strong that it might as well be true. The same mechanism as learned helplessness. Needless to say, this could potentially be overcome.

2) There is no neurological limitation, but otherwise useful heuristics stop us from kicking it into higher gear. All of the psychological explanations for akrasia, the kind that are discussed all the time here, come into play. For example, YouTube videos provide a tiny but steady and plentiful stimulus to the reward system, unlike programming, which can have a much higher payout, but one that's inconsistent, unreliable, and coupled with frustration. And so, due to a faulty decision-making procedure, the brain never gets to the point where it works to its fullest potential. The decision-making procedure is otherwise fast and correct enough, thus mostly useful, so simply removing it isn't possible. The same mechanism as cognitive biases. It might be similar to how we cannot do arithmetic effortlessly even though the hardware is probably there.

3) There is an in-built neurological limitation, evolved because it conferred an advantage. Now, pinning down that evolutionary advantage can lead straight back to the original problem. For example, it cannot be about minimizing energy consumption, as discussed above. But other explanations don't run into this problem. Laziness can often lead to more efficient solutions, which is beneficial, so we evolved ego depletion to promote it, and now we're stuck with it. Of course, all the pitfalls customary to evolutionary psychology apply, so I won't go into depth about this.

4) There is a neurological limitation deeply related to the way the brain works. Kind of like how cars can only go so fast, and it's not good for them if you push them to maximum speed all the time. At first glance, the brain propagates charge through neurons all the same, regardless of how tiring an action it's accomplishing. But one could imagine non-trivial complexities in how the brain functions that account for this particular limitation. I dare not speculate further, since I know so little about neurology.

Comment author: gjm 07 August 2017 02:41:53PM 0 points

I have a hazy memory that there's some discussion of exactly this in Keith Stanovich's book "What Intelligence Tests Miss".

Unfortunately, my memory is hazy enough that I don't trust it to say accurately (or even semi-accurately) what he said about it :-). So this is useful only to the following extent: if Sandi, or someone else interested in Sandi's question, has a copy of Stanovich's book or was considering reading it anyway, then it might be worth a look.

Comment author: cousin_it 06 August 2017 10:24:33AM *  1 point

If we want a measure of rationality that's orthogonal to intelligence, maybe we could try testing the ability to overcome motivated reasoning? Set up a conflict between emotion and reason, and see how the person reacts. The marshmallow test is an example of that. Are there other such tests, preferably ones that would work on adults? Which emotions would be easiest?

Comment author: gjm 07 August 2017 01:17:17PM 3 points

It seems like it would be tricky to distinguish "good at reasoning even in the face of emotional distractions" from "not experiencing strong emotions". The former is clearly good; the latter arguably bad.

I'm not sure how confident I am that the paragraph above makes sense. How does one measure the strength of an emotion, if not via its effects on how the person feeling it acts? But it seems like there's a useful distinction to be made here. Perhaps something like this: say that an emotion is strong if, in the absence of deliberate effort, it has large effects on behaviour; then you want to (1) feel emotions that have a large effect on you if you let them but (2) be able to reduce those effects to almost nothing when you choose to. That is, you want a large dynamic range.

Comment author: Thomas 06 August 2017 08:28:59PM 0 points

"Isn't 2*T obviously better?"

We are on the same page here. But a lot of people want to survive as long as possible. Not as much as possible, but as long as possible.

Comment author: gjm 07 August 2017 01:12:58PM 2 points

I would guess that most people who want that simply haven't considered the difference between "how much" and "how long", and if convinced of the possibility of decoupling subjective and objective time would prefer longer-subjective to longer-objective when given the choice.

(Of course the experiences one may want to have will typically include interacting with other people, so "compressed" experience may be useful only if lots of other people are similarly compressing theirs.)

Comment author: Thomas 07 August 2017 08:09:16AM *  1 point

Here is a problem to think about.

Comment author: gjm 07 August 2017 01:08:45PM 2 points

Initial handwaving:

Super-crudely, the n-th prime number is about n log n. If this were exact, then each tower would see all the others, because the function x -> x log x is convex. In practice there are little "random" fluctuations which make a difference. It's possible that the answer to the question depends critically on those random fluctuations and can be found only by brute force...
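To spell out the convexity step: f(x) = x log x has f'(x) = log x + 1 and f''(x) = 1/x > 0 for x > 0, so f is strictly convex, and any chord between two points on its graph passes strictly above the graph in between. Tops lying exactly on such a curve could never be blocked by an intermediate tower.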

Comment author: MrMind 07 August 2017 12:25:51PM 0 points

The intuitive answer seems to me to be: the last one. It's the tallest, so it witnesses exactly one billion towers. Am I misinterpreting something?

Comment author: gjm 07 August 2017 12:43:19PM 0 points

Yes: merely being lower isn't enough to guarantee visibility, because another intermediate tower might be (lower than the tallest but still) tall enough to block it. Like this, if I can get the formatting to work:

#
# #
# #
# #
# # #

You can't see the third tower from the first, because the second is in the way.
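Concretely, reading heights 5, 4, 1 off the diagram with the towers at x = 1, 2, 3: the sightline from the first top (1, 5) to the third (3, 1) crosses x = 2 at height 5 + (1 - 5)/(3 - 1) = 3, and the middle tower's top at height 4 sits above that, so the view is blocked.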

Comment author: tadasdatys 20 July 2017 04:00:33PM 0 points

Indeed, we can always make two things seem indistinguishable if we eliminate all of our abilities to distinguish them. The two bodies in your case could still be distinguished with an fMRI scan or a similar tool. This might not count as "behavior", but then I never wanted "behavior" to literally mean "hand movements".

I think you could remove that by putting the two people into magical impenetrable boxes and then randomly killing one of them, through some Schrödinger's-cat-like process. But I wouldn't find that very interesting either. Yes, you can hide information, but it's not just information about consciousness you're hiding, but also about "ability to do arithmetic" and many other things. Now, if you could remove consciousness without removing anything else, that would be very interesting.

Comment author: gjm 21 July 2017 12:35:46PM 0 points

OK, so what did you mean by "behaviour" if it includes things you can only discover with an fMRI scan? (Possible "extreme" case: you simply mean that consciousness is something that happens in the physical world and supervenes on arrangements of atoms and fields and whatnot; I don't think many here would disagree with that.)

If the criteria for consciousness include things you can't observe "normally" but need fMRI scans and the like for (for the avoidance of doubt, I agree that they do), then you no longer have any excuse for answering "yes" to that last question.

My point wasn't about hiding information; it was that much of the relevant information is already hidden, which you seemed to be denying when you said consciousness is just a matter of "behaviours". It now seems like you weren't intending to deny that at all; but in that case I no longer understand how what you're saying is relevant to the OP.

Comment author: tadasdatys 17 July 2017 08:24:26AM 0 points

The three examples deal with different kinds of things.

Knowing X mostly means believing in X, or having a memory of X. Ideally beliefs would influence actions, but even if they don't, they should be physically stored somehow. In that sense they are the most real of the three.

Having a mental skill to do X means that you can do X with less time and effort than other people. With honest subjects, you could try measuring these somehow, but, obviously, you may find some subject who claims to have the skill perform slower than another who claims not to. Ultimately, "I have a skill to do X" means "I believe I'm better than most at X", and while it is a belief as good as the previous one, it's a little less direct.

Finally, being conscious doesn't mean anything at all. It has no relationship to reality. At best, "X is conscious" means "X has behaviors in some sense similar to a human's". If a computationalist answers "no" to the first two questions and "yes" to the last one, they're not being inconsistent; they have merely accepted that the usual concept of consciousness is entirely bullshit and replaced it with something more real. That's, by the way, similar to what compatibilists do with free will.

Comment author: gjm 20 July 2017 11:21:52AM 0 points

I agree with much of what you say, but I am not sure it implies for cousin_it's position what you think it does.

I'm sure it's true that, as you put it elsewhere in the thread, consciousness is "extrapolated": calling something conscious means that it resembles an awake normal human and not a rock, a human in a coma, etc., and there is no fact of the matter as to exactly how this should be extrapolated to (say) aliens or intelligent robots.

But this falls short of saying that at best, calling something conscious equals saying something about its externally observable behaviours.

For instance: suppose technology advances enough that we can (1) make exact duplicates of human beings, which (initially) exactly match the memories, personalities, capabilities, etc., of their originals, and (2) reversibly cause total paralysis in a human being, so that their mind no longer has any ability to produce externally observable effects, and (3) destroy a human being's capacity for conscious thought while leaving autonomic functions like breathing normal.

(We can do #2 and #3 pretty well already, apart from reversibility. I want reversibility so that we can confirm later that the person was conscious while paralysed.)

So now we take a normal human being (clearly conscious). We duplicate them (#1). We paralyse them both (#2). Then we scramble the brain of one of them (#3). Then we observe them as much as you like.

I claim these two entities have exactly the same observable behaviours, past and present, but that we can reasonably consider one of them conscious and the other not. We can verify that one of them was conscious by reversing the paralysis. Verifying that the other wasn't depends on our confidence that mashing up most of their cerebral cortex (or whatever horrible thing we did in #3) really destroys consciousness, but this seems like a thing we could reasonably be quite confident of.

You might say that our judgement that one of these (ex-?) human beings is conscious is dependent on our ability to reverse the paralysis and check. But, given enough evidence that the induction of paralysis is harmlessly reversible, I claim we could be very confident even if we knew that after (say) a week both would be killed without the paralysis ever being reversed.
