
Comment author: cameroncowan 28 August 2014 10:27:05PM 0 points [-]

That is my point: it's not, and therefore can't pass the conscious language test, and I think that's quite the problem.

I think the Vaidman procedure doesn't make consciousness present because the input and output being restricted to a yes or no answer makes it no better than the computers we are using right now. I can ask Siri yes or no questions and get something out, but we can agree that Siri is an extremely simple kind of consciousness embodied in computer code, built at Apple to work as an assistant on iPhones. If the Vaidman brain were conscious, I should be able to ask it a "question" without definable bounds and get any answer between "42" and "I don't know" or "I cannot answer that." So for example, you can ask me all these questions and I can work to create an answer, as I am now doing, or I could simply say "I don't know" or "my head is parrot your post is invalid." The answer would exist as a signpost of my consciousness, although it might be unsatisfying. The Vaidman brain could not work under these conditions because the bounds are set. Any time you have set bounds like that, saying a priori that it is conscious is impossible.

Comment author: gjm 28 August 2014 11:39:18PM 0 points [-]

That is my point [...]

Then I have no idea what you meant by "If you use the language test then yes, and an FHE-encrypted sim with a lost key is still conscious".

the specific input and output being only a yes or no answer makes it no better than the computers we are using right now.

If I ask you a question and somehow constrain you only to answer yes or no, that doesn't stop you being conscious as you decide your answer. There's a simulation of your whole brain in there, and it arrives at its yes/no answer by doing whatever your brain usually does to decide. All that's unusual is the context. (But the context is very unusual.)

Comment author: cameroncowan 28 August 2014 07:41:23PM 0 points [-]

Language Test: The Language Test is shorthand for the Heideggerian idea of language as a proof of consciousness.

Reversibility: I don't think that kind of reversibility is possible while also maintaining consciousness.

Vaidman Brain: Then that invalidates the idea if you remove the tricksiness. I would of course remain in a certain state of consciousness the entire time.

Comment author: gjm 28 August 2014 08:47:01PM 0 points [-]

How is a simulation of a conscious mind, operating behind a "wall" of fully homomorphic encryption for which no one has the key, going to pass this "language test"?

I don't think that kind of reversibility is possible while also maintaining consciousness.

Then you agree with Scott Aaronson on at least one thing.

Then that invalidates the idea if you remove the tricksiness.

What I am trying to understand is what about the Vaidman procedure makes consciousness not be present, in your opinion. What you said before is "based on a specific input and a specific output", but we seem to be agreed that one can have a normal interaction with a normal conscious brain "based on a specific input and a specific output" so that can't be it. So what is the relevant difference, in your opinion?

Comment author: fubarobfusco 27 August 2014 05:06:04PM -2 points [-]

No, it's guilt by explicit participation.

Comment author: gjm 28 August 2014 12:18:05PM 3 points [-]

Perhaps you'd like to unpack that a bit.

Suppose Al is a would-be effective altruist. Al estimates that his charitable giving can "save a life" (i.e., do an amount of good that he judges equivalent to giving one person a reasonably full and happy life instead of dying very prematurely) for about $5k. Al is willing to give away half of what he earns above $40k/year, and everything above $150k/year. He can work for $50k/year as a librarian (giving $5k/year, 1 life/year) or for $250k/year as an investment banker (giving $155k/year, 31 lives/year).
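
For concreteness, here is a minimal sketch of that arithmetic in Python (the function and its reading of the pledge as "keep half of everything between $40k and $150k, and nothing above $150k" are just my framing of the figures above):

    COST_PER_LIFE = 5_000  # Al's estimate of what it costs to "save a life"

    def annual_giving(income):
        """Half of everything above $40k, plus the other half of everything above $150k."""
        giving = 0.5 * max(income - 40_000, 0)
        giving += 0.5 * max(income - 150_000, 0)   # above $150k, Al keeps nothing
        return giving

    for job, salary in [("librarian", 50_000), ("investment banker", 250_000)]:
        g = annual_giving(salary)
        print(f"{job}: ${g:,.0f}/year given, ~{g / COST_PER_LIFE:.0f} lives/year")
    # librarian: $5,000/year given, ~1 lives/year
    # investment banker: $155,000/year given, ~31 lives/year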

The investment bank that's offering Al a job was recently involved in a scandal that effectively defrauded a lot of its customers of a lot of money. Al doesn't know of any similar frauds going on right now, and is fairly sure that the job he's being offered doesn't require him to defraud anyone. But of course it's entirely possible that somewhere in the large i-bank he'd be working for, other equally nasty things are going on.

OK. So, if Al takes the i-banking job then he is "guilty by explicit participation". That sounds bad. Should Al regard being "guilty by explicit participation" as more important than saving 30 extra lives per year? If I am introduced to Al and trying to work out what to think of him, should I think worse of him because he thought it more important to save an extra 30 lives/year than to avoid "guilt by explicit participation"?

Does "guilt by explicit participation" actually harm anyone? How?

Comment author: cameroncowan 27 August 2014 09:45:31PM 0 points [-]

I think the question is how you are going to define consciousness and how you are going to prove that a priori. If you use the language test then yes, and an FHE-encrypted sim with a lost key is still conscious (see comment below).

If I untorture a reversible simulation, you have to decide how far the reversibility goes and whether there is any imprint or trauma left behind. Does the computer feel or experience that reversal as a loss? Can you fully reverse the imprint of torture on consciousness in such a manner that running the simulation backwards has an incomplete or complete effect?

The Vaidman brain isn't conscious I don't think because it's based on a specific input and a specific output. I still think John Searle is off on this despite my opinion.

Comment author: gjm 28 August 2014 12:06:09PM 0 points [-]

If you use the language test

What language test? (And, how would a fully-homomorphically-encrypted sim with a lost key be shown to be conscious by anything that requires communicating with it?)

you have to decide how far the reversibility goes

The sort of reversibility Scott Aaronson is talking about goes all the way: after reversal, the thing in question is in exactly the same state as it was in before. No memory, no trauma, no imprint, nothing.
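
To illustrate what "all the way" means, here is a toy sketch (mine, not Aaronson's; the affine map stands in for any step-by-step reversible process): every step is a bijection on the state, so undoing the steps in reverse order restores the state exactly.

    M = 2 ** 64
    A = 6364136223846793005      # odd, hence invertible modulo 2**64
    A_INV = pow(A, -1, M)        # modular inverse (Python 3.8+)
    B = 1442695040888963407

    def step(s):                 # forward step: an invertible affine map on the state
        return (s * A + B) % M

    def unstep(s):               # the exact inverse of step
        return ((s - B) * A_INV) % M

    state = start = 123456789
    for _ in range(1000):        # run the toy "simulation" forward...
        state = step(state)
    for _ in range(1000):        # ...then run it backward
        state = unstep(state)
    assert state == start        # bit-for-bit identical: no memory, no trauma, no imprint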

The Vaidman brain isn't conscious I don't think because it's based on a specific input and a specific output.

I don't understand that at all. Why does that stop it being conscious? If I ask you a specific yes/no question (in the ordinary fashion, no Vaidman tricksiness) and you answer it, does the fact that you were giving a specific answer to a specific question mean that you weren't conscious while you did it?

Comment author: buural 20 August 2014 05:58:28AM 3 points [-]

Has anyone compiled a list of Chekhov's guns that haven't been fired yet in the story so far? Off the top of my head, I have:

  • Bacon's diary
  • Bellatrix Black
  • Sirius Black (incidentally a candidate for the Cloak and Hat, who possibly knows limitations of the Marauders' map)
  • Traps on the third floor
  • Significance of Dumbledore writing in Lily Potter's potions book
  • Lesath Lestrange
  • Harry's 'shopping list' given to Gred and Forge
  • The missed glint in the Godric's Hollow graveyard
  • Chamber of Secrets / Salazar's snake?
  • Secrets of spell creation (which Quirrell is so keen on keeping away from Harry)

Anything else?

Comment author: gjm 27 August 2014 01:08:44AM *  3 points [-]

Many of these don't exactly count as Chekhov's guns, but they have this in common with your Chekhov's guns: They seem like substantial unresolved things and I will be disappointed if the end of HPMOR leaves a lot of them unresolved:

  • Prophecies about Harry and the end of the world (for some values of "end" and "world").
  • How magic works (e.g., why you have to say "Wingardium Leviosa" to make things float; resolving this may be more or less equivalent to resolving spell creation).
  • Harry's intention of defeating death, perhaps in some fashion that involves the Deathly Hallows.
  • The list of locations discussed by Harry and Quirrell, which may or may not correspond to Horcrux hiding places or something.
  • Harry's "power that the Dark Lord knows not"; probably not either Science or partial transfiguration, but unlikely to be "love" as in Rowling.
  • What's wrong (and how genuinely) with Quirrell.
  • The interaction between Harry's and Quirrell's magic (kinda the same as in Rowling? maybe, or maybe not).
  • Harry's vow to do away with Azkaban and the use of Dementors to guard human beings.
  • Harry's debt to Lucius Malfoy. (Or -- I forget -- did that get cancelled somehow when Hermione got killed?)
  • What, if anything, Harry was doing after Hermione's death; e.g., is he carrying her transfigured corpse around or something?
  • Harry's "father's rock" (just transfiguration practice? actually some powerful magical artefact in disguise? etc.)
  • What really happened in Godric's Hollow when Harry was a baby.
  • Exactly what Quirrell's plans really are. (On some plausible theories, closely related to what happened in Godric's Hollow.)

Comment author: gjm 26 August 2014 11:17:09PM 12 points [-]

I think this sort of post would be improved by adding some information about what you're using this system for. A bit of googling suggests that you're a PhD student; your needs are probably somewhat different from those of (to take a few examples) an undergraduate, or someone working a regular day job, or a consultant/contractor, or someone retired.

In response to Persistent Idealism
Comment author: gjm 26 August 2014 03:15:20PM 5 points [-]

The following two strategies seem (to me) roughly equally plausible but (unfortunately) exactly opposite.

  • Establish a ruthless Schelling fence like "never keep more than $X of income in a year" where X is a rather small number.

  • Accept that you are likely to be unable to maintain a really unspendy lifestyle when surrounded by spendy rich people, and instead decide from the outset on a level of self-indulgence that you are likely to be able to keep up.

If forced to guess, my guess is that the former is probably easier to keep up for longer but may lead to a more drastic failure mode when it fails. But I have no reason to trust my guesses much on this. I'd be interested in others' opinions.

Comment author: Lumifer 26 August 2014 12:19:11AM *  0 points [-]

Um. I was just making a point that "we know P(A & B) <= P(A)" is a true statement coming from math logic, while "if you add details to a story, it becomes less plausible" is a false statement coming from human interaction.

Not sure about your unrolling of the probabilities since P(B|A) = 1 which makes A and B essentially the same. If you want to express the whole thing in math logic terms you need notation as to who knows what.

Comment author: gjm 26 August 2014 01:09:03AM *  0 points [-]

[...] is a true statement coming from math logic, [...] is a false statement coming from human interaction

My reading of polymer's statement is that he wasn't using "plausible" as a psychological term, but as a rough synonym for "probable". (polymer, if you're reading: Was I right?)

P(B|A) = 1 which makes A and B essentially the same

No, P(B|A) is a little less than 1 because Beth might have read the email carelessly, or forgotten bits of it.

[EDITED to add: If whoever downvoted this would care to explain what they found objectionable about it, I'd have more chance of fixing it. It looks obviously innocuous to me even on rereading. Thanks!]

Comment author: Lumifer 25 August 2014 08:07:50PM -1 points [-]

We know P(A & B) < P(A). So if you add details to a story, it becomes less plausible.

Not so. Stories usually are considerably more complicated than can be represented as ANDing of probabilities.

A simple example: Someone tells me that she read my email to Alice, let's say I think that's X% plausible. But then she adds details: she says that the email mentioned a particular cafe. This additional detail makes the plausibility of this story skyrocket (since I do know that the email did mention that cafe).

Comment author: gjm 25 August 2014 11:38:47PM 3 points [-]

So maybe it's worth saying explicitly what's going on here: You're comparing probabilities conditional on different information.

A = "Beth read my email to Alice". B = "Beth knows that my email to Alice mentioned the Dead Badger Cafe". I = "Beth told me she read my email to Alice". J = "Beth told me my email to Alice mentioned the Dead Badger Cafe".

Now P(A&B|I) < P(A|I), and P(A&B|I&J) < P(A|I&J), but P(A&B|I&J) > P(A|I).
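
To make those inequalities concrete, here is a small numerical sketch (the specific probabilities are made up purely for illustration), with everything already conditioned on I:

    from itertools import product

    p_A = 0.5              # P(A|I): how credible Beth's bare claim is
    p_B_given_A = 0.9      # read the email and remembers the cafe detail
    p_B_given_notA = 0.01  # knows the detail without having read the email
    p_J_given_B = 0.9      # volunteers the detail if she knows it
    p_J_given_notB = 0.01  # happens to name the right cafe without knowing it

    def joint(a, b, j):
        """Joint probability of (A=a, B=b, J=j), all conditional on I."""
        pa = p_A if a else 1 - p_A
        pb_true = p_B_given_A if a else p_B_given_notA
        pj_true = p_J_given_B if b else p_J_given_notB
        return pa * (pb_true if b else 1 - pb_true) * (pj_true if j else 1 - pj_true)

    def prob(event, given=lambda a, b, j: True):
        worlds = list(product([True, False], repeat=3))
        num = sum(joint(*w) for w in worlds if event(*w) and given(*w))
        den = sum(joint(*w) for w in worlds if given(*w))
        return num / den

    print(prob(lambda a, b, j: a and b))                        # P(A&B|I)   = 0.45
    print(prob(lambda a, b, j: a))                              # P(A|I)     = 0.50
    print(prob(lambda a, b, j: a and b, lambda a, b, j: j))     # P(A&B|I&J) ~ 0.976
    print(prob(lambda a, b, j: a, lambda a, b, j: j))           # P(A|I&J)   ~ 0.977

With these arbitrary numbers, the conjunction A&B is less probable than A on the same evidence, as it must be; but hearing the correct detail (J) pushes P(A&B|I&J) far above P(A|I).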

So there's no contradiction; there's nothing wrong with applying probabilities; but if you aren't careful you can get confused. (For the avoidance of doubt, I am not claiming that Lumifer is or was confused.)

And, yes, I bet this sort of conditional-probability structure is an important part of why we find stories more plausible when they contain lots of details. Unfortunately, the way our brains apply this heuristic is far from perfect, and in particular it works even when we can't or won't check the details and we know that the person telling us the story knows this. So it leads us astray when we are faced with people who are unscrupulous and good at lying.

Comment author: polymer 25 August 2014 08:35:13PM *  0 points [-]

So plausibility isn't the only dimension for assessing how "good" a belief is.

A or not A is a certainty. I'm trying to formally understand why that statement tells me nothing about anything.

The motivating practical problem came from this question,

"guess the rule governing the following sequence" 11, 31, 41, 61, 71, 101, 131, ...

I cried, "Ah the sequence is increasing!" With pride I looked into the back of the book and found the answer "primes ending in 1".

I'm trying to zero in on what I did wrong.

If I had said instead that the sequence is a list of numbers - that would be stupider, but well in line with my previous logic.

My first attempt at explaining my mistake was to argue that "it's an increasing sequence" was actually less plausible than the real answer, since the real answer was making a much riskier claim. I think one can argue this without contradiction (the rule is either vague or specific, not both).

However, it's often easy to show whether some infinite product is analytic. Making the jump that the product evaluates to sin, in particular, requires more evidence. But in some qualitative sense, establishing that latter goal is much better. My guess was that establishing the equivalence is a more specific claim, making it more valuable.

In my attempt to formalize this, I tried to show this was represented by the probabilities. This is clearly false.

What should I read to understand this problem more formally, or more precisely? Should I look up formal definitions of evidence?

Comment author: gjm 25 August 2014 11:28:43PM 1 point [-]

"S is an increasing sequence" is a less specific hypothesis than "S consists of all prime numbers whose decimal representations end in 1, in increasing order". But "The only constraint governing the generation of S was that it had to be an increasing sequence" is not a less specific hypothesis than "The only constraint governing the generation of S was that it had to consist of primes ending in 1, in increasing order".

If given a question of the form "guess the rule governing such-and-such a sequence", I would expect the intended answer to be one that uniquely identifies the sequence. So I'd give "the numbers are increasing" a much lower probability than "the numbers are the primes ending in 1, in increasing order". (Recall, again, that the propositions whose probabilities we're evaluating aren't the things in quotation marks there; they're "the rule is: the numbers are increasing" and "the rule is: the numbers are the primes (etc.)".)
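
One crude way to see how much less specific the "increasing" rule is (a quick sketch; counting candidate sequences drawn from 1..131, the largest listed number, is an arbitrary choice of mine):

    from math import comb

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    listed = [11, 31, 41, 61, 71, 101, 131]

    # "The numbers are the primes ending in 1, in increasing order":
    # up to 131 this pins down exactly one 7-term sequence, the book's answer.
    print([n for n in range(2, 132) if is_prime(n) and n % 10 == 1] == listed)  # True

    # "The numbers are increasing": any strictly increasing 7-term sequence
    # drawn from 1..131 qualifies.
    print(comb(131, 7))   # 111600996000, i.e. about 10**11 candidate sequences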

Moving back to your question about analytic functions: Yes, more specific hypotheses may be more useful when true, and that might be a good reason to put effort into testing them rather than less specific, less useful hypotheses. But (as I think you appreciate) that doesn't make any difference to the probabilities.

The subject concerned with the interplay between probabilities, preferences and actions is called decision theory; you might or might not find it worth looking up.

I think there's some philosophical literature on questions like "what makes a good explanation?" (where a high probability for the alleged explanation is certainly a virtue, but not the only one); that seems directly relevant to your questions, but I'm afraid I'm not the right person to tell you who to read or what the best books or papers are. I'll hazard a guess that well over 90% of philosophical work on the topic has close to zero (or even negative) value, but I'm making that guess on general principles rather than as a result of surveying the literature in this area. You might start with the Stanford Encyclopedia of Philosophy but I've no more than glanced at that article.
