Comment author: mikerpiker 03 August 2010 02:55:55AM *  6 points [-]

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

-H.P. Lovecraft

Comment author: toto 06 August 2010 08:06:27AM 0 points [-]

This seems to be the premise of Isaac Asimov's "Nightfall".

Comment author: cousin_it 22 July 2010 06:58:56AM *  6 points [-]

Sometime ago I figured out a refutation of this kind of reasoning in Counterfactual Mugging, and it seems to apply in Newcomb's Problem too. It goes as follows:

Imagine another god, Upsilon, that offers you a similar two-box setup - except to get the $2M in the box B, you must be a one-boxer with regard to Upsilon and a two-boxer with regard to Omega. (Upsilon predicts your counterfactual behavior if you'd met Omega instead.) Now you must choose your dispositions wisely because you can't win money from both gods. The right disposition depends on your priors for encountering Omega or Upsilon, which is a "bead jar guess" because both gods are very improbable. In other words, to win in such problems, you can't just look at each problem individually as it arises - you need to have the correct prior/predisposition over all possible predictors of your actions, before you actually meet any of them. Obtaining such a prior is difficult, so I don't really know what I'm predisposed to do in Newcomb's Problem if I'm faced with it someday.
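To make the prior-dependence concrete, here is a minimal sketch of the disposition choice under assumed payoffs: the $2M Upsilon prize is from the comment above, while the $1M Omega prize and $1,000 small box are my assumptions borrowed from the standard Newcomb setup.

```python
def expected_value(p_omega, p_upsilon, one_box_omega):
    """Expected winnings of a fixed disposition, given priors on meeting each god.

    one_box_omega=True  -> disposed to one-box toward Omega
                           (Upsilon's condition fails, so its big box is empty).
    one_box_omega=False -> disposed to two-box toward Omega and one-box toward Upsilon.
    Assumed payoffs: $1,000 small box, $1M Omega prize, $2M Upsilon prize.
    """
    if one_box_omega:
        return p_omega * 1_000_000 + p_upsilon * 1_000
    return p_omega * 1_000 + p_upsilon * 2_000_000

# With Omega judged three times as likely as Upsilon, the one-box-toward-Omega
# disposition wins; the crossover sits near p_upsilon = p_omega / 2, because
# Upsilon's prize is twice as large.
print(expected_value(0.03, 0.01, True))
print(expected_value(0.03, 0.01, False))
```

The point of the sketch is that the better disposition flips entirely with the ratio of the two priors, which is exactly the "bead jar guess" problem: nothing in either encounter tells you what that ratio should be.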

Comment author: toto 22 July 2010 09:16:49AM 0 points [-]

OK. I assume the usual (Omega and Upsilon are both reliable and sincere, I can reliably distinguish one from the other, etc.)

Then I can't see how the game doesn't reduce to standard Newcomb, modulo a simple probability calculation, mostly based on "when I encounter one of them, what's my probability of meeting the other during my lifetime?" (plus various "actuarial" calculations).

If I have no information about the probability of encountering either, then my decision may be incorrect - but there's nothing paradoxical or surprising about this, it's just a normal, "boring" example of an incomplete information problem.

you need to have the correct prior/predisposition over all possible predictors of your actions, before you actually meet any of them.

I can't see why that is - again, assuming that the full problem is explained to you on encountering either Upsilon or Omega, both are truthful, etc. Why can I not perform the appropriate calculations and make an expectation-maximising decision even after Upsilon-Omega has left? Surely Omega-Upsilon can predict that I'm going to do just that and act accordingly, right?

Comment author: Nick_Tarleton 10 June 2010 07:17:05AM 4 points [-]
Comment author: toto 10 June 2010 03:18:31PM *  1 point [-]

I have problems with the "Giant look-up table" post.

"The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour is dependent not just on the present state of the environment, but also on previous states. I don't see how you can successfully emulate a human without that. So the GLUT's entries would be in the form of sequences of input states over all previous time instants. To each of these possible sequences, the GLUT would assign a given action.

Note that "creation of beliefs" (including about beliefs) is just a special case of memory. It's all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn't have this ability, it can't emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
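As a toy illustration of "memory as history-indexed lookup": a table keyed on the whole input sequence, with no state written anywhere. The dialogue entries below are my own made-up examples, not anything from the original post.

```python
# Toy GLUT keyed on entire input histories rather than single inputs.
# Nothing is ever written to memory: "remembering" exists only because
# the entry for one history can differ from the entry for another
# history that ends in the same final input.
glut = {
    ("hello",): "hi there",
    ("hello", "what did I just say?"): "you said 'hello'",
    ("bye",): "goodbye",
    ("bye", "what did I just say?"): "you said 'bye'",
}

def respond(history):
    """Look up the action for the full sequence of inputs so far."""
    return glut.get(tuple(history), "(no entry)")

print(respond(["hello", "what did I just say?"]))  # you said 'hello'
print(respond(["bye", "what did I just say?"]))    # you said 'bye'
```

The same final input ("what did I just say?") maps to different outputs depending on the earlier input, which is the sense in which input at time t1 restricts the entries that can be looked up at t2.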

So I don't see how the non-consciousness of the GLUT is established by this argument.

But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"

The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...) In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.

Comment author: toto 03 June 2010 09:29:12AM *  5 points [-]

When it comes to proving such obvious things, one will invariably fail to convince.

Montesquieu, "The Spirit of the Laws", book XXV, chapter XIII. (Link to the book, Original French)

Comment author: djcb 01 May 2010 01:56:11PM *  11 points [-]

Men ought to know that from nothing else but the brain comes joy, delights, laughter, and sports, and sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good, what are sweet, and what are unsavory. ... And by the same organ we become mad and delirious, and fears and terrors assail us. ... All these things we endure from the brain. ...In these ways I am of the opinion that the brain exercises the greatest power in the man.

-- Hippocrates, On the sacred disease (ca. 4th century BCE).

[ In this and other writings, Hippocrates shows such an incredibly early sense for rationality and against superstition, one only rarely seen in the next 2000 years -- and in addition, he was not just an armchair philosopher, he actually put these things into practice. So, hats off to Hippocrates, even if his medicine was not without faults, of course... ]

Comment author: toto 01 May 2010 05:03:04PM *  6 points [-]

I don't know, to me he's just stating that the brain is the seat of sensation and reasoning.

Aristotle thought it was the heart. Both had arguments for their respective positions. Aristotle studied animals a lot and over-interpreted the evidence he had accumulated: to the naked eye the brain appears bloodless and unconnected to the organs; it is also insensitive, and can sustain some non-fatal damage; the heart, by contrast, reacts to emotions, is obviously connected to the entire body (through the circulatory system), and any damage to it leads to immediate death.

Also, in embryos the brain is typically formed much later than the heart. This is important if, like Aristotle, you spent too much time thinking about "the soul" (that mysterious folk concept which was at the same time the source of life and of sensation) and thus believed that the source of "life" was also necessarily the source of sensation, since both were functions of "the soul".

Hippocrates studied people more than animals, did not theorize too much about "the soul", and got it right. But it would be a bit harsh to cast that as a triumph of rationality against superstition.

Comment author: wedrifid 29 March 2010 07:30:36AM *  6 points [-]

I believe in continuity of substance, not similarity of pattern, as the basis of identity.

So Scotty killed Kirk and then created a zombie-Kirk back on the Enterprise? It would seem that the whole of Star Trek is a fantasy story about a space-faring necromancer who repeatedly kills his crew, then uses his evil contraption to reanimate new beings out of base matter, while rampaging through space seeking new and exotic beings to join his never-ending orgy of death.

In response to comment by wedrifid on The I-Less Eye
Comment author: toto 29 March 2010 09:05:13AM 3 points [-]

Yes, yes he did, time and again (substituting "copy" for "zombie", as MP points out below). That's the Star Trek paradox.

Imagine that there is a glitch in the system, so that the "original" Kirk fails to dematerialise when the "new" one appears, so we find ourselves with two copies of Kirk. Now Scotty says "Sowwy Captain" and zaps the "old" Kirk into a cloud of atoms. How in the world does that not constitute murder?

That in itself was not the paradox. The "paradox" is this: the only difference between "innocuous" teleportation, and the murder scenario described above, is a small time-shift of a few seconds. If Kirk1 disappears a few seconds before Kirk2 appears, we have no problem with that. We even show it repeatedly in programmes aimed at children. But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder.

How is it that a mere shift of a few seconds causes such a great difference in our perception? How is it that we can immediately see the murder in the second case, but that the first case seems so innocent to us? This stark contrast between our intuitive perceptions of the two cases, despite their apparent underlying similarity, constitutes the paradox.

And yes, it seems likely that the above also holds when a single person is made absolutely unconscious (flat EEG) and then awakened. Intuitively, we feel that the same person, the same identity, has persisted throughout this interruption; but when we think of the Star Trek paradox, and if we assume (as good materialists) that consciousness is the outcome of physical brain activity, we realise that this situation is not very different from that of Kirk1 and Kirk2. More generally, it illustrates the problems associated with assuming that you "are" the same person that you were just one minute ago (for some concepts of "are").

I was thinking of writing a post about this, but apparently all of the above seems to be ridiculously obvious to most LWers, so I guess there's not much of a point. I still find it pretty fascinating. What can I say, I'm easily impressed.

Comment author: taw 15 March 2010 12:53:17AM 5 points [-]

I'm increasingly inclined to use reactions to data that Communist economies did no worse on average than Capitalist economies as a new litmus test.

People who as their first reaction start pulling excuses why this must be wrong out of their asses get big negative points on this rationality test.

I don't need to explain why this is not mainstream. It is also extremely unlikely to be significantly wrong.

Comment author: toto 15 March 2010 02:20:15PM 17 points [-]

People who as their first reaction start pulling excuses why this must be wrong out of their asses get big negative points on this rationality test.

Well, if people are absolutely, definitely rejecting the possibility that this might ever be true, without looking at the data, then they are indeed probably professing a tribal belief.

However, if they are merely describing reasons why they find this result "unlikely", then I'm not sure there's anything wrong with that. They're simply expressing that their prior for "Communist economies did no worse than capitalist economies" is, all other things being equal, lower than .5.

There are several non-obviously-wrong reasons why one could reasonably put a low prior on this belief. The most obvious is the fact that when the wall fell down, economic migration went from East to West, not the other way round (East-West Germany being the most dramatic example).

Of course, this should not preclude a look at the hard data. Reality is full of surprises, and casual musings often miss important points. So again, saying "this just can't be so" and refusing to look at the data (which I presume is what you had in mind) is indeed probably tribal. Saying "hmmm, I'd be surprised if it were so" seems quite reasonable to me. Maybe I'm just tribalised beyond hope.

Comment author: whpearson 01 March 2010 12:24:10PM *  8 points [-]

Pigeons can solve Monty hall (MHD)?

A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.

Behind a paywall

Comment author: toto 01 March 2010 02:24:10PM *  14 points [-]

Behind a paywall

But freely available from one of the authors' website.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
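For reference, a quick Monte Carlo check of why "learning to switch" is the optimal strategy in the standard Monty Hall setup (a generic simulation, not code from the paper):

```python
import random

def monty_trial(switch):
    """One Monty Hall round; returns 1 if the player wins the prize."""
    doors = [0, 0, 0]
    doors[random.randrange(3)] = 1          # one door hides the prize
    choice = random.randrange(3)
    # Host opens a losing door that isn't the player's choice.
    opened = next(d for d in range(3) if d != choice and doors[d] == 0)
    if switch:
        choice = next(d for d in range(3) if d not in (choice, opened))
    return doors[choice]

random.seed(0)
n = 100_000
switch_rate = sum(monty_trial(True) for _ in range(n)) / n   # ~ 2/3
stay_rate = sum(monty_trial(False) for _ in range(n)) / n    # ~ 1/3
print(switch_rate, stay_rate)
```

Switching wins about two thirds of the time, which is the reinforcement asymmetry the pigeons converge to.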

Comment author: JGWeissman 27 February 2010 05:54:10AM 1 point [-]

jimrandomh claimed that frequentists don't report amounts of evidence. So you object that measuring in decibels is not how they don't report it? If they don't report amounts of evidence, then of course they don't report them in the precise way in the example.

Comment author: toto 27 February 2010 08:15:24PM *  1 point [-]

Frequentists (or just about anybody involved in experimental work) report p-values, which are their main quantitative measure of evidence.

Comment author: komponisto 21 February 2010 09:13:44AM *  27 points [-]

This is going to sound silly, but...could someone explain frequentist statistics to me?

Here's my current understanding of how it works:

We've got some hypothesis H, whose truth or falsity we'd like to determine. So we go out and gather some evidence E. But now, instead of trying to quantify our degree of belief in H (given E) as a conditional probability estimate using Bayes' Theorem (which would require us to know P(H), P(E|H), and P(E|~H)), what we do is simply calculate P(E|~H) (techniques for doing this being of course the principal concern of statistics texts), and then place H into one of two bins depending on whether P(E|~H) is below some threshold number ("p-value") that somebody decided was "low": if P(E|~H) is below that number, we put H into the "accepted" bin (or, as they say, we reject the null hypothesis ~H); otherwise, we put H into the "not accepted" bin (that is, we fail to reject ~H).

Now, if that is a fair summary, then this big controversy between frequentists and Bayesians must mean that there is a sizable collection of people who think that the above procedure is a better way of obtaining knowledge than performing Bayesian updates. But for the life of me, I can't see how anyone could possibly think that. I mean, not only is the "p-value" threshold arbitrary, not only are we depriving ourselves of valuable information by "accepting" or "not accepting" a hypothesis rather than quantifying our certainty level, but...what about P(E|H)?? (Not to mention P(H).) To me, it seems blatantly obvious that an epistemology (and that's what it is) like the above is a recipe for disaster -- specifically in the form of accumulated errors over time.

I know that statisticians are intelligent people, so this has to be a strawman or something. Or at least, there must be some decent-sounding arguments that I haven't heard -- and surely there are some frequentist contrarians reading this who know what those arguments are. So, in the spirit of Alicorn's "Deontology for Consequentialists" or ciphergoth's survey of the anti-cryonics position, I'd like to suggest a "Frequentism for Bayesians" post -- or perhaps just a "Frequentism for Dummies", if that's what I'm being here.

Comment author: toto 22 February 2010 11:26:23AM *  2 points [-]

(which would require us to know P(H), P(E|H), and P(E|~H))

Is that not precisely the problem? Often, the H you are interested in is so vague ("there is some kind of effect in a certain direction") that it is very difficult to estimate P(E|H) - or even to define it.

OTOH, P(E|~H) is often very easy to compute from first principles, or to obtain through experiments (since conditions where "the effect" is not present are usually the most common).

Example: I have a coin. I want to know if it is "true" or "biased". I flip it 100 times, and get 78 tails. Now how do I estimate the probability of obtaining this many tails, knowing that the coin is "biased"? How do I even express that analytically? By contrast, it is very easy to compute the probability of this sequence (or any other) with a "non-biased" coin.

So there you have it. The whole concept of "null hypotheses" is not a logical axiom, it simply derives from real-world observation: in the real world, for most of the H we are interested in, estimating P(E|~H) is easy, and estimating P(E|H) is either hard or impossible.
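The fair-coin computation from the coin example really is easy from first principles; for instance, the one-tailed probability of seeing 78 or more tails in 100 flips of a fair coin:

```python
from math import comb

# P(at least 78 tails in 100 flips | fair coin): the one-tailed
# p-value for the coin example, summed from the binomial distribution.
p_value = sum(comb(100, k) for k in range(78, 101)) / 2**100
print(p_value)  # far below any conventional threshold such as 0.05
```

No model of the "biased" alternative is needed for this number, which is exactly why the null-hypothesis machinery is organised around P(E|~H).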

what about P(E|H)?? (Not to mention P(H).)

P(H) is silently set to .5. If you know P(E|~H), this makes P(E|H) unnecessary to compute the real quantity of interest, P(H|E) / P(~H|E). I think.
