toto20

> One piece of evidence for the second is to notice how nations with small populations tend to cluster near the top of lists of countries by per-capita GDP.

1) So do nations with very high taxes, e.g. the Nordic countries (or most of Western Europe, for that matter).

One of the outliers (Ireland) has probably been knocked down a few places recently, as a result of a worldwide crisis that might well stem from excessive deregulation.

2) In very small countries, one single insanely rich individual will make a lot of difference to average wealth, even if the rest of the population is very poor. I think Brunei illustrates the point. So I'm not sure the supposedly high rank of small countries is indicative of anything (median GDP would be more useful).
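The distortion is easy to see with a toy calculation (the population and income figures below are hypothetical, chosen only to illustrate the mean-versus-median point):

```python
import statistics

# Hypothetical micro-state: nine residents with modest incomes,
# plus one extremely rich individual.
incomes = [10_000] * 9 + [1_000_000_000]

mean_income = statistics.mean(incomes)      # dragged up by the single outlier
median_income = statistics.median(incomes)  # unmoved by the outlier

print(mean_income)    # about 100 million
print(median_income)  # 10,000
```

One resident moves the mean four orders of magnitude above what the typical resident earns, while the median still describes the typical resident; that is the sense in which median GDP would be more informative here.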

3) There are many small-population countries at the bottom of the chart too.

Upvoted.

toto00

This seems to be the premise of Isaac Asimov's "Nightfall".

toto00

OK. I assume the usual (Omega and Upsilon are both reliable and sincere, I can reliably distinguish one from the other, etc.)

Then I can't see how the game doesn't reduce to standard Newcomb, modulo a simple probability calculation, mostly based on "when I encounter one of them, what's my probability of meeting the other during my lifetime?" (plus various "actuarial" calculations).

If I have no information about the probability of encountering either, then my decision may be incorrect - but there's nothing paradoxical or surprising about this, it's just a normal, "boring" example of an incomplete information problem.
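For the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; these figures and the accuracy parameter are conventional assumptions, not given in the thread), the "simple probability calculation" looks roughly like this:

```python
def newcomb_ev(accuracy, big=1_000_000, small=1_000):
    """Expected value of each choice against a predictor who is
    correct with probability `accuracy` (illustrative payoffs)."""
    one_box = accuracy * big                 # opaque box filled iff one-boxing was predicted
    two_box = (1 - accuracy) * big + small   # opaque box filled only on a misprediction
    return one_box, two_box

one, two = newcomb_ev(0.99)
print(one, two)  # one-boxing wins by a wide margin at high accuracy
```

At accuracy 0.5 the ordering flips and two-boxing dominates, which is just the "normal, boring" incomplete-information behaviour described above.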

> you need to have the correct prior/predisposition over all possible predictors of your actions, before you actually meet any of them.

I can't see why that is - again, assuming that the full problem is explained to you on encountering either Upsilon or Omega, that both are truthful, etc. Why can I not perform the appropriate calculations and make an expectation-maximising decision even after Upsilon-Omega has left? Surely Omega-Upsilon can predict that I'm going to do just that and act accordingly, right?

toto10

I have problems with the "Giant look-up table" post.

> "The problem isn't the levers," replies the functionalist, "the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling... Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it's possible to program a conscious being in Haskell."

If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human's behaviour depends not just on the present state of the environment, but also on previous states; I don't see how you can successfully emulate a human without that. So the GLUT's entries would be indexed by entire input histories: one entry for each possible sequence of input states over all previous time instants, with the GLUT assigning an action to each such sequence.

Note that "creation of beliefs" (including about beliefs) is just a special case of memory. It's all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn't have this ability, it can't emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
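The point that the table must be keyed by input history rather than by the current input alone can be sketched in a few lines (the alphabet and responses below are a toy illustration, nothing more):

```python
# A toy GLUT: keys are entire input histories (tuples), not single inputs,
# so an input at time t1 restricts which entries can be looked up at t2 > t1.
glut = {
    ("hello",):         "hi",
    ("hello", "hello"): "you already said that",
}

history = []

def respond(inp):
    """Append the input to the history, then look up the whole history."""
    history.append(inp)
    return glut.get(tuple(history), "...")

print(respond("hello"))  # "hi"
print(respond("hello"))  # "you already said that" -- same input, different entry
```

The same input produces different outputs purely because the lookup key includes the past, which is the crude form of "memory" (and hence of belief formation) described above.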

So I don't see how the non-consciousness of the GLUT is established by this argument.

> But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, "Where did the improbability come from?"

> The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...) In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.

But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.

toto70

> When it comes to proving such obvious things, one will invariably fail to convince.

Montesquieu, "The Spirit of the Laws", book XXV, chapter XIII. (Link to the book, Original French)

toto90

I don't know, to me he's just stating that the brain is the seat of sensation and reasoning.

Aristotle thought it was the heart. Both had arguments for their respective positions. Aristotle studied animals a lot and over-interpreted the evidence he had accumulated: to the naked eye the brain appears bloodless and unconnected to the organs; it is also insensitive, and can sustain some non-fatal damage; the heart, by contrast, reacts to emotions, is obviously connected to the entire body (through the circulatory system), and any damage to it leads to immediate death.

Also, in embryos the brain is typically formed much later than the heart. This is important if, like Aristotle, you spent too much time thinking about "the soul" (that mysterious folk concept which was at the same time the source of life and of sensation) and thus believed that the source of "life" was also necessarily the source of sensation, since both were functions of "the soul".

Hippocrates studied people more than animals, did not theorize too much about "the soul", and got it right. But it would be a bit harsh to cast that as a triumph of rationality against superstition.

toto30

Yes, yes he did, time and again (substituting "copy" for "zombie", as MP points out below). That's the Star Trek paradox.

Imagine that there is a glitch in the system, so that the "original" Kirk fails to dematerialise when the "new" one appears, so we find ourselves with two copies of Kirk. Now Scotty says "Sowwy Captain" and zaps the "old" Kirk into a cloud of atoms. How in the world does that not constitute murder?

That was not the paradox. The "paradox" is this: the only difference between "innocuous" teleportation, and the murder scenario described above, is a small time-shift of a few seconds. If Kirk1 disappears a few seconds before Kirk2 appears, we have no problem with that. We even show it repeatedly in programmes aimed at children. But when Kirk1 disappears a few seconds after Kirk2 appears, all of a sudden we see the act for what it is, namely murder.

How is it that a mere shift of a few seconds causes such a great difference in our perception? How is it that we can immediately see the murder in the second case, but that the first case seems so innocent to us? This stark contrast between our intuitive perceptions of the two cases, despite their apparent underlying similarity, constitutes the paradox.

And yes, it seems likely that the above also holds when a single person is made absolutely unconscious (flat EEG) and then awakened. Intuitively, we feel that the same person, the same identity, has persisted throughout this interruption; but when we think of the Star Trek paradox, and if we assume (as good materialists) that consciousness is the outcome of physical brain activity, we realise that this situation is not very different from that of Kirk1 and Kirk2. More generally, it illustrates the problems associated with assuming that you "are" the same person that you were just one minute ago (for some concepts of "are").

I was thinking of writing a post about this, but apparently all of the above seems to be ridiculously obvious to most LWers, so I guess there's not much of a point. I still find it pretty fascinating. What can I say, I'm easily impressed.

toto200

> People who as their first reaction start pulling excuses why this must be wrong out of their asses get big negative points on this rationality test.

Well, if people are absolutely, definitely rejecting the possibility that this might ever be true, without looking at the data, then they are indeed probably professing a tribal belief.

However, if they are merely describing reasons why they find this result "unlikely", then I'm not sure there's anything wrong with that. They're simply expressing that their prior for "Communist economies did no worse than capitalist economies" is, all other things being equal, lower than 0.5.

There are several non-obviously-wrong reasons why one could reasonably put a low prior on this belief. The most obvious is the fact that when the Berlin Wall fell, economic migration went from East to West, not the other way round (East and West Germany being the most dramatic example).

Of course, this should not preclude a look at the hard data. Reality is full of surprises, and casual musings often miss important points. So again, saying "this just can't be so" and refusing to look at the data (which I presume is what you had in mind) is indeed probably tribal. Saying "hmmm, I'd be surprised if it were so" seems quite reasonable to me. Maybe I'm just tribalised beyond hope.

toto200

> Behind a paywall

But it is freely available from the website of one of the authors.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
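For reference, the switching advantage that the pigeons eventually learn is easy to verify by simulation (a standard Monty Hall setup, sketched here; the details of the paper's procedure differ):

```python
import random

def play(switch):
    """One Monty Hall round with three doors; returns True on a win."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d not in (pick, prize))
    if switch:
        # Switching means taking the one remaining closed door.
        pick = next(d for d in range(3) if d not in (pick, opened))
    return pick == prize

n = 100_000
switch_rate = sum(play(True) for _ in range(n)) / n
stay_rate = sum(play(False) for _ in range(n)) / n
print(switch_rate, stay_rate)  # roughly 0.667 vs 0.333
```

Switching wins whenever the initial pick was wrong (probability 2/3), which is exactly the policy the pigeons converge on faster than the humans do.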

toto10

Frequentists (or just about anybody involved in experimental work) report p-values, which are their main quantitative measure of evidence.
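As a concrete example of the kind of p-value being reported: the one-sided probability of a result at least as extreme as 60 heads in 100 fair-coin flips (the numbers here are chosen purely for illustration):

```python
from math import comb

def binomial_p_value(heads, flips):
    """One-sided p-value: P(X >= heads) under a fair coin."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

p = binomial_p_value(60, 100)
print(p)  # roughly 0.028 -- "significant" at the conventional 0.05 level
```

The reported p-value is the probability, under the null hypothesis, of data at least this extreme; it is this number that gets used as the quantitative measure of evidence in practice.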
