Comment author: kybernetikos 21 July 2011 05:10:39AM *  10 points [-]

eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.

Just because something exists only at high levels of abstraction doesn't mean it's not real or explanatory. Surely the important question is whether humans genuinely have preferences that explain their behaviour (or at least whether a preference system can sometimes explain it, even if the behaviour is really produced by the interaction of numerous systems), rather than how those preferences are encoded.

The information in a JPEG file that indicates a particular pixel should be red cannot be analysed down to a single bit that does nothing else, but that doesn't mean there isn't a sense in which the red pixel genuinely exists. Preferences could exist and be encoded holographically in the brain. Whether you can find a specific neuron for them is completely irrelevant to their reality.

Comment author: Eliezer_Yudkowsky 27 May 2011 10:10:02PM 3 points [-]

Exercise: Say "Ducks" whenever someone sneezes.

(No, this was not my idea. Jasen again.)

Comment author: kybernetikos 29 May 2011 01:43:58PM 1 point [-]

I sneeze quite often. When someone says 'bless you', my usual response is 'and may you also be blessed'. I've heard a number of people who had apparently never wondered before say 'why do we say that?' after receiving that response.

Comment author: RichardKennaway 19 January 2011 05:31:40PM 3 points [-]

A billion and a half words. Can you mentally grasp how big that is?

1.5N GB, where N is the average bytes per English word. Multiply by, say, 5 for the HTML overhead and it would still all fit onto a 64GB memory stick uncompressed, though I'd want something faster for actually accessing it.

It would actually be larger, as you'd need all the images as well, and you'd want the ancillary things like Wikisource and Wiktionary (I don't know whether those are independent projects or included in your figure), but even so, it sounds like the whole thing would easily fit onto a typical hard disc.
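The back-of-envelope arithmetic above can be checked in a few lines (a sketch only; the bytes-per-word and HTML-overhead figures are assumptions taken from the comment, not measurements):

```python
# Rough size estimate for 1.5 billion words of English text.
words = 1.5e9
bytes_per_word = 6      # assumed average, word plus delimiter
html_overhead = 5       # assumed markup multiplier from the comment

raw_gb = words * bytes_per_word / 1e9
with_html_gb = raw_gb * html_overhead

print(f"plain text: ~{raw_gb:.0f} GB")        # ~9 GB
print(f"with HTML:  ~{with_html_gb:.0f} GB")  # ~45 GB, under 64 GB
```

With these assumed figures the whole text, HTML included, still fits on the 64 GB memory stick mentioned above.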

Comment author: kybernetikos 19 January 2011 05:39:47PM *  1 point [-]

I have all of the English Wikipedia available for offline searching on my phone. It's big, sure, but it doesn't fill the memory card by any means (and this is just the default one that came with the phone).

For offline access on a windows computer, WikiTaxi is a reasonable solution.

I'd recommend that everyone who can should carry around an offline version of Wikipedia. I consider it part of my disaster preparedness, not to mention the fun of learning new things by hitting the 'random article' button.

Comment author: shokwave 18 January 2011 12:44:40PM *  0 points [-]

Because those are the class of problems this post discusses.

From the top of the post:

A parole board considers the release of a prisoner: Will he be violent again? A hiring officer considers a job candidate: Will she be a valuable asset to the company? A young couple considers marriage: Will they have a happy marriage?

The cached wisdom for making such high-stakes predictions is to have experts gather as much evidence as possible, weigh this evidence, and make a judgment. But 60 years of research has shown that in hundreds of cases, a simple formula called a statistical prediction rule (SPR) makes better predictions than leading experts do.

Comment author: kybernetikos 19 January 2011 05:32:18PM *  2 points [-]

A parole board considers the release of a prisoner: Will he be violent again?

I think this is the kind of question that Miller is talking about. Just because a system is correct more often doesn't necessarily mean it's better.

For example, if the human experts release more people who go on to commit relatively minor violent offences, while the SPRs do this less often but are more likely to release prisoners who go on to commit murder, then there would be legitimate discussion over whether the SPR is actually better.

I think this is exactly what he is talking about when he says

Where AI's compete well generally they beat trained humans fairly marginally on easy (or even most) cases, and then fail miserably at border or novel cases. This can make it dangerous to use them if the extreme failures are dangerous.

I don't know whether there is evidence that this is a real effect, but to address it, what you really need to measure is the total utility of outcomes rather than accuracy.
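To illustrate measuring utility rather than accuracy, here is a toy sketch (every number is invented for the example): the SPR makes fewer errors overall, yet scores worse once each outcome is weighted by its cost.

```python
# Outcome costs (assumed utilities; negative = bad).
COST = {"minor_offence": -1, "murder": -100}

# Hypothetical error counts out of 100 parole decisions per policy.
policies = {
    "human experts": {"minor_offence": 12, "murder": 1},
    "SPR":           {"minor_offence": 4,  "murder": 3},
}

for name, errors in policies.items():
    n_errors = sum(errors.values())
    accuracy = (100 - n_errors) / 100
    utility = sum(COST[kind] * n for kind, n in errors.items())
    print(f"{name}: accuracy {accuracy:.0%}, total utility {utility}")
```

With these invented figures the SPR is more accurate (93% vs 87%) but has much lower total utility (-304 vs -112), because its rarer failures are catastrophic ones.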

Comment author: Desrtopa 19 December 2010 07:06:17PM 3 points [-]

To answer my own question, I personally figured out that the whole Santa story was a lie around the age of six or so, but I continued to believe in belief, that it was right or appropriate that young children be encouraged to believe in Santa Claus. I never confronted my parents about it, but we held an "I know you know I know" understanding, and I continued to prop up my younger sister's belief for years afterward. It wasn't until years later, after my sister had stopped believing, that I started to wonder why adults would want children to believe in Santa Claus, and whether their reasons for it were actually good.

I don't think it ever really encouraged me to question adults' motives so much as learning to question adults' motives led me to question it. I was a bit surprised when I started to learn how traumatic the discovery can be for children, since my own realization never seemed like a big deal to me.

Comment author: kybernetikos 25 December 2010 09:46:00PM 1 point [-]

that I started to wonder why adults would want children to believe in Santa Claus, and whether their reasons for it were actually good.

I think that lots of people have a kind of compulsion to lie to anyone they care about who is credulous, particularly children, about things that don't matter very much. I assume it's adaptive behaviour, to try to toughen up their reasoning skills on matters that aren't so important - to teach them that they can't rely on even good people to tell them stuff that is true.

Comment author: fischer 25 December 2010 06:55:57AM 2 points [-]

I'm bothered by the intertemporal implications of this, i.e. if I have $100 that I will spend to help the most humans possible, then I could either spend it today or invest it and spend $105 next year (assumed 5% ROR). Will I then ever spend the money on charity? Or will I always invest it, and just let this amassed wealth be distributed when I die?

Comment author: kybernetikos 25 December 2010 07:07:26AM *  7 points [-]

The good you do can compound too. If you save a child's life at $500, that child might go on to save other children's lives. I think you might well get a higher rate of interest on the good you do than 5%. There will be a savings rate at which you should save instead of give, but I don't think we're near it at the moment.
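The comparison can be made concrete with a toy calculation (both rates are invented for illustration): if the good done compounds at a higher rate than the financial return on savings, giving now beats investing and giving later.

```python
def value_after(principal, rate, years):
    """Compound growth: principal * (1 + rate)**years."""
    return principal * (1 + rate) ** years

# Assumed rates, purely illustrative: good compounds at 10%,
# savings earn the 5% mentioned in the parent comment.
give_now   = value_after(500, 0.10, 20)  # good compounding from year 0
give_later = value_after(500, 0.05, 20)  # invested, then given at year 20

print(f"good done by giving now:   {give_now:.0f}")
print(f"good done by giving later: {give_later:.0f}")
```

Under these assumptions giving now produces roughly 2.5 times as much good over twenty years; the conclusion flips only if the savings rate exceeds the rate at which good compounds.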

Comment author: kybernetikos 25 December 2010 05:58:08AM *  16 points [-]

Most of us allocate a particular percentage to charity, despite the fact that most people would say that nearly nothing we spend money on is as important as saving children's lives.

I don't know whether you think we overestimate how much we value saving children's lives, or underestimate how important Xbox games, social events, large TVs and eating tasty food are to us. Or perhaps you think it's none of that, and that we're simply being irrational.

I doubt that anyone could consistently live as if the difference between renting a nice flat and renting a dive was one life per month, or as if halving normal grocery consumption for a month saved a child's life that month, etc. If that's really the aim, we're going to have to do a significant amount of emotional engineering.

I also want to stick up for the necessity of analysing the way that a charity works, not just what they do. For example, charities that employ local people and local equipment may save fewer people per dollar in the short term, but may be less likely to create a culture of dependence, and may be more sustainable in the long term. These considerations are important too.

Comment author: kybernetikos 01 December 2010 03:01:26PM *  3 points [-]

I have to admire the cunning of your last sentence.

Or have I accidentally defected? I can't tell.

EDIT: I think the 'wizened' correction was intended to be a joke. When I read your piece originally the idea of you 'wizening up' made me smile, and I suspect that the corrector just wanted to share that idea with others who may have missed it.

Comment author: ata 29 November 2010 09:06:15PM 4 points [-]

Could that be a kind of intellectual assent vs belief test for Many Worlds?

No, because it assumes you're indifferent to any effects you have on worlds that you don't personally get to experience.

Comment author: kybernetikos 29 November 2010 09:12:50PM *  1 point [-]

I suppose the goal you were going to spend the money on would have to be of sufficient utility if achieved to offset that in order to make the scenario work. Maybe saving the world, or creating lots of happy simulations of yourself, or finding a way to communicate between them.

Comment author: kybernetikos 29 November 2010 08:54:47PM 0 points [-]

Imagine a raffle where the winner is chosen by some quantum process. Presumably under the many worlds interpretation you can see it as a way of shifting money from lots of your potential selves to just one of them. If you have a goal you are absolutely determined to achieve and a large sum of money would help towards it, then it might make a lot of sense to take part, since the self that wins will also have that desire, and could be trusted to make good use of that money.

Now, I wonder if anyone would take part in such a raffle if all the entrants who didn't win were killed on the spot. That would mean that everyone would win in some universe, and cease to exist in the other universes where they entered. Could that be a kind of intellectual assent vs belief test for Many Worlds?
