The self-unfooling problem
Inspired by PuyaSharif's conundrum, I find myself continually faced with the opposite problem, which is identical to the original except in the bold-faced sentences:
You are given the following information:
Your task is to hide a coin in your house (or any familiar finite environment).
After you've hidden the coin, your memory will be erased and restored to its state just before you received this information.
Then you will be told about the task (i.e. that you have hidden a coin), and asked to try to find the coin.
If you find it you win; the faster you find it, the better.
Where do you leave the coin so that when you have no memory of where you put it, you can lay your hands on it at once?
For just one coin, you might think up some suitable Schelling point, but now multiply the task a thousandfold, for all of your possessions. (I am not a minimalist; of books alone I have 3500.) How do you arrange all your stuff, all your life, in such a way that everything is exactly where you would first think of looking for it?
[LINK] "The Limits of Intelligence"
Article in current Scientific American (first para and bullet points, rest is paywalled).
Podcast by the author (free).
The author, Douglas Fox, argues that there may be physical limits to how intelligent a brain made of neurons can become, limits that may not be very distant from where we are now.
He makes evolutionary arguments at a couple of points, suggesting that he is talking about how smart an organism could have evolved, rather than how smart we might make ourselves; he certainly isn't talking about how smart a machine we might create out of different materials.
From the podcast (I don't have access to the article):
Four routes to higher intelligence, which he argues won't get us very far:
- Increase the speed of axons. But that means making them fatter, which drives the neurons further apart, neutralising the gain.
- Increase brain size. That needs more energy, and before long you get something unsustainable. You get longer pathways in a larger brain, which slows them down. The neurons will make more connections, which makes them bigger, so the number of neurons scales slower than the volume of the brain. And anyway, whales and elephants have bigger brains than us but don't seem to be more intelligent, and cows have brains a hundred times the size of a mouse brain but aren't a hundred times smarter. So brain size doesn't seem to matter; at best the relationship with intelligence is unclear.
- Packing more neurons into the existing volume by making them smaller. You run into signal to noise problems. The ion channels involved in generating action potentials are a certain size, and you must have fewer in a smaller neuron, hence more random variation. The result is neurons spontaneously firing.
- Offload intelligence support. Books and the internet will remember things for you and help you tap the collective intelligence of your social network. Compare social insects doing things that they couldn't do individually. But by alleviating the necessity of intelligence this may even have reduced the evolutionary pressure to get smarter.
He's described simply as an "award-winning author", but I don't know if he has any scientific background, and there are too many people of the same name to Google him.
When is further research needed?
Here's a simple theorem in utility theory that I haven't seen anywhere. Maybe it's standard knowledge, or maybe not.
TL;DR: More information is never a bad thing.
The theorem proved below says that before you make an observation, you cannot expect it to decrease your utility, but you can sometimes expect it to increase your utility. I'm ignoring the cost of obtaining the additional data, and any losses consequential on the time it takes. These are real considerations in any practical situation, but they are not the subject of this note.
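The theorem can be checked numerically. Here is a minimal sketch in Python of a toy decision problem of my own devising (a weather/umbrella example, not from the post): an agent either acts on its prior, or first receives a noisy observation, updates by Bayes' theorem, and then acts. The expected utility of the informed strategy, averaged over possible observations before any of them is made, is never less than the uninformed one.

```python
# Toy illustration (assumed example, not the post's formal proof):
# value of information is non-negative in expectation.

# Prior over the hidden state.
prior = {"rain": 0.3, "sun": 0.7}

# Utility of each action in each state.
utility = {
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.2,
    ("no_umbrella", "rain"): -1.0, ("no_umbrella", "sun"): 1.0,
}
actions = ["umbrella", "no_umbrella"]

# Likelihood of each observation given the state (an 80%-accurate forecast).
likelihood = {
    ("wet_forecast", "rain"): 0.8, ("dry_forecast", "rain"): 0.2,
    ("wet_forecast", "sun"): 0.2, ("dry_forecast", "sun"): 0.8,
}
observations = ["wet_forecast", "dry_forecast"]

def expected_utility(action, belief):
    return sum(belief[s] * utility[(action, s)] for s in belief)

# Without observing: commit to the single best action under the prior.
u_no_info = max(expected_utility(a, prior) for a in actions)

# With observing: for each possible observation, update by Bayes' theorem,
# act optimally under the posterior, and average over observations.
u_with_info = 0.0
for obs in observations:
    p_obs = sum(likelihood[(obs, s)] * prior[s] for s in prior)
    posterior = {s: likelihood[(obs, s)] * prior[s] / p_obs for s in prior}
    u_with_info += p_obs * max(expected_utility(a, posterior) for a in actions)

print(f"EU without observation: {u_no_info:.3f}")   # 0.440
print(f"EU with observation:    {u_with_info:.3f}") # 0.768
assert u_with_info >= u_no_info - 1e-12  # the theorem: info never hurts
```

The inequality holds for any priors, likelihoods, and utilities you plug in, because the uninformed agent's policy (ignore the observation and act as before) is always available to the informed agent, so the informed maximum can only match or exceed it.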
Omega and self-fulfilling prophecies
Omega appears to you in a puff of logic, and presents you with a closed box. "If you open this box you will find either nothing or a million dollars," Omega tells you, "and the contents will be yours to keep." "Great," you say, taking the box, "sounds like I can't lose!" "Not so fast," says Omega, "to get that possible million dollars you have to be in the right frame of mind. If you are at least 99% confident that there's a million dollars in the box, there will be. If you're less confident than that, it will be empty. I'm not predicting the state of your mind in advance this time, I'm reading it directly and teleporting the money in only if you have enough faith that it will be there. Take as long as you like."
Assume you believe Omega. Can you believe the million dollars will be there, strongly enough that it will be?
A gene for bad memory? (Link)
Original paper: "RGS14 is a natural suppressor of both synaptic plasticity in CA2 neurons and hippocampal-based learning and memory"
Sci. Am. article: "Knocking Out a 'Dumb' Gene Boosts Memory in Mice"
Science Daily article: "Gene Limits Learning and Memory in Mice"
They haven't found any deficit in the mice with RGS14 knocked out.
Reaching the general public
I recently discovered that in my home town (Norwich, England) there is currently running a series of "Café Conversations" in which some faculty members of the university that I work at are giving talks/hosting discussions on various topics. The meetings take place in a café that I know, which has room for about 20 people, and are open to the general public. Titles of some of the meetings already arranged are "What is infinity?", "Increasing happiness, decreasing consumption", "Bioplastics: waste product or gold mine?", one on the nature of boredom (really about how old, retired people find things to do), and several on environmental topics. I have not been to any of them -- in fact, only the first of them has happened so far.
The obvious thing for me to do is to volunteer something on rationality, but quite apart from whether I would be able to do that at all (not being the friendly, outgoing, charismatic sort suited to leading such a meeting), a problem that I foresee is this: these meetings are intended for the general public. This is nothing like a LessWrong meetup, or a Singularity conference, or delivering a lecture which, while nominally open to the general public, is not actually intended for them.
Has anyone here had experience of communicating about rationality, one-to-one or one-to-a-small-roomful, with the general public? How do you approach the matter, and how far can you expect to get?
Norfolk, by the way, has given the world the expression "Normal for Norfolk". Go on, Google it.
Christmas
What does a rationalist do for Christmas (or whatever analogue is going on around you at this time)? Stay at home and grumble, "Bah, humbug! Stop having-fun-for-bad-reasons, and did you know that Láadan has a single word for that concept?"?
Attempting to light a candle instead, I am giving my teenaged nephew, who was into science but is now into history, "Guns, Germs and Steel", which combines both. Someone else (I haven't decided who) is getting "The Atheist's Guide To Christmas" which has chapters by Richard Dawkins, Ben Goldacre, Simon Singh, and the like.
What are you doing for Christmas?
Aieee! The stupid! it burns!
Last Wednesday (2010 Dec 01), BBC Radio 4 broadcast a studio discussion on the question: "should we actively try to extend life itself?" The programme can be listened to from the BBC here for one week from broadcast, and is also being repeated tomorrow (Saturday Dec 04) at 22:15 BST. (ETA: not BST, GMT.)
All of the dreadful arguments for why death is good came out. For uninteresting reasons I missed a few minutes here and there, but in what I heard, not one of the speakers on any side of the question said anything like, "This is a no-brainer! Death is evil. Disease is evil. The less of both we have, the better. There is nothing good about death, at all, and all the arguments to the contrary are moral imbecility."
Instead, I heard people saying that work on life extension is disrespectful to the old, that to prolong life would be like prolonging an opera, which has a certain natural size and shape, that the old are wise, so if we make them physically young then old people won't be old, so they won't be wise. Whatever cockeyed argument you can construct by scattering into a Deeply Wise template the words "old", "young", "wise", "decrepit", "healthy", "natural", "unnatural", "boredom", "inevitable", "denial", I heard worse.
If I can bear to listen again to the whole thing just to check I didn't miss anything important, I may write something on their discussion board.
Outreach opportunity
Ars Technica are holding a competition for people to make a science video up to 3 minutes long "to explain a scientific concept in terms that a high school science class would not only understand, but actually be interested in watching". Prizes in three categories: biology, physics, and mathematics. Deadline is December 25. More details here.
Anyone want to have a go at Bayes' theorem? Cognitive bias? Defeating death? Invisible purple dragons?