Open Thread: October 2009
Hear ye, hear ye: commence the discussion of things which have not been discussed.
As usual, if a discussion gets particularly good, spin it off into a posting.
(For this Open Thread, I'm going to try something new: priming the pump with a few things I'd like to see discussed.)
Comments (425)
I have something of a technical question; on my personal wiki, I've written a few essays which might be of interest to LWers. They're in Markdown, so you would think I could just copy them straight into a post, but, AFAIK, you have to write posts in that WYSIWYG editor thing. Is there any way around that? (EDIT: Turns out there's an HTML input box, so I can write locally, compile with Pandoc, and insert the results.)
The articles, in no particular order:
(If you have Gitit handy, you can run a mirror of my wiki with a command like
darcs get <http://www.gwern.net/> && cd www.gwern.net && gitit -f static/gwern.conf.)

You should be able to just copy and paste the HTML version into the WYSIWYG editor and it will magic something for you.
There is a button in the editor that allows you to enter raw HTML (and it should be easy to write a regex script to extract whatever you need).
The first essay was the best IMO. What do you think about banning net-unproductive websites?
As for your claim that old is as good as new - it's not.
Yes, about half of them. Not all were actually good, IMDB has some systemic biases. Good movies are much less common than you claim.
Also, you cannot just decide to skip making mediocre movies (or anything else) and only do the good ones. At best, by halving the number of movies made, you'll halve the number of great movies made. Due to expected positive externalities (directors and so on learning from previous movies how to make better ones), it might lower the number of great movies even more.
If you make a list of the best movies, they tend to be more recent. Looking at IMDB, which I consider very strongly biased towards old movies, the top 250 are from:
Which is quite strongly indicative that the movie-making industry is improving (and this effect is underestimated by IMDB quite considerably). On the list of movies I rated 10/10 on IMDB, only 1 out of 28 is not from the 1990s or 2000s.
It's also true for books - progress is not that fast, but I can think of very few really great books from earlier than the mid-20th century. Or highly enjoyable music from earlier than the last quarter of the 20th century. No solid data here; it might be due to the progress of technology in the case of music, and a better cultural match with me in the case of books.
Really? Really? I would put Mozart, Bach or Verdi against absolutely anyone from 1975 to the present.
This is obviously a matter of taste. I really like Ode to Joy, but that's the only old music that has a ghost of a chance of competing for my affections on a par with my favorite show tunes or other more recent selections. If you like a lot of old music and not a lot of new music, it just means that you a) have common tastes with people who were rich music patrons in the Golden Age of your choice, or b) you're succumbing to some signaling effect having to do with the perceived absolute quality of old dead white musicians' work. If there is something like objective musical quality out there (which is a matter of open debate in aesthetics), it's probably very fuzzy. Maybe Ode to Joy is objectively better than Sk8er Boi, but the jury is out and they don't seem inclined to come back soon.
Obviously it's a matter of taste, yes. (And I do think about the signaling effects of my musical tastes from time to time; it is rather an interesting topic.) I was only putting forth my "no good music has been written since the death of Gershwin"* opinion to contrast with taw's "no good music was written before 1975" opinion, in order to produce a synthesis that would support gwern's original contention that enough art now exists that we needn't subsidize more of it.
*not actually my opinion, but close
For me, and as far as I can tell the vast majority of other people, they're just not terribly enjoyable.
I'm trained as a classical pianist, and I still don't enjoy Mozart, Verdi, Scarlatti, or pretty much any other of the classical period composers. I love Bach, but I'm not familiar with other baroque composers.
But mainly, I really enjoy romantic and modern classical composers. I'd absolutely agree with the thesis that music has been getting better and better, even limiting oneself to classical music. (Bach is an amazing exception.)
Comparing classical to popular music is very interesting. Perhaps the difference is that classical music requires a very developed ear in order to enjoy, and so it only appeals to a much smaller subset of people--those with training or high musical talent--while still being comparable or superior in quality to popular music. I would compare it to wine, except there's strong evidence that wine appreciation is almost entirely about status. I'm not sure if there's anything else to compare it to. Programming as an art form?
I think enjoying poetry or literature is a good comparison. Both take effort and some hard work to be able to appreciate and are considered dull and boring by people with no training/study in the relevant discipline. They all also unfortunately appeal to some people's shallow sense of "high culture" and thereby encourage inauthentic signaling by lots of people that don't really enjoy them. It's easy to understand that if you had no experience yourself, and your experience with a small number of people who profess enjoyment is that they are engaged in false signaling, that you would think there is nothing more to it than that, that everybody who professes passion is just engaged in false signaling.
I'm convinced that most people who took a music appreciation class and studied music theory and ear training for a year, combined with some music lessons, would at the end of that process have a completely different reaction to classical music (assuming they did it all by choice and weren't forced into it by parents).
Mightn't that just be because those courses are specifically to teach appreciation of those kinds of music? I expect it's probably possible to teach people who don't like rap, or country, to appreciate those genres; but because rap and country don't fit the shallow sense of high culture, no one is motivated to learn to appreciate them if they don't already. There is very little net benefit to learning to appreciate a new kind of music - there is abundant music in most genres, and one can easily fill one's ears with whatever one can most readily enjoy, so you probably don't get more total enjoyment from music by adding to your enjoyed genres. In the case of classical music, the benefit of learning to like it isn't really in the form of enjoyment of classical music; it's in the form of getting to sincerely claim to like classical music, and no longer being left out when highly cultured people discuss classical music.
Music theory, no, but the others, yes. (I wouldn't think music theory would increase classical appreciation more than other genres, though.)
Disagree. Whatever the genre, more variety means listening is less tiring (because less monotonous) and, on the whole, more edifying. Each genre is enjoyed differently, and stimulates different parts of the mind. And in the specific case of classical music, on the theory that it is deeper and richer than other music (in the same way that set theory is deeper and richer than propositional logic, or Netflix is deeper and richer than Blockbuster) the limit of enjoyment is actually higher.
I took most of a year of AP music theory in high school (dropped out of it because I was being picked on) and never got the impression that we were learning about anything but archaic, old rules of music followed by dead composers. That, and how to take musical dictation, but none of the examples were contemporary. Was my music theory teacher just incompetent? Did I miss the generally applicable parts by leaving the class early?
And while having a variety of music is definitely good, there's plenty of variety within a genre! It doesn't seem obvious to me that you can get more valuable variety per ounce of effort by taking classes to learn to appreciate more genres than you can by spending time on Pandora.
My music theory course only had a slight emphasis on classical music. (Mainly because classical music is more analyzable with theory, I guess.) Probably your textbook was just old or inferior. But I got very little out of the course anyway.
I'm not suggesting that it's necessarily worth the effort to increase one's appreciation of classical music, given the opportunity cost. (I'm not exactly chomping at the bit to appreciate Ulysses or Gravity's Rainbow, or Hegel or Kant or Foucault or Derrida. Or wine, for that matter.) But the easiest way would probably be to pick a CD with some good classical music on it and listen to it many times through until you start to understand it musically. Courses are likely overkill. When I first started learning Bach (around the age of 10) it made no musical sense to me at all. I forget how long until I started to understand it, so I don't know how long you'd have to listen to start to get it. Maybe too long to bother.
Hmmmmmmmmmmm no. Doubt there's a good way to resolve this disagreement.
Perhaps there are some genres with more or less variety than others? Or we're counting genres differently?
As for learning if coming to it as an adult, I'd recommend resources like Leonard Bernstein's Young People's Concerts (and any of his many writings on music, such as The Joy of Music), as well as Aaron Copland's What to Listen for in Music and works of that nature.
The key point in my opinion is that you have to learn to hear more in the music, to be able to hear and follow the different voices in a fugue, or recognize the development of a theme in a sonata allegro form, and this sort of ability only comes about through some offline study and intellectual training that is then applied when listening to music and the knowledge really comes alive.
Oh man I miss Pandora since they stopped streaming to the UK. :(
On topic: I had quite a few years of music lessons (though I wasn't really much good) and some musical theory, which I really enjoyed. And I do quite like listening to classical music in a vague sort of way, but I wouldn't say I have an "appreciation" for it: it's not as though I can pick out features or analyse it or anything. So am I appreciating it without a tuned ear, or am I just unaware of the work my bit of theoretical knowledge is doing behind the scenes?
I'd say appreciation is really just a synonym for enjoyment. You can be a world-class performer without knowing any theory at all.
I think music theory -- including ear training -- would disproportionately increase classical appreciation (but would also improve appreciation for other forms too). The reason is simple: classical music is more complex musically, so it rewards a more discriminating ear and a richer sense of harmony, counterpoint, etc.
There's a lot of popular music that I love and think is very interesting musically, harmonically, etc., but classical music is usually so much deeper and requires much more skill and musical knowledge to create (and also to appreciate). If you want to succeed in the classical world, as a performer or composer, you have to start by the age of 6, you have to be supremely talented, you have to work obsessively until you are accepted into a good conservatory, and then work even harder still. Your entire life is basically nothing but music from a very early age. That was true of Bach, Mozart, Beethoven, etc., and it is still true today. The situation with popular music is completely different. You can pick it up as an adult, and if you're talented, you might still have a successful career. You can pick it up as a teenager, and within a few years have developed enough musically to be on par with almost any other popular band. It seems pretty clear that something that takes decades of study and practice (and involves study of hundreds of years of music history) is going to involve more skill (and make more use of skills acquired) than something that can be achieved in years of study and practice. And when the composer is relying on decades of study and an intimate knowledge of hundreds of years of changes in music theory, counterpoint, etc., it is definitely going to take some work on the part of a listener to do more than skim the surface in terms of enjoyment and appreciation.
How would you know this given your admittedly limited experience with classical music?
Speaking for myself, there is lots of music that I love listening to, in many different genres, but nothing else has such power to move me as classical music at its best does -- for example -- the Confutatis from Mozart's Requiem, or the Bach D minor Chaconne, or in a lighter vein that I think anybody can appreciate and feel moved by, Paganiniana or the Vitali Chaconne.
I love lots of popular music, and probably listen to popular music about as much as I do classical, but there is a certain kind of ecstatic -- almost mystical -- experience that some classical music triggers that I've never gotten with popular music.
Okay - so you get special, unique value from classical. Meanwhile, I get special, unique value from Phantom of the Opera. Why should I think that learning to like classical music is more worth my time - given that I'm now left bored by most classical, or think of it as pleasant background noise - than pirating more Andrew Lloyd Webber?
That argument only works if we aren't allowed to enjoy novelty.
We can still enjoy novelty! For instance, I have a near-perfect track record of liking show tunes. There's lots. I can get a steady supply of novelty, supplementing older musicals with the new ones that come out every year and the other sorts of music I like. I don't need to learn to appreciate entire new genres to do it. Unless you mean that appreciating a new genre is a qualitatively different form of novelty? But then learning to appreciate the new genre is self-defeating. By the time you've learned to like it, you've already been exposed to lots of it and it's no longer new.
Do you actually feel this aversion? Because it's so... foreign to me. Learning to enjoy a new genre of music is always a fascinating discovery. I hear a curious snippet somewhere and go hmmm, gotta investigate deeper, then 24 hours later I'm swimming in the stuff, following connections, reading and listening... sort of like this (warning, that site is like crack for the right kind of person.)
It's not an aversion. If I had nothing better to do, or had a terrible time finding anything new to listen to, I'd be okay with learning about and learning to appreciate classical music. But as it happens, new, immediately fun music enters my life at a pretty satisfactory rate. I added a new artist to my library just yesterday because my roommate played his CD in the car on the way to the grocery store and it sounded neat. There's no reason for me to spend extra time on music that doesn't promptly catch my ear, when I can just hit up friends for personalized recommendations, cruise Pandora, and keep up with the artists I already enjoy - unless I feel like succumbing to the status signals that make classical different from other music!
Obviously, if we were actually going to work through this data we would want to know the rate of best-movie-ranking rather than the absolute numbers. Just as importantly, we'd want to know the frequency of best-movie-ranking relative to the number of movies watched from each decade, such that best-movie-rankings aren't simply dependent on availability.
In my experience, of the older movies I have watched, a greater fraction were strongly memorable than of the newer movies I have watched. In part, I suspect this is because I watch older movies intentionally, knowing that they are reputed to be good, where I watch newer movies with a somewhat lower bar for putting in the effort (because they are available in theaters, are easier to talk about, etc.).
Assuming the best old movies don't get filtered out and stay available, this data is accurate for our purpose.
The IMDB top list is based on Bayes-filtered ratings: it says what proportion of people watching the movie loved it, not how many people watched it. It will automatically be biased towards intentional watching (and therefore towards old movies), and the bias is in my opinion fairly strong. Still, in spite of this, new movies win.
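The "Bayes-filtered" weighting can be sketched roughly like this. The formula shape matches what IMDB has published for its Top 250, but the minimum-votes threshold `m` and site-wide mean `C` below are illustrative assumptions, not IMDB's actual current values:

```python
# IMDB-style Bayesian weighting: a film's mean rating R is shrunk
# towards the site-wide mean C; the vote count v, measured against a
# minimum-votes threshold m, controls how much shrinkage happens.
def weighted_rating(R, v, m=1250, C=6.9):
    """Weighted rating: films with few votes are pulled towards C."""
    return (v / (v + m)) * R + (m / (v + m)) * C

# A 9.0 average from only 100 voters ranks below an 8.5 from 100,000:
print(weighted_rating(9.0, 100) < weighted_rating(8.5, 100_000))  # True
```

The point relevant to the thread: the filter corrects for small, self-selected audiences, but it cannot correct for the fact that it is mostly fans of an old film who bother to rate it at all.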
To be clear, I agree that the list should be biased towards old movies in the manner you describe.
The total number of films created has been rising for a while, however (under the "Theatrical Statistics" report here, for instance). It's not entirely unreasonable to believe that over 3x as many films were made in the 2000s as in the 1930s, though; compare Wikipedia's lists of 1930s films and 2000s films. The latter is dramatically longer.
Like I said, we would want to know the fraction of films making the Top 250 list, not the absolute numbers.
Random thoughts:
Values Dissonance is a real problem, even when applied over the scale of 50 years. Also, ScienceMarchesOn and even History Marches On. The more things we learn, the more things we can tell stories about.
I've found that, by reading an awful lot of books, I feel like I understand literature and storytelling. On the other hand, I really don't understand music very well. I can't tell what qualities make one piece of music good and another not as good. I can play the piano pretty well, but I can't really improvise or compose. My taste in music (or complete lack thereof) seems to have a great deal to do with the mere exposure effect; I like the kinds of music that I hear a lot and don't like the kinds of music that I hear less of.
Also, one other big difference between much contemporary popular music and much classical music is that a lot of contemporary popular music has lyrics that listeners can understand, and a lot of classical music is entirely instrumental or in foreign languages.
(Edit to say that this is in response to the culture and aesthetics article)
I take there to be a number of different things we want out of a piece of cultural production.
Expression of universal aspects of human nature, emotions.
Sensory stimuli (why old horror movies aren't scary, older movies have longer shots, and Michael Bay has a career).
Shared cultural experience (we like to consume works that are already culturally embedded; we want to share in something nearly everyone experiences - this is why it is worth reading Homer, seeing Star Wars, and listening to the Beatles).
Capturing the spirit of the times (we like it when works express what is unique in us, works that capture our sense of place and time, how we're different from our parents, etc. This is why punk music wouldn't have worked in the 18th century, why we have shows like The Wire, and why Rambo's motivations are really confusing for people born after 1980 who never took a modern history course).
Your argument seems to turn on saying that whatever piece of culture you're consuming now you could be equally satisfied with something older. This seems to be the case with regard to the first criterion but once one admits the second and the fourth new production is essential.
One of the old standard topics of OB was cryonics: why it's great even though it's incredibly speculative & relatively expensive, and how we're all fools for not signing up. (I jest, but still.)
Why is there so much less interest in things like caloric restriction? Or even better, intermittent fasting, which doesn't even require cuts in calories? If we're at all optimistic about the Singularity or cryogenic-revival-level-technology being reached by 2100, then aren't those way superior options? They deliver concrete benefits now, for a price that can't be beat, and on the right time-scale*.
Yet I don't think I've seen Robin or Eliezer even once say something like "and if you don't buy the benefits of cryogenic preservation, why on earth aren't you at least doing CR?".
* Assume we're in or close to our teens - as many of the readers are - and would live to 80 or 90 due to our family background; that pushes our death date out to ~2080. Assume CR/IF deliver smaller benefits in humans than in lower organisms, say, 20%; that gets us another 18 years, or to 2098, which is close enough to 2100 as to make no difference.
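That back-of-the-envelope arithmetic, spelled out with the footnote's own assumed numbers (none of these figures are data; they are the comment's stipulations):

```python
# Assumptions from the footnote: a teenage reader in 2009, living to
# ~80-90 (death around 2080), and CR/IF giving a 20% lifespan benefit
# in humans -- well below the effect reported in short-lived organisms.
full_lifespan = 90            # optimistic family-background estimate, years
baseline_death_year = 2080    # ~2080 for a reader in their teens in 2009
cr_benefit = 0.20             # assumed human CR/IF effect

extra_years = full_lifespan * cr_benefit         # 18.0
cr_death_year = baseline_death_year + extra_years
print(extra_years, cr_death_year)  # 18.0 2098.0
```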
In the same vein, although I fear I tread too close to 'life-hacking' territory here (and I recall the LW community had consciously decided to avoid descending down into the 'cool tips/tools' territory? or am I wrong about that?), I've noticed very little discussion of the various substances labeled 'nootropics'.
We've discussed quite a bit how to motivate ourselves and increase the percentage of time spent being 'productive'; shouldn't it be equally fascinating to us that things like modafinil* can eliminate the need for sleep, gaining hours? Even if, averaged out, modafinil only cuts the need for sleep in half or by a quarter, well, it's the rare productivity or mind technique that saves you 4 and a half or 2 and a quarter hours a day.
* which I know for a fact some LWers happily & effectively use
This is an Open Thread. No restrictions here. Though, I wish we'd replace these with a proper forum that's active throughout the month.
I definitely agree that a forum would allow for more discussion, particularly of the less-momentous but still-beneficial topics. In particular, I think that discussions of actual strategies people have tried, what has worked and not worked, could actually be highly beneficial. I see them as data we need to collect in order to begin forming some kind of method for actually helping rationalists win in real world situations.
Even a general forum would be great - I wouldn't mind finding out what books and movies the rest of LW enjoys; this place is what turned me onto Torchwood. Though I could understand worries that it might distract from the core purpose of this site.
I think a forum here would be fantastic. I don't believe it would detract from the articles, it would just give discussions that have potentially smaller interest bases a chance to still develop.
I'm not a fan of having a Less Wrong forum. One of LW's advantages is that it has low volume and high quality. It doesn't take much of my time to read and most of the posts are worth reading. Forums are the opposite: higher volume and lower quality. This makes forums a bigger time sink for everyone: moderators, posters, and readers.
I think the low volume high quality nature of the LW front page is why a forum would be a bonus. People could hash out more low to mid quality ideas without detracting from the more developed postings that the readers who want to invest less time are looking for. I'm not a fan of a forum in lieu of the current LW format, but as an idea incubator, I think it could be interesting and of use.
Body building is extremely at odds with fasting.
Not necessarily true, actually. Fasting can release a good deal of growth hormone. It can also keep your insulin response in good condition. Intermittent fasting, in particular, doesn't even decrease the total number of calories a person eats, so could be ideal for body building.
True, if you mean body building as "bulking up". But I work with weights partially to keep from losing muscle mass when dieting. If you diet without strength training you lose muscle mass right along with the fat.
1) I can't work and starve at the same time.
2) State of evidence in favor of CR wasn't very good last time I checked. I recall something along the lines of, "Cutting calories by 40% extends the lifespan of (some short-lived creature) by a week, and it's looking like it may extend human lifespan by a week as well."
I remember that there is a considerable benefit for mice (not a week), but no good evidence for people. On the other hand, there is lots of evidence about correlation of weight with all sorts of diseases, which themselves kill.
I remember that there is this resource called Wikipedia. So I look up Calorie Restriction. I find there's a very detailed research summary there.
On primates, it starts out with this:
"A study on rhesus macaques, funded by the National Institute on Aging, was started in 1989 at the University of Wisconsin-Madison. This study showed that caloric restriction in rhesus monkeys blunts aging and significantly delays the onset of age related disorders such as cancer, diabetes, cardiovascular disease and brain atrophy. The monkeys were enrolled in the study at ages of between 7 and 14 years; at the 20 year point, 80% of the calorically restricted monkeys were still alive, compared to only half of the controls...."
The section on negative effects talks mainly about what happens when nutrition is poor, or when calories are too low to sustain life. My favorite: "A calorie restriction diet can cause extreme hunger that may lead to binge eating behaviour." Uh-huh. Every guide on CR I've read counsels taking a gradual approach, to give your body time to adjust, and people on CR diets often report that the feelings of hunger attenuate.
The so-called CRON approach ("Calorie Restriction with Optimal Nutrition") focuses on preventing malnutrition, as you'd expect from the name, and is decidedly not "starvation", which is obviously an eventually terminal condition.
There is promising, but inconclusive, evidence for a positive effect with human beings. If it works well for monkeys, yeast, fruit flies, nematodes and mice, it's hard to see why it wouldn't work for human beings. But human beings are exceptional in a number of ways, so I suppose it's possible it doesn't work for us.
Indeed. Most mammals tend to have roughly the same number of heartbeats in a lifespan; short-lived mammals such as mice have much faster heartbeats than long-lived mammals such as elephants. Nearly every mammal on the planet (except those that hibernate) has a lifespan of about one billion heartbeats, give or take a few hundred million here and there.
Humans have a lifespan of two billion heartbeats.
Compared to other mammals, we already have a greatly enhanced lifespan. It's quite possible that whatever switch calorie restriction turns on in mice, humans already have turned on by default.
I thought that mild "obesity" (BMI 25) was associated with lower lethality rates than being thin (due to thin people dying more easily when sick; apparently that body fat actually does do its required job sometimes). Normal weight is probably still better, but is that what CR gets you?
I don't know about CR, but I've done IF (intermittent fasting) for months at a time while maintaining my normal body weight.
You are probably right, hence the disclaimer that it's unchecked memory. There clearly must be some unknown point after which the diet starts to kill you, and this point may be very human-specific.
Looking at mortality rates in the general population broken down by BMI gives a poor guide to the effects of dietary energy restriction - since many people get thin through being sick or malnourished.
A fairly typical study on the topic:
"How Much Should We Eat? The Association Between Energy Intake and Mortality in a 36-Year Follow-Up Study of Japanese-American Men"
Actually, the lower death rates with moderate rather than lower BMI was an early claim, and was later shown to be the result of people being thinner as the result of previously undiagnosed illnesses. I don't remember the source, as I have read several books on the subject; I think it was from Fumento's "The Fat of the Land", but it could have been several others (none of which supported the superiority of moderate over lower BMI, until you get down to starvation levels, i.e., a BMI of less than 18).
There is evidence of benefit for non-human primates.
That assumes you're starving during intermittent fasting. Many practitioners actually find that they are much more clear-headed when they have not eaten recently.
My guess is that you're equating hypoglycemia with hunger. I eat a paleo diet, which has low levels of dietary carbohydrates. This forces the body to use gluconeogenesis to meet its glucose needs. Because you're producing it endogenously, your blood sugar remains completely steady. You only suffer from hypoglycemia when you're dependent upon exogenous sources of glucose, forcing you to eat every few hours. I much prefer the freedom to eat whenever I want.
I find that I'm more light-headed when I haven't eaten enough, but it's not the same as clear-headed.
There's prior discussion on this subject that you haven't read -- in particular, this.
There's even been a little discussion of hypoglycemia.
I just wanted to add myself as another data point: I have been low-carb for three months and I can vouch for this. (I also lost 10 kg)
If only I had known this when I was a kid. So many mid-mornings at school, hungry (and suddenly sleepy) because of "healthy" breakfast cereals!
One of my videos is about the topic. See:
"Tim Tyler: Why dietary energy restriction works"
Even if caloric restriction increases longevity, it doesn't protect you against death due to accident, disease, or violence.
A Bayesian gives the win to CR.
Actually, I think the costs of caloric restriction are higher than cryo and the benefits are less.
I'm a 24 year-old male. According to this actuarial table I can expect to be alive for 52 more years, which puts my death in 2061. I'll use gwern's numbers and assume caloric restriction increases life span by 20% in humans. In that case CR would give me 10 more years of life, moving my funeral out to 2071.
CR would only pay off if life-extension/singularity/whatever technology happens in that 10 year span. I'm very confident that advances in curing aging will happen sooner than 2060, so I'm not concerned about dying from old age. I am concerned about dying due to accident, disease, or violence, so I'm signed up for cryonics.
Caloric restriction doesn't cost money, but it does decrease quality of life. Hunger makes it harder for me to have fun. I can't think as clearly. I can't run or cycle as fast. I'm not nearly as productive. Cryonics doesn't require a major lifestyle change and it doesn't hurt my current quality of life.
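The comparison above, sketched with the comment's own figures (the 52-year expectancy comes from the actuarial table the commenter cites; the 20% CR effect is gwern's assumed number, not data):

```python
# A 24-year-old in 2009 with 52 expected years left; CR assumed to add
# 20% to remaining lifespan. CR only pays off if anti-aging technology
# arrives inside the extra window (2061-2071) -- otherwise the hunger
# is all cost and no benefit.
current_year = 2009
remaining_years = 52      # actuarial expectation for a 24-year-old male
cr_benefit = 0.20         # assumed human CR effect (gwern's number)

baseline_death = current_year + remaining_years       # 2061
extra_years = round(remaining_years * cr_benefit)     # 10
cr_death = baseline_death + extra_years               # 2071
print(baseline_death, extra_years, cr_death)  # 2061 10 2071
```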
What eating less energy does to your quality of life depends on how fat you are.
For many people in the west, eating less dietary energy would improve their quality of life - often rather dramatically.
First, let me explain why caloric restriction isn't for me: I weigh 120lbs and I exercise a lot.
I think you're overstating the benefits of caloric restriction and neglecting to mention other ways to get healthier, such as aerobic exercise. Also, there's a big difference between recommending that fat Americans eat less and recommending that fat Americans do caloric restriction.
Maybe - but protection against heart attack, stroke and cancer is worth quite a bit.
The real costs of caloric restriction are very high. We experience all sorts of negative symptoms when we are hungry: lack of attention, loss of sexual function, and physical pain. I am quite certain that I couldn't achieve a true CR diet if I tried. Even if I made a strong effort, there is still a fair chance I would wind up in an unhappy medium, in which I don't achieve the benefits because I couldn't pass some threshold at which CR becomes effective.
In fact, for most people, CR is probably impossible. Most of us do not even have the willpower to keep our weights in the "acceptable" range in spite of the fact that we idealize lean, low-fat bodies. We're battling millions of years of evolutionary programming.
However, we might see some of the same benefits from taking resveratrol or the forthcoming sirtuin drugs. Resveratrol is pretty cheap, much cheaper than CR (in terms of suffering), so I bet that would be a better candidate for most people than attempting (and likely failing) CR.
For you non-techies who'd like to be titillated, here's a second bleg about some very speculative and fringey ideas I've been pondering:
What do you think the connection between motivation & sex/masturbation is?
Here's my thought: it's something of a mystery to me why homosexuals seem to be so well represented among the eminent geniuses of Europe & America. The suggestion I like best is that they're not intrinsically more creative thanks to 'female genes' or whatever, but that they can't/won't participate in the usual mating rat-race and so in a Freudian manner channel their extra time into their art or science.
But then I did some googling looking for research on this, and though I didn't turn up much (it's a strangely hard area to search), I ran into some interesting pages on the links between motivation & dopamine, and dopamine & sex:
Which suggest to me an entirely different mechanism: it's not that they have more time, it's that they are having much less sex (even if only with their hand), which depletes dopamine less and leaves motivation strong to do other things they'd like to do. (Cryptonomicon readers might also be familiar with this theory from one memorable section with Randy.)
So: does anyone know any research testing this? As I said, I couldn't find much.
What suggests that homosexuals are getting less sex than heterosexuals in the first place? Naively, they are probably having more sex, and with more sexual partners, than the median heterosexual male.
Also, what suggests homosexuals are overrepresented among "eminent geniuses"? Let's use some objective benchmark: how many Nobel Prize winners were homosexual, and how does that compare with the societal average?
Some objective benchmark yes, Nobel Prize winners no. There are too few Nobel Prize winners in the first place, the categories aren't obviously the right ones, and the selection process is far too political.
There are 789 Nobel Prize winners. We can obviously throw away peace and literature, but the rest don't seem to be that politicized; at least I doubt they care much about scientists' sexual orientation.
It's as objective as it gets really, and very widely accepted. If there are any known gay Nobel Prize winners, I'm sure gay organizations would mention them somewhere.
Yahoo answers can think of only one allegedly bisexual one, but for all Wikipedia says it might have been just some casual experimentation, as he was married, so he doesn't count as gay.
If this is accurate, it means gays, at least the out-of-the-closet ones, are vastly underrepresented among Nobel Prize winners, definitely conflicting with the gay genius over-representation theory.
Genius being easier to claim in retrospect, I think the real claim is that until recent decades, there were plenty of nearly celibate homosexuals (for lack of public opportunities to seek out others, or from internalized stigmas).
The obvious thing to check is the contribution to science and art from other known celibates; plenty more examples (including Erdös) leap to mind.
Along with what orthonormal said, I definitely think that up until ~1960, the Nobel Prize committee was very careful, in all categories, not to give the award to a person of "ill repute", which includes, among other things, being gay. So Nobel Prize winnings wouldn't be informative.
However, you could control for this by checking out how many men won the prize before 1960, and would be suspected of being gay (i.e. old and never-married).
Can you think of a better list, or is the entire question non-empirical in practice?
There's a passing mention in George Ainslie's book on akrasia, /Breakdown of Will/, which struck me as interesting. At the moment I can't recall just what or where. I'll dip into it again and see if I can find it.
You can narrow that down to: Sexually frustrated people have more motivation to do other things. This makes evolutionary sense. People who are sex-starved want to raise their status to better their odds.
I plan to develop this into a top level post, and it expands on my ideas in this comment, this comment, and the end of this comment. I'm interested in what LWers have to say about it.
Basically, I think the concept of intelligence is somewhere between a category error and a fallacy of compression. For example, Marcus Hutter's AIXI purports to identify the inferences a maximally-intelligent being would make, yet it (and efficient approximations) has no practical application. The reason (I think) is that it works by finding the shortest hypothesis that fits any data given to it. This means it makes the best inference, on average, over all conceivable worlds it could be placed in. But the No Free Lunch theorems suggest that this means it will be suboptimal compared to any algorithm tailored to a specific world. At the very least, having to be optimal for all of the random and anti-inductive worlds should imply poor performance in this world.
The point is that I think "intelligence" can refer to two useful but very distinct attributes: 1) the ability to find the shortest hypothesis fitting the available data, and 2) having beliefs (a prior probability distribution) about one's world that are closest to (have the smallest KL divergence from) that world. (These attributes roughly correspond to what we intuit as "book smarts" and "street smarts" respectively.) A being can "win" if it does well on 2) even if it's not good at 1), since using a prior can be more advantageous than finding short hypotheses: the prior already points you to the right hypothesis.
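As a toy illustration of attribute 2) (all distributions here are made up): an agent whose prior has smaller KL divergence from the world's true distribution suffers less expected surprise on every observation, regardless of its hypothesis-finding skill:

```python
import math

# A hypothetical world: a distribution over three observable outcomes.
world = [0.7, 0.2, 0.1]

prior_a = [0.6, 0.3, 0.1]   # close to the world ("street smarts")
prior_b = [1/3, 1/3, 1/3]   # indifferent over all possible worlds

def kl(p, q):
    """KL divergence D(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def expected_log_loss(p, q):
    """Expected surprise when the world draws from p and we bet q."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# expected_log_loss(p, q) = entropy(p) + D(p || q), so the smaller the KL
# divergence from the world, the less the agent is surprised on average.
assert kl(world, prior_a) < kl(world, prior_b)
assert expected_log_loss(world, prior_a) < expected_log_loss(world, prior_b)
```

Nothing in the sketch depends on how either agent came by its prior, which is the point: the advantage comes from the prior itself.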
Making something intelligent means optimizing the combination of each that it has, given your resources. What's more, no one algorithm can be generally optimal for finding the current world's probability distribution, because that would also violate the NFL theorems.
Organisms on earth have high intelligence in the second sense. Over their evolutionary history they had to make use of whatever regularity they could find in their environment, and the ability to use this regularity became "built in". So the history of evolution shows the result of one approach to finding the environment's distribution, and making an intelligent being means improving upon this method, and programming it to "springboard" from that prior with intelligence in the first sense.
Thoughts?
This may be tangential to your point, but it's worth remembering that human intelligence has a very special property, which is that it is strongly domain-independent. A person's ability to solve word puzzles correlates with her ability to solve math puzzles. So you can measure someone's IQ by giving her a logic puzzle test, and the score will tell you a lot about the person's general mental capabilities.
Because of that very special property, people feel more or less comfortable referring to "intelligence" as a tangible thing that impacts the real world. If you had to pick between two doctors to perform a life-or-death operation, and you knew that one had an IQ of 100 and the other an IQ of 160, you would probably go with the latter. Most people would feel comfortable with the statement "Harvard students are smarter than high school dropouts", and make real-world predictions based on it (e.g. a Harvard student is more likely to be able to write a good computer program than a high school dropout, even if the former didn't study computer science).
The point is that there's no reason this special domain-independence property of human intelligence should hold for non-human reasoning machines. So while it makes sense to score humans based on this "intelligence" quantity, it might be totally meaningless to attempt to do so for machines.
Not so fast. Human intelligence is relatively domain independent. But human minds are constantly exploiting known regularities of the environment (by making assumptions) to make better inferences. These regularities make up a tiny sliver of the Platonic space of generating functions. By (correctly) assuming we're in that sliver, we vastly improve our capabilities compared to if we were AIXIs lacking that knowledge.
Human intelligence appears strongly domain-independent because it generalizes to all the domains that we see. It does not generalize to the full set of computable environments -- no intelligence can do that while still performing as well in each as we do in this environment.
Non-human animals are likewise "domain-independently intelligent" for the domains that they exist in. Most humans would die, for example, if dropped in the middle of the desert, ocean, or arctic.
Not just by making assumptions: you can learn (domain-specific) optimizations that don't introduce new info but improve ability, allowing you to understand more from the info you have (better conceptual pictures for natural science; math).
Another example of how domain-dependent human intelligence actually is: optical illusions.
Optical illusions are when an image violates an assumption your brain is making to interpret visual data, causing it to misinterpret the image. And remember, this is only going slightly outside of the boundary of the assumptions your brain makes.
This is a subtle point. The NFL theorem does prohibit any algorithm from doing well over all possible worlds. But Solomonoff induction does well on any world that has any kind of computable regularity. If there is no computable regularity, then no prior can do well. In fact, the Solomonoff prior does just as well asymptotically as any computable prior.
As is often the case, thinking in terms of codes can clear up the issue. A world is a big data file. Certainly, an Earth-specific algorithm can get good compression rates if it is fed data that comes from Earth. But as the data file gets large, the Solomonoff general-purpose compression algorithm will achieve compression rates that are nearly as good; in the worst case, it just has to prepend the code of the Earth-specific algorithm to its encoded data stream, and it only underperforms by that program's size.
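The prepend-the-decoder bound can be made concrete with a toy sketch (the codec and the decoder string are invented for illustration):

```python
# A hypothetical specialized codec for one kind of data ("AB" repeated n
# times), and a "universal" scheme that just ships the decoder's source
# followed by the specialized code.
DECODER_SRC = b"lambda payload: b'AB' * int(payload)"  # invented decoder

def specialized_encode(data: bytes) -> bytes:
    n = len(data) // 2
    assert data == b"AB" * n
    return str(n).encode()  # the Earth-specific code: just the repeat count

def universal_encode(data: bytes) -> bytes:
    # Worst case for the general scheme: prepend the specialized decoder.
    return DECODER_SRC + b"\n" + specialized_encode(data)

for n in (100, 10_000, 1_000_000):
    data = b"AB" * n
    gap = len(universal_encode(data)) - len(specialized_encode(data))
    print(n, gap)  # the gap stays at the constant len(DECODER_SRC) + 1
```

However large the data grows, the general scheme's disadvantage is the fixed decoder size, which vanishes as a fraction of the total.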
The reason AIXI doesn't work in practice is that the "efficient approximations" aren't really efficient, or aren't good approximations.
Okay, fair point. But by smearing its optimality over the entire Platonic space of computable functions, it is significantly worse than those algorithms tuned for this world's function. And not surprisingly, AIXI has very little practical application.
And that's unhelpful when, as is likely, you don't hit that asymptote until the heat death of the universe.
My point is that the most efficient approximations can't be efficient in any absolute sense. In order to make AIXI useful, you have to feed it information about which functions it can safely skip over -- in other words, feed it intelligence of type 2), the information about its environment that you already gained through other methods. Which just shows that those kinds of intelligence are not the same.
Actually Solomonoff induction is insanely fast. Its generality is not just that it learns everything to as good an extent as anything else, but also that typically it learns everything from indirect observations "almost as fast as directly" (not really, but...). The "only problem" is that Solomonoff induction is not an algorithm, and so its "speed" is for all practical purposes a meaningless statement.
When someone says "very fast, but uncomputable", what I hear is "dragon in garage, but invisible".
Generalize that to a good chunk of classical math.
The analog would be to theorem proving. No one claims that knowing the axioms of math gets you to every theorem "very fast" -- because the problem of finding a proof/disproof for an arbitrary proposition is also uncomputable.
A "solution" might be that only proofs matter, while theorems (as formulas) are in general meaningless in themselves, only useful as commentary on proofs.
Nevertheless, the original point stands: no one says "I've discovered math! Now I can learn the answer to any math problem very fast." In contrast, you are saying that because we have Solomonoff induction, we can infer distributions "very fast".
To be more precise, we can specify the denotation of distributions very close to the real deal from very little data. This technical sense doesn't allow the analogy with theorem-proving, which is about algorithms, not denotation.
Let's do another thought experiment. Say that humanity has finally resolved to send colonists to nearby star systems. The first group is getting ready to head out to Alpha Centauri.
The plan is that after the colonists arrive and set up their initial civilization, they will assemble a data archive of size T about the new world and send it back to Earth for review. Now it is expensive to send data across light-years, so obviously they want to minimize the number of bits they have to send. So the question is: what data format do the two parties agree on at the moment of parting?
If T is small, then it makes sense to think this issue over quite a bit. What should we expect the data to look like? Will it be images, audio, health reports? If we can figure something out about what the data will look like in advance (i.e., choose a good prior), then we can develop a good data format and get short codes.
But if T is large (terabytes) then there's no point in doing that. When the Alpha Centauri people build their data archive, they spend some time analyzing it and figuring out ways to compress it. Finally they find a really good compression format (=prior). Of course, Earth doesn't know the format - but that doesn't matter, since the specification for the format can just be prepended to the transmission.
I think this thought experiment is nice because it reveals the pointlessness of a lot of philosophical debates about Solomonoff, Bayes, etc. Of course the colonists have to choose a prior before the moment of parting, and of course if they choose a good prior they will get short codes. And the Solomonoff distribution may not be perfect in some metaphysical sense, but it's obviously the right prior to choose in the large T regime. Better world-specific formats exist, but their benefit is small compared to T.
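A rough way to play with the "agree on a prior before parting" idea is zlib's preset-dictionary feature, with the shared dictionary standing in for the agreed format (both the dictionary and the message below are made up):

```python
import zlib

# The shared dictionary plays the role of the data format the two parties
# agree on at the moment of parting.
shared_prior = b"colonist health report: atmosphere pressure temperature crops"
msg = b"health report: crops failing, temperature low, pressure nominal"

def code_len(data, zdict=b""):
    """Compressed size of data, optionally using a preset dictionary."""
    c = zlib.compressobj(level=9, zdict=zdict) if zdict else zlib.compressobj(9)
    return len(c.compress(data) + c.flush())

print("without shared prior:", code_len(msg))
print("with shared prior:   ", code_len(msg, shared_prior))
# The receiver needs the same zdict to decode -- exactly the requirement
# that the format be chosen before the colonists leave.
```

For a short message the agreed-on prior buys noticeably shorter codes; for a terabyte archive, prepending the format spec costs comparatively nothing, as the comment above argues.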
The choice that they will prepend a description (and the format of the description) is a choice of prior.
Well, the thought experiment doesn't accomplish that. Solomonoff induction is not necessarily optimal (and most probably isn't) in your scenario, even and especially for large T. The amount of time it takes for any computable Occamian approximation of S/I to find the optimal encoding is superexponential in the length of the raw source data. So the fact that it will eventually get to a superior or near-superior encoding is little consolation, when Alpha Centauri and Sol will have long burned out before Solomonoff has converged on a solution.
The inferiority of Solomonoff (Occamian) induction -- iterating up from shorter generating algorithms until the data is matched -- is not some metaphysical or philosophical issue, but rather deals directly with the real-world time constraints that arise in practical situations.
My point is, any practical attempt to incorporate Solomonoff induction must also make use of knowledge of the data's regularity that was found some other way, making it questionable whether Solomonoff induction incorporates everything we mean by "intelligence". This incompleteness also raises the question of which this-world-specific methods we actually did use to get to our current state of knowledge -- the knowledge that makes Bayesian inference actually effective.
This seems to be a common belief. But see this discussion I had with Eliezer where I offered some good arguments and counterexamples against it.
The link goes to the middle, most relevant part of the discussion. But if you look at the top of it, I'm not arguing against the Solomonoff approach, but instead trying to find a generalization of it that makes more sense.
I've linked to that discussion several times in my comments here, but I guess many people still haven't seen it. Maybe I should make a top-level post about it?
NFL theorems are about max-entropy worlds. Solomonoff induction works on highly lawful, simplicity-biased, low-entropy worlds.
If you could actually do Solomonoff induction, you would become at least as smart as a human baby in roughly 0 seconds (some rounding error may have occurred).
The same (or a similar) point applies. If you limit yourself to the set of lawful worlds and use an Occamian prior, you will start off much worse than an algorithm that implicitly assumes a prior close to the true distribution. As Solomonoff induction works its way up through longer algorithms, it will hit some that run into an infinite loop. Even if you program a constraint that gets it past or out of these, the optimality is only present "after a long time", which, in practice, means later than we need or want the results.
What else can you tell us about the implications of being able to compute uncomputable functions?
You are arguing against a strawman: it's not obvious that there are no algorithms that approximate Solomonoff induction well enough in practical cases. Of course there are silly implementations that are way worse than magical oracles.
Right, but any such approximation works by introducing a prior about which functions it can skip over. And for such knowledge to actually speed it up, it must involve knowledge (gained separately from S/I) about the true distribution.
But at that point, you're optimizing for a narrower domain, not implementing universal intelligence. (In my naming convention, you're bringing in type 2 intelligence.)
It introduces a prior, period. Not a prior about "skipping over". Universal induction doesn't have to "run" anything in a trivial manner.
You can't "speed up" an uncomputable non-algorithm.
Okay, we're going in circles. You had just mentioned possible computable algorithms that approximate Solomonoff induction.
So, we were talking about approximating algorithms. The point I was making, in response to this argument that "well, we can have working algorithms that are close enough to S/I", was that to do so, you have to bring in knowledge of the distribution gained some other way, at which point it is no longer universal. (And, in which case talk of "speeding up" is meaningful.)
Demonstrating my point that universal intelligence has its limits and must combine with intelligence in a different sense of the term.
You introduce operations on the approximate algorithms (changing the algorithm by adding data), something absent from the original problem. What doesn't make sense is to compare "speed" of non-algorithmic specification with the speed of algorithmic approximations. And absent any approximate algorithms, it's also futile to compare their speed, much less propose mechanisms for their improvement that assume specific structure of these absent algorithms (if you are not serious about exploring the design space in this manner to obtain actual results).
What you call "the original problem" (pure Solomonoff induction) isn't. It's not a problem. It can't be done, so it's a moot point.
Sure it does. The uncomputable Solomonoff induction has a speed of zero. Non-halting approximations have a speed greater than zero. Sounds comparable to me for the purposes of this discussion.
There are approximate algorithms. Even Bayesian inference counts. And my point is that anything you add to modify Solomonoff induction to make it useful is, directly or indirectly, introducing a prior unique to the search space -- clearly showing the distinctness of type 2 intelligence.
To wrap up (as an alternative to not replying):
One pattern I have noticed: those who think the No Free Lunch theorems are interesting and important are usually the people who talk the most nonsense about them. The first thing people need to learn about those theorems is how useless and inapplicable to most of the real world they are.
So, you're disagreeing that an algorithm that is optimal, on average, over a set of randomly-selected computable environments, will perform worse in any specific environment than an algorithm optimized specifically for that environment?
Because if not, that's all I need to make my point, no matter what subtlety of NFL I'm missing. (Actually, I can probably make it with an even weaker premise, and could have gone without NFL altogether, but it grants some insight on the issue I'm trying to illuminate.)
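The weaker premise can be checked with a toy simulation (the setup and numbers are invented for illustration): over environments drawn uniformly, every binary predictor averages 50% accuracy, so being "optimal on average" buys nothing, while in a specific biased environment the tailored predictor strictly wins:

```python
import random

random.seed(0)

def accuracy(predict, p_heads, trials=100_000):
    """Fraction of correct guesses in a world where heads comes up p_heads."""
    hits = 0
    for _ in range(trials):
        outcome = random.random() < p_heads
        hits += predict() == outcome
    return hits / trials

def tailored():
    return True                   # knows this world is heads-biased

def indifferent():
    return random.random() < 0.5  # hedges equally over all possible worlds

print(accuracy(tailored, 0.8))     # ~0.8 in the heads-biased world
print(accuracy(indifferent, 0.8))  # ~0.5 in that same world
print(accuracy(tailored, 0.5))     # ~0.5 when the world really is random
```

Averaged over worlds, both predictors sit at 0.5; only environment-specific knowledge pulls one ahead in a particular world.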
The NFL deals with a space of all possible problems - while the universe typically presents embedded agents with a subset of those problems that are produced by short programs or small mechanisms. So: the NFL theorems rarely apply to the real world. In the real world, there are useful general-purpose compression algorithms.
Okay. I stated the NFL-free version of the premise I need. If you agree with that, this point is moot.
Now I know I'm definitely not using NFL, because I agree with this and it's consistent with the point in my initial post.
Yes, there are useful general-purpose programs: because researchers recognize regularities that generally appear across all types of files, which there must be because the raw data is rarely purely random. But they identify this regularity before writing the compressor, which then exploits that regularity by (basically) reserving shorter codes for the kinds of data consistent with that regularity.
Likewise, people have identified regularities specific to video files: each frame is very similar to the last. And regularities specific to picture files: each column or row is very similar to the neighboring.
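The frame-to-frame regularity can be sketched with synthetic data (the "frames" here are made up): XOR-ing each frame against the previous one leaves mostly zeros, which a generic compressor then squeezes much further than the raw frames:

```python
import zlib

# Build 50 made-up 1024-"pixel" frames where only a few pixels change
# from one frame to the next, mimicking video.
frames = []
frame = bytearray(range(256)) * 4
for i in range(50):
    frame = bytearray(frame)
    frame[i * 7 % len(frame)] ^= 0xFF  # one pixel changes per frame
    frames.append(bytes(frame))

raw = b"".join(frames)
# Delta stream: first frame verbatim, then XOR differences (mostly zeros).
deltas = frames[0] + b"".join(
    bytes(a ^ b for a, b in zip(cur, prev))
    for prev, cur in zip(frames, frames[1:]))

print(len(zlib.compress(raw, 9)), len(zlib.compress(deltas, 9)))
```

The compressor itself is general-purpose; the win comes entirely from the prior knowledge, baked into the delta step, that consecutive frames are similar.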
But what they did not do was write an unbiased, Occamian prior program that went through various files and told them what regularities existed, because finding the shortest compression is uncomputable. Rather, they imported prior knowledge of the distribution of data in certain types of files, gained through some other method (type 2 intelligence in my convention), and tailored the compression algorithm to that distribution.
No "universal, all purpose" algorithm found that knowledge.
I should probably give you some proper feedback, as well as caustic comments. The intelligence subdivision looks useful and interesting - though innate intelligence is usually referred to as being 'instinctual'.
However, I was less impressed with the idea that the concept of intelligence lies somewhere between a category error and a fallacy of compression.
Okay, thanks for the proper feedback :-)
And I may be more leaning toward the "fallacy of compression" side, I'll grant that. But I don't see how you'd disagree with it since you find the subdivision I outlined to have some potential. If people are unknowingly shifting between two very different meanings of intelligence, that certainly is a fallacy of compression.
Another point: I'm not sure your description of AIXI is particularly great. AIXI works where Solomonoff induction works. Solomonoff induction works pretty well in this world. It might not be perfect - due to reference machine issues - but it is pretty good. AIXI would work very badly in worlds where Solomonoff induction was a misleading guide to its sense data. Its performance in this world doesn't suffer through trying to deal with those worlds - since in those worlds it would be screwed.
Well, actually you're highlighting the issue I raised in my first post: computable approximations of Solomonoff induction work pretty well ... when fed useful priors! But those priors come from a lot of implicit knowledge about the world that skips over an exponentially large number of shorter hypotheses by the time you get to applying it to any specific problem.
AIXI (and computable approximations), starting from a purely Occamian prior, is stuck iterating through lots of generating functions before it gets to the right one -- infeasibly long. To speed it up you have to feed it knowledge you gained elsewhere (and, of course, find a way to represent that knowledge). But at that point, your prior includes a lot more than a penalty for length!
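A back-of-the-envelope sketch of that blowup (the evaluation rate is a generous, hypothetical assumption):

```python
# Enumerating hypotheses in length order means roughly 2**n binary programs
# of length n to get through. Assume (generously, hypothetically) a billion
# hypothesis evaluations per second:
EVALS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (30, 60, 90):
    years = 2**bits / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit hypotheses: {2**bits:.2e} candidates, {years:.2e} years")
```

Thirty-bit hypotheses take about a second; ninety-bit ones take longer than the age of the universe, which is why skipping over regions of the hypothesis space matters so much.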
So, I'm reading A Fire Upon The Deep. It features books that instruct you how to speedrun your technological progress all the way from sticks and stones to interstellar space flight. Does anything like that exist in reality? If not, it's high time we start a project to make one.
Edit (10 October 2009): This is encouraging.
A lot of stick and stones civilizations that can read, are there?
Agree that it is a cool idea though, does Vinge give more details?
It strikes me that the most crucial aspects of such a book would probably be mechanical engineering (wheels, mills, ship construction, levers and pulleys) and chemical identification (where to find and how to identify lodestones, peat, saltpeter, tungsten), things no one here is going to have much experience with.
What I'd like to know is what the ideal order of scientific discoveries would be. Like, what would have been possible earlier in retrospect? What later inventions could have been invented earlier and sped up subsequent innovation the most? Could you teach a sticks-and-stones civilization calculus? What is the earliest you could build a computer? Many countries skipped building landline phone infrastructure and went straight to cellular. What technologies were necessary intermediate steps, and which could be skipped?
Any hypotheses for these questions?
Not yet.
Basic electrics are possible as soon as you have decent metalworking. Dynamos are just a bunch of spools of copper wires and magnets. Add some graphite, and you have telephones. ~~Greeks could have made them.~~
Well, did the Greeks have the ability to make decent enough wire in sufficient quantities?
I don't know, but even if they could do it, they had no reason to. So we can't really tell.
The real question is - if they really really wanted to and had a book of helpful tips, could they have made decent enough wire? (And could they get copper in sufficient quantities? By Roman times they certainly could.)
Could they make it thin enough (even with insulation) to be able to fit large amounts of windings?
I.e., assuming they had reason to try, could they do it based on what we know of their capabilities at the time?
Incidentally, a radio would be much cheaper to make and almost certainly within their capabilities.
Wire making is easy if you have copper. The real problem is insulating the wire, especially with something flexible enough for winding coils. This is part of the problem of infrastructure - and very few people know enough to really even start working on a serious rebuilding problem, for example after a dinosaur killer impact. I know more than anyone else I have ever met, especially in the areas of food (agriculture and cooking) and shelter (designing, concrete, masonry, carpentry, plumbing, wiring, etc), and even I barely know enough to get started. For example, I don't know of any way to make insulation for wires without an already existing chemical industry, except natural rubber, which would most likely not be available.
Wind each layer sparsely so that wires don't touch and pack insulator (dried leaves) between the layers. Makes for a woefully inefficient spool, but still.
Gotta try this out with scrap metal.
That would probably work; the only problem is that you would have to know in advance what you were doing. This isn't something that would be tried by an experimenter trying to figure things out, for example.
A printing press should be easier to make...
In the book it's chemicals (gunpowder) and radios. The application of radios by Vinge's version of non-anthropomorphic intelligences is especially interesting.
What about a "Mote In God's Eye" -style technology bunker? Would having a set of raw materials, instructions, and tomes of information be the ideal setup? Perhaps something along the lines of the Svalbard Seed Vault. What are the most useful artifacts that can survive A) the catastrophe and B) the length of time it takes for the artifacts to be recovered? Such a timeframe could be short or many, many generations long (even geologic time?). Do we want this to potentially survive until the next intelligent being evolves, in the case of total destruction of mankind? What sealing mechanism would still be noticeable and breach-able by a low-tech civilization?
Or do we want to assume there is NO remaining technology and we're attempting to bootstrap from pure knowledge? Either way, I think it would be an interesting problem to solve.
http://www.kk.org/thetechnium/archives/2006/02/the_forever_boo.php ?
What for? There aren't any stick-and-stones cultures around.
Do you assign significant probability to the need for such a book in humanity's future? I don't. It would require that:
But also that:
We can imagine a handbook that is written to be useful for a broad spectrum of possible disastrous situations.
The handbook could be written for post-disaster survivors finding themselves in many possible situations. For example, your first bullet "No technological human societies survive" could be expanded to "(No|Few|Distant|Hostile) technological human societies survive". Indeed, uncertainty about which of the aforementioned possibilities actually hold might be quite probable, given both a civilization-destroying disaster and some survivors.
To some extent, the Long Now's Rosetta project (to build sturdy discs inscribed with examples of many languages) is an example of this sort of handbook.
http://rosettaproject.org/
I agree a knowledge repository would be very useful for survivors right after the disaster. But I don't think any scenario is probable that involves a society with a reasonably stable level of technology and food production existing and profiting from such a book.
BTW, the Rosetta project seems to be purely about describing languages so future people can understand them.
If a few distant technological societies survive, even just one with some reasonable shipping & industry, then I expect they will quickly establish contact with most of the world, if only to exploit natural resources & farming. Most or all tech. economies today rely on many imports of minerals, food, etc. And knowledge and technology would be dispersed quicker with the assistance of this society than by means of such a book.
If a 'hostile' society survives - well, hostile towards whom? Towards all other, non-high-tech survivors? I don't see this as the default attitude of a surviving society that's the most powerful country left on Earth, so without knowing more I hesitate to try to empower whoever they're hostile towards. What did you have in mind here?
Your first point is that the handbook is not likely to be useful for the purpose of helping reconstruction after a disaster, because the chance of a disaster being total enough to destroy technology, but not total enough to destroy humanity, is small. I agree completely - you have a very strong argument there.
However, you go on to argue that IF a technology-destroying-humanity-sparing disaster occurred, THEN technological societies would quickly establish contact, disperse knowledge, et cetera. In this after-the-disaster reasoning, you're using our present notions of what is likely and unlikely to happen.
Reasoning like this beyond the very very unlikely occurrence seems fraught with danger. In order for such an unlikely occurrence to occur, we must have something significantly wrong with our current understanding of the world. If something like that happened, we would revise our understanding, not continue to use it. Anyone writing the handbook would have to plan for a wild array of possibilities.
Instead of focusing on the fact that the handbook is not likely to be used for its intended purpose, consider:
If we assume that there is "something significantly wrong with our current understanding of the world" but don't know anything more specific, we can't come to any useful conclusions. There's a huge number of things we could do that we think aren't likely to be useful but where we might be wrong.
So is writing this book something we should do (as the original comment seemed to suggest)? No. But I agree it's something we could do, is very unlikely to be harmful, and is neat and fun into the bargain.
With that said, I'm going back to working on my cool, neat, fun, non-humanity-saving project :-)
There's a huge difference between having the raw knowledge available and simple step-by-step instructions.
A book created for this express purpose would be an order of magnitude more useful than any number of encyclopedias or even entire libraries. A big challenge would be even knowing what to research--if you don't have the next technology, you may not even know what it will be.
The biggest obstacle is really distribution. What you'd need is a government, church, or NGO to put a copy in every branch or something.
Maybe you could donate a copy to every prison library. Prisons would actually be a really defensible location to stay post-societal collapse . . .
I see scenarios like the following as not impossible.
90% of the human population dies from a plague/meteor along with the knowledge/sufficient numbers to maintain things like power plants, steel mills and the trappings of modern life. Those people that are left with the knowledge have to spend all their time subsistence farming just to survive.
A few generations later, when the population has increased a bit and subsistence farming has improved in yield due to experience, people want to recreate technology with just the knowledge passed down by word of mouth.
Just saying "black swan" isn't enough to give higher probability. If you think I can't assign any meaningful probability at all to this scenario, why?
I don't believe anyone can assign meaningful very small or very large probabilities in most situations. It is one of my long-running disagreements with people here and on OB.
There are indeed many known human biases of this kind, plus general inability to predict small differences in probability.
But we can't treat every low probability scenario as being e.g. of p=0.1 or some other constant! What do you suggest then?
I don't know of a unified way of handling extremely small risks, but there are two things that can be helpful. The first, as suggested by Marc Stiegler in "David's Sling", is simply to recognize explicitly that they are possible; that way, if they do occur, you can get on with dealing with the problem without also having to fight disbelief that it could have happened at all. Second, different people have different perspectives and interests and will treat different low-probability events differently; this sort of dispersion of views and preparation will help ensure that someone is at least somewhat prepared. As I said, neither of these is really enough, but I simply can't see any better options.
You have to assign probabilities anyway. See the amended article:
That's meaningless. You can't assign a value in dollars to the continued existence of our civilization. Dollars are only useful for pricing things inside that civilization. (Some people argue for using utilons to price the civilization's existence.)
The amount you're willing to pay is a fact about you, not about the book's usefulness. You're saying you estimate its probability of usefulness at 10^-14. But why?
Clearly the market for civilization creation books is efficient.
Nice point. Maybe we should instead talk about scenarios where humanity (including us) no longer suffers aging but a collapse still occurs.
Incidentally, I wonder what the market price for writing a civilization-destroying book might be?
Actually, all you would need for serious problems is that none of the relatively few people who know the essential details of a critical piece of support technology survive, or at least that none survive in your group or anywhere you have access to. And since you can't know ahead of time what bits of information you might lose, having references to everything possible only makes good sense, especially given how relatively inexpensive references are now. Cheap insurance against a very unlikely result (of course, they can also be helpful day-to-day too).
There's a mixup of two different scenarios here.
What you seem to be talking about is a group of people a few years to a few decades post collapse, who want to operate or rebuild preexisting tech and need a reference work. If they had a copy of wikipedia plus a good technical & reference library, it would probably answer most of their needs. A special book isn't essential.
What I was talking about is a group of people completely lacking pre-collapse knowledge and experience. You can't give them instructions for building a radio because they tend to ask questions like "what's a screwdriver?" and "how can I avoid being burnt as a witch?" That's what a real stones-and-sticks to high-tech guide book needs to address.
You might think of "my book" as a subset of yours. My book would be more likely to be useful (though hopefully not) and could be expanded to add the material necessary for yours. And your book would be a library in itself, there is no possible way that such a "book" would not span many volumes.
A single long "book" would have high quality cross links, well ordered reading sequences, a uniform style, no internal contradictions, etc. In that sense it's a book as opposed to a library collection.
This reminds me of an episode of Mythbusters where the crew set up a bunch of MacGyver puzzles for the two hosts - pick a lock with a lightbulb filament, develop film with common household chemicals, and signal a helicopter with a tent and camping supplies.
In all seriousness though, Philosophical Materialism and the Scientific Method are probably the most important things; three years ago I bought my first car for a pack of cigarettes, and a $20 Haynes manual. At the time I didn't even know what an alternator was; three months later I'd diagnosed a major electrical problem, and performed an engine swap. The manual helped (obviously), but for the most part it was the knowledge that any mechanical device could be reduced to simple causal patterns which allowed me to do this (incidentally, this is a hobby that I strongly recommend to other LW members - you get to put the scientific method into practice in a hands-on manner, and at the end of it you get a car which is slightly less crappy).
I tend to think that the mere knowledge that flying machines are possible will allow the survivors of WWIII to redevelop the prewar tech within a century.
Does the same principle apply to motorcycle maintenance? :-)
A book I was reading, which suggested doing your own minor auto repairs, warned strongly against doing motorcycle repairs on anything after the late 1970s. The author claimed that newer cycles were so tightly integrated, and the tools for working on them so specialized, that you were too likely to get something taken apart that you literally could not reassemble.
I'd say that's true for modern supersports and superbikes, but a beginner bike like a Kawasaki Ninja EX-250 has very little in the way of electrics or other tightly-integrated mechanisms. Just as an anecdote: I do regular maintenance on my 2006 SV-650/S, but anything more complicated than oil changes on my 1972 Honda CB350 is done by a mechanic. While newer bikes have complicated parts like ECUs and fuel injection, those are usually the most reliable parts. Repairing older motorcycles typically involves scrounging e-bay for parts that are no longer manufactured.
The thing I like most about motorcycles is that they are simple, so it's pretty easy to diagnose any problems. It only takes a minute to tell if you're running lean or rich. Simply starting, hearing, and smelling an engine can tell you whether you just need new piston rings or if you've damaged the crankshaft journal bearings.
If you really want the most mechanically simple vehicle, I'd suggest an old scooter such as a Honda Cub. The set of failure modes for an air-cooled single-cylinder engine is quite small.
I tried this with one of my first cars back in the early 90s. It turns out that there are a very large number of things that can go wrong with essentially every step of repairing a car, and I didn't have the money or time to continue replacing parts I'd destroyed or troubleshooting problems I'd caused while trying to fix another problem.
I like programming because it has the same features of tracking down problems, but almost entirely without the autocommit feature of physical reality, as long as you choose to back up and test.
Also, even in the 90s, a computer was far cheaper than a good set of tools.
http://www.amazon.com/Caveman-Chemist-Circumstances-Achievements-Publication/dp/0841217874
http://www.amazon.com/Caveman-Chemistry-Projects-Creation-Production/dp/1581125666/ref=pd_bxgy_b_img_b
There's a time-traveler's cheat sheet that covers a lot of the basics. (Credit goes to Ryan North.)
Open threads should not be promoted, because.
Promoted articles as they are also serve a purpose: they screen low-value articles from a "feed for a busy reader". What you describe is also a good suggestion, but instead of redefining "promoted", a better way to implement it is to add a subcategory of promoted self-sufficient entry-level articles, and place them on the front page.
I would like to throw out some suggested reading: John Barnes's Thousand Cultures and Meme Wars series. The former deals with the social consequences of smarter-than-human AI, uploading, and what sorts of pills we ought to want to take. The latter deals with nonhuman, non-friendly FOOMs. Both are very good, smart science fiction quite apart from having themes often discussed here.
I have read "A Million Open Doors" and "A World Made of Glass" and don't remember ANY AI at all in them. And only limited uploading. And are there any Meme Wars novels other than "Kaleidoscope Century" and "Candle"? They were decent but not great stories, but the "memetic virus" background required a serious "suspension of disbelief". Barnes's least unrealistic uploading and FOOM novel was the space-farers in "Mother of Storms".
I'll make my more wrong confession here in this thread: I'm a many-worlds skeptic. Or at least I'm deeply skeptical of Egan's law. I won't pretend I'm arguing from any sort of deep QM understanding; I just mean in my sci-fi, what-if thinking about what the implications would be. I truly believe there would be more wacky outcomes in an MWI setting than we see. And I don't mean violations of physical laws; I'm hung up on having to give up the idea of cause and effect in psychology. In MWI, I don't see how it's possible to think there would be cause and effect behind conversations, personal identity, etc. Literally every word, every vocalization, is determined solely by quantum interactions, unless I'm deeply misunderstanding something. This goes against the determinism I hold to be true. I don't see how my next words won't be French, Arabic, Klingon, etc., and I don't see how what I consider to be normal isn't vanishingly unlikely to continue for an indefinite period of time.
I'll admit that work's been busy, so I haven't worked through EY's latest posts; if there's been some resolution of this in the anthropic threads, I'd appreciate a quick summary. Sorry if this is more of a question than an answer; it's for that reason that I second a forum. I like blogs for articles, but they don't work for discussion as well as forums do, and forums better allow people to post questions.
This is a confusion about free will, not many-worlds.
I would describe my view on the free will question as basically being Dennett's in Elbow Room and Freedom Evolves. But that seems to be confounded by what I expect to be the utter randomness that would emerge from the MWI. I don't worry about having free will; I am concerned about having some sort of causal chain in my actions. I don't disavow that I'm confused, I just don't think I'm confused over free will.
There is a deep similarity that I expected to carry over: in both cases you have some subjective feeling, and in both cases the nature of the physical substrate in which you exist doesn't matter in the slightest for the explanation of why you have that feeling. The feeling has a cognitive explanation that screens off the physical explanation. Thus, you can be confused about the physical explanation, but not confused about your question, since you have a cognitive explanation.
I'm not sure I quite follow. So I have the feeling of confusion, which I attribute to not understanding the ramifications of the physical explanation of quantum effects that the MWI provides. What's the cognitive explanation for this?
I don't quite understand what you're confused about. Why would MWI make you start talking in anything but English?
If you flip a hypothetical fair random coin 1000 times, you'll almost certainly get something around 500 heads and 500 tails. Getting anything like 995 heads would be rare.
The coin can be entirely nondeterministic in how it flips, and still be reliable in this regard.
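The coin-flip arithmetic above is easy to check directly. Here is a minimal sketch; the ±20 window and the exact counts are my own illustrative choices, not from the comment:

```python
from math import comb

def prob_heads(n, k):
    """Probability of exactly k heads in n fair coin flips."""
    return comb(n, k) / 2**n

# Probability of landing within 20 heads of the expected 500
near_500 = sum(prob_heads(1000, k) for k in range(480, 521))

# Probability of exactly 995 heads
exactly_995 = prob_heads(1000, 995)

print(near_500)     # roughly 0.8: outcomes cluster tightly around 500
print(exactly_995)  # astronomically small, far below 1e-280
```

So even a fully nondeterministic coin gives extremely reliable aggregate behavior, which is the point being made.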
Well, there's no physical limitation against me speaking something other than my birth language. Using the coin analogy, my tongue position, lip position, and airflow out of my throat are the variables. Those variables, across all distributions, can produce any human word. Across infinity, there will be worlds where I'm speaking my birth language for my next statement, and others where I'm not. MWI seems to me to eliminate the prior state from having an influence on the next state of my language machine. If all probabilities do occur in MWI, I see the probability of me continuing to speak English as the 995-heads case (which is still possible, I just see it as unlikely). I don't think MWI "makes" me do anything, I just think the implication is that all possible worlds become reality. It really comes down to the prior state's apparent lack of influence; that's what confuses me. Once that's gone, I just see causality in human actions going out the window.
You're confused about probability, causality in QM, and anthropics. (Note in particular that your objection can't be particular to MWI, since even in a collapse theory, the wacky things could happen).
The current state of your brain corresponds to a particular (small neighborhood of) configurations, and most of the wavefunction-mass that is in this neighborhood flows to a relatively small subset of configurations (i.e. ones where your next sentence is in English, or gibberish, rather than in perfect Klingon); this, precisely, is what causality actually means.
Yes, there is some probability that quantum fluctuations will cause your throat cells to enunciate a Klingon speech, without being prompted by a patterned command from your brain. But that probability is on the order of 10^{-100} at most.
And there is some probability, given the structure of your brain, that your nerves would send precisely the commands to make that happen; but given that you don't actually know the Klingon speech, that probability too is on the order of 10^{-100}.
The upshot of MWI in this regard is that very few of your future selves will see wacky incredibly-improbably-ordered events happen, and so you recover your intuition that you will not, in fact, see wacky things. It's just that an infinitesimal fraction of your future selves will be surprised.
Thanks, this really helps to clarify the picture for me.
Your claim is that MWI predicts things we don't see. If this is true then it is a really big deal- you'd be able to show that MWI was not just falsifiable (which is still a contentious issue) but already falsified. Suffice to say someone would have noticed this.
Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens. Unfortunately, this does suggest you have to give up your notion of robust, metaphysical causation since (1) shit ain't determined and (2) there are no objects (the usual units of causation) just overlapping fields. There are some efforts to maintain serious causal stories despite this but since no one really knew what was meant by causation before quantum mechanics this doesn't seem like that big a loss.
In any case, these sacrifices are purely philosophical, MWI changes nothing about what experiences you should expect (except possibly in regards to anthropic issues) and makes no new predictions about run of the mill everyday physics.
Hi Jack,
"Anyway it is true that MWI does entail that there is some non-zero possibility that your next words will be in Klingon. But the possibility is so small that the universe is likely to end many, many times over before it ever happens."
This all could just be an issue of me being massively off on the probabilities, but aren't there a greater number of possibilities that my next words will not be in English than in English, and therefore a greater probability that what I say would not be in English? And in this particular example, there are a number of universes that have branched off in which I spoke Klingon. I'm not understanding the limitation that would demonstrate that there are more universes where I spoke English instead (i.e., why would there be a bell-curve distribution with English sentences being the most frequent outcome?)
And I do want to more clearly re-iterate that I'm not talking about Everett's formal proof, but the purely philosophical ramifications you mention (and also, I haven't got some earth shattering thesis waiting in the wings, I'm just describing my confusion). QM is fact, and MWI is a way of interpreting it. For whatever reason, I'm interested in that interpretation. So chalk it up to me thinking through a dumb question. I don't believe I've falsified a mainstream QM theory. I do feel I've demonstrated to my satisfaction that I don't fully understand the metaphysical implications of MWI. It sounds easier to just chalk it up to "it's the equations", but I do find the potential implications interesting.
No. QM says that at time t every subatomic particle in your brain has a superposition: a field which gives the probability that the particle will be found at each location. There is no end to the field, but only a very small area will have non-negligible probability. Now scale up to the atomic level. Atoms similarly have superpositions, dictated by the superpositions of the subatomic particles which make up the atom. You can keep scaling up. The larger the scale, the lower the chances of anything crazy happening, because for an entire atom to be discovered on the other side of the room, every particle it is made up of would have to have tunneled ten feet at the same time to the same place. This is true for the molecules that make up the entire brain mass. Whatever molecular/brain-structural conditions make you an English speaker at time t are very likely to remain in place at time t2, since their superposition is just a composite of the superpositions of their parts (well, not really; my understanding is that it is way more complicated than that, but suffice to say that the chances of many particles being discovered away from the peaks of their wavefunctions are much lower than the chance of finding a single electron outside the peak of its wavefunction).
For our purposes many worlds just says all of the possible outcomes happen. The chances you should assign to experiencing any one of these possibilities are just the chances you should assign to finding yourself in the world in which that possibility happens. Since in nearly all Everett branches you will still be speaking English (nearly all of the particles will have remained in approximately the same place) you should predict that you will never experience un mundo donde personas hablan espanol sin razones!
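The scaling point in this explanation can be sketched numerically; all numbers below are invented for illustration, not physical figures:

```python
# Toy model: if a single particle has a tiny chance p of being found
# far from the peak of its wavefunction, the chance that N independent
# particles all do so simultaneously is p**N, which collapses rapidly.
# The value of p here is a made-up placeholder.

p_single = 1e-10

for n in (1, 5, 30):
    print(n, p_single ** n)
# Even at N = 30 the joint probability is around 1e-300; a macroscopic
# object has on the order of 1e23 particles, so large-scale "crazy"
# events are suppressed beyond any meaningful probability.
```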
Heh. Right now, I'm pretty sure the QM does preclude robust, folk understandings of causation. But tell me, what is it that causation gives you that you want so badly?
Thanks, again, this is the type of explanation that helps me to much better understand the possibilities MWI was addressing. And causation just gives me the reasonable expectation that physics models and biology theories do adequately model our world, without worrying about spooky action throwing too big of a monkey wrench into things.
Sure. And don't worry about causation, you can inference and make predictions just fine without it.
Not all "possibilities", as you describe them, are equally likely. If I enter 2+2 into my calculator, and MWI is correct, there would be some worlds in which some transistors don't behave normally (because of thermal noise, cosmic rays, or whatever), bits flip themselves, and the calculator ends up displaying some number that isn't "4". The calculator can display lots of different numbers, and 4 is only one of them, but in order for any other number to appear, something weird had to have happened - and by weird, I mean "eggs unscrambling themselves" kind of weird. (Transistors are much smaller than chicken eggs, so flipped bits in a calculator are more like a microscopic egg unscrambling itself, but you get the idea.)
MWI basically says that, yes, someone will win the quantum lottery, but it won't be you.
This and the other probability discussions above have greatly helped me to understand what MWI was getting at. I wasn't fully grasping what the limitations were, that MWI wasn't describing limitless possibilities happening infinitely.
It's true that MWI doesn't absolutely rule out the possibility that your next words might be in another language, but neither does any other QM interpretation. They all predict just the amount of wackiness that we see.
The other interpretations allow for the possibility, but MWI seems to argue for it to definitely occur, in some universe branch.
I think it's the "wacky but not TOO wacky" world that I find pretty fascinating in QM. I just haven't seen a description that just seemed to nail it for me. Obviously, YMMV.
What's the best way to follow the new comments on a thread you've already read through? How do you keep up with which ones are new? It'd be nice if there were a non-threaded view. RSS feed?
Scanning through the new comments page is probably your best bet, though I wish there was a better solution.
XKCD visits human enhancement.
Is there a complete guide anywhere to comment/post formatting? If so, it should probably be linked on the "About" page or something. I can't figure out how to do HTML entities; is that possible?
There is a comment formatting page on the Wiki. The syntax description says that you can just write HTML entities in the comments directly, but apparently it doesn't work here: ©
On the other hand, simple copy-paste from an entity list page works: ©
A link you might find interesting:
The Neural Correlates of Religious and Nonreligious Belief
Summary:
Religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks, scientists at UCLA and other universities have found. They used fMRI to measure signal changes in the brains of committed Christians and nonbelievers as they evaluated the truth and falsity of religious and nonreligious propositions. For both groups, belief (judgments of "true" vs "false") was associated with greater signal in the ventromedial prefrontal cortex, an area important for self-representation, emotional associations, reward, and goal-driven behavior. "While religious and nonreligious thinking differentially engage broad regions of the frontal, parietal, and medial temporal lobes, the difference between belief and disbelief appears to be content-independent," the study concluded. "Our study compares religious thinking with ordinary cognition and, as such, constitutes a step toward developing a neuropsychology of religion. However, these findings may also further our understanding of how the brain accepts statements of all kinds to be valid descriptions of the world."
My thought of the day: An 'Infinite Improbability Drive' is slightly less implausible than a faster than light engine.
Eliezer and Robin argue passionately for cryonics. Whatever you might think of the chances of some future civilization having the technical ability, the wealth, and the desire to revive each of us -- and how that compares to the current cost of signing up -- one thing that needs to be considered is whether your head will actually make it to that future time.
Ted Williams seems to be having a tough time of it.
I'm not sure what to think of Larry Johnson. Some of his claims are normal parts of Alcor's cryopreservation process, but dressed up to sound bad to the layperson. Other parts just seem so outrageous. A monkey wrench? An empty tuna can? Really? He claims that conditions were terrible, which is also unlikely. Alcor is a business and gets inspected by OSHA, the fire department, etc. They even offer free tours to the public. If conditions were so terrible, you'd think they'd have some environmental or safety violations. At the very least, some people who toured the facility would speak up.
The article also claims that Ted Williams was cryopreserved against his will, which is almost certainly not true. Alcor requires that you sign and notarize a last will and testament with two witnesses who are not relatives.
Mind-killer warning.
What is the opinion of everyone here on this? It's an essay of sorts (adapted from a speech) making a case for a guaranteed minimum income.
Awesome link, thanks! I'm not sure about a GMI in the form of money per se, but if there's a way to make it represent (as he suggests) "real wealth", instead of a potentially slow-to-adjust numerical value, then it could work.
There's a difference between activities that are inherently desirable to do, just because they are fun/interesting/challenging, and activities that people can become accustomed to and eventually even like. I imagine farming is one of the latter. While I can envision a good deal of farmers continuing on farming without the economic incentive to do so, I doubt the replacement rate would be high enough to continue feeding the world.
I also imagine that, even if you abolish money, people would just recreate it, or at least an elaborate bartering system. I know I would personally. Note that there would be just as much desire from the 'consumer' as the 'producer' to recreate currency. Consider, for example, a hypothetical bridge building group, that just likes going around and building bridges for the sake of it. They're the best, and are in high demand. The group is happy to just build bridges as they work their way across the country, until suddenly a city not on their short list contacts them saying, "We desperately need a bridge! We'll do anything! You could live like kings here for months if you just build us a bridge!" It's one thing to want to do something for the joy of it, without remuneration, it's entirely another to actively reject payment. Thus, the cycle starts over again.
The author addresses this. He's not particularly opposed to paying people to do things; he's opposed to people having to do paid work or starve. The existence of a GMI should make people less willing to do unpleasant jobs for relatively low wages, effectively reducing the supply of unskilled labor. If you can't automate away a job that most people don't like doing, then just pay people the new, higher market rate.
I'm in favor of providing food and health care to anyone that needs it. However, a GMI that rivals minimum wage would probably have much larger consequences, which I'm not convinced anyone could predict.
Bayesian reasoning spotted in the wild at Language Log
More specifically, the Kullback-Leibler divergence, which is even awesomer.
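For anyone who hasn't met it, the Kullback-Leibler divergence measures the extra information cost of modeling a "true" distribution P with some other distribution Q. A minimal sketch, with two distributions invented purely for illustration:

```python
from math import log2

def kl_divergence(p, q):
    """D_KL(P || Q) in bits, for discrete distributions given as lists.

    Assumes q is strictly positive wherever p is.
    """
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]  # hypothetical "true" distribution
q = [1/3, 1/3, 1/3]    # uniform model of it

print(kl_divergence(p, p))  # 0.0: no cost for a perfect model
print(kl_divergence(p, q))  # positive (~0.085 bits): the model's overhead
```

Note that the divergence is asymmetric, which is part of what makes it a natural fit for the "surprisal of a model" reading in the linked post.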
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion on what constitutes a correct decision. I see lots of contributors talking about calculating utilons, so that demonstrates that most contributors are hedonistic consequentialist utilitarians.
Am I correct then to assume that the implicit goal of the AI for the majority in the community is to aid in the maximization of human happiness?
If so I think there are serious problems that would be encountered and I think that the goal of maximizing happiness would not be accomplished.
"Utilons" are a stand-in for "whatever it is you actually value". The psychological state of happiness is one that people value, but not the only thing. So, yes, we tend to support decision making based on consequentialist utilitarianism, but not hedonistic consequentialist utilitarianism.
See also: Coherent Extrapolated Volition
Of course - which makes them useless as a metric.
Since you seem to speak for everyone in this category - how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.
Upon reading that link (which I imagine is now fairly outdated?), I think his theory falls apart under the weight of its coercive nature, as the questioner points out.
It is understood that the impact of an AI will be on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link talks about, which implies that the decisions the AI would make would favor a "utility" calculation (spare me the argument about utilons; as an economist I have previously been neck deep in Bentham).
The discussion at the same time dismisses and reinforces the importance of the debate itself, which seems contrary. I personally think this is a much more important topic than is thought and I have yet to see a compelling argument otherwise.
From the people (researchers) I have talked to about this specifically, the responses I have gotten are: "I'm not interested in that, I want to know how intelligence works" or "I just want to make it work, I'm interested in the science behind it." And I think this attitude is pervasive. It is ignoring the subject.
The topic of what the goals of the AI should be has been discussed an awful lot.
I think the combination of moral philosopher and machine intelligence expert must be appealing to some types of personality.
Maybe I'm just dense but I have been around a while and searched, yet I haven't stumbled upon a top level post or anything of the like here on the FHI, SIAI (other than ramblings about what AI could theoretically give us) OB or otherwise which either breaks it down or gives a general consensus.
Can you point me to where you are talking about?
Probably the median of such discussions was on http://www.sl4.org/
Machines will probably do what they are told to do - and what they are told to do will probably depend a lot on who owns them and on who built them. Apart from that, I am not sure there is much of a consensus.
We have some books on the topic:
Moral Machines: Teaching Robots Right from Wrong - Wendell Wallach
Beyond AI: Creating The Conscience Of The Machine - J. Storrs Hall
...and probably hundreds of threads - perhaps search for "friendly" or "volition".
Mini heuristic that seems useful but not big enough for a post.
To combat ingroup bias: before deciding which experts to believe, first mentally sort the list of experts by topical qualifications. Allow autodidact skills to count if they have been recognized by peers (publication, citing, collaboration, etc).
I've only been reading Open Threads recently, so forgive me if it's been discussed before.
A band called The Protomen just recently came out with their second rock opera of a planned trilogy of rock operas based on (and we're talking based on) the Megaman video game. The first is The Protomen: Hope Rides Alone, the second one is Act II: The Father of Death.
The first album tells the story of a people who have given up and focuses on the idea of heroism. The second album is more about creation of the robots and the moral struggles that occur. I suggest you start with: The Good Doctor http://www.youtube.com/watch?v=HP2NePWJ2pQ