lukeprog comments on The Level Above Mine - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (387)
For the record: I, too, want an FAI team in which Eliezer isn't the only one with Eliezer-level philosophical ability or better. This is tougher than "merely" finding 1-in-10-million math talents, but still do-able.
What am I doing about it? I wrote a post encouraging a specific kind of philosophical education that I think will be more likely to produce Eliezer-level philosophers than a "normal" philosophical education (or even a CMU or UPitts one). When Louie came up with the idea to write a list of Course recommendations for Friendliness researchers, I encouraged it. Also, one of the reasons I ended up supporting the plan to launch CFAR in 2012 was its potential not only to make people more effective at achieving their goals, but also to discover ways of making some people better philosophers (see my last paragraph here). And there's more, but I can't talk about it yet.
Also, as Eliezer said, Paul Christiano's existence is encouraging.
What about Kawoomba's existence? :-(
CFAR and related are good efforts at raising the sanity waterline (which is an average), not so much for identifying the extreme outliers that could Alan-Turing their way towards an FAI. Those will make waves on their own.
Such grassroots organisations may be good ways of capturing the attention of a wider audience, although second to publishing in the field / personally building a network at conferences.
The time horizon for taking a few hundred self-selected, college-aged students and trying to grow one of them into a seminal figure of extraordinary capabilities seems prohibitive, especially when there are already exceedingly capable people at Stanford et al. who already bring the oomph and merely lack the FAI-motivation.
Can you name some older academics that have the requisite philosophical skill? (And if your first line isn't a joke, perhaps you can link me to some of your own philosophical works?)
Sipser and Russell & Norvig are core parts of your proposed philosophical curriculum; Louie's course recommendations read like my former grad CS reading list.
It follows that, say, many who hold or are pursuing a PhD in Machine Learning or related fields have also picked up a majority of the philosophical skills you recommend.
I'm not postulating that Bayesian superstars also make the best drummers and fencing masters, but between your analytical CS-style philosophy and Machine Learning groups there is a cross-domain synergy effect that comes with the clarity of designing minds - or advanced algorithms.
(As for myself, the first line was meant as a joke - alas! How sad!)
No, I wouldn't say that. The problem is that we (humans) don't know how to teach the philosophical skill I'm talking about, so there aren't classes on it, so I can only recommend courses on "the basics" or "prerequisites." I don't know how to turn a math/CS PhD under Stuart Russell into the next Eliezer Yudkowsky.
I suspect you and Luke do not share a referent for "better philosophy" here. In particular, I doubt either Luke or Eliezer would agree that the ability to write clearly, or to analyze and formulate arguments for purposes of compellingly engaging with existing arguments in the tradition of analytic philosophy, is the rare skill that Luke is talking about.
Trying to have a conversation about how hard it is to find an X without common referents for X is not likely to lead anywhere productive.
You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.
It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.
Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?
Downvoted because:
Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.
[Edited for formatting.]
Can you qualify that by describing your experience with philosophers? E.g., "There are very few philosophers at EY's level, and I've met the John Conway of philosophy." Whoever the John Conway of philosophy turns out to be.
"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we draw a random number between 1 and a million; the prior for any particular number between 1 and a million is straightforwardly very low - one in a million, of course. Now, I have just generated the number 493250 using random.org. Surely this prior of 1 in a million that I have generated any specific number like 493250 is low enough to not be overcome by you being impressed by looking at this comment and see '493250' in it? The prior for you having very special powers of perception of the right number is likewise proportionally low to how very special it is, and so on."
"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."
Yes. I agree. Some of these self-proclaimed Bayesians cannot even fully specify their examples, prove their arguments, or explain the crucial part of what they were probably arguing.
"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."
So, putting the analogy into reverse, the top post is wrong. You can judge N levels above your own.
I was making the point that Dmytry's claim was flawed in 2 separate ways; 'you can judge N levels above your own' is closer to the point of the random.org example than the DC example. (The DC example was more about neither DC nor EY being a random selection, not the strength of personal judgment.)
When did I claim no one at SI held your views? That would've been hard since you refused to use standard terminology like SIA or SSA which I could then go 'ah yes, that's Bostrom's current view'.
No, that's not the analogy. The analogy is that there are at least two ways in which we are long past a prior of 1 in a million and don't have judgments equivalent to random choice, and the examples illustrated them: the first is one's own ability to recognize some level of quality in a philosopher, and the second is looking at a non-random selection at the end of a process with some selection for quality.
Even a small correlation is enough to move the needle.
So you cite, in a statistical claim throwing around numbers like 1 in a million, a single example? And I wonder how many people really consider Jesus a philosopher, as opposed to an excuse like GWB to signal their religion and cover up that they don't actually have any preferences as to secular philosophers...
But yeah, popularity is a meaningful index! Go down the list of great philosophers and you'll find they are popular and even appear in pop culture; Zeno, Plato, Socrates, Aristotle, Confucius, Descartes, Nietzsche, Russell, Wittgenstein to name a few off the top of my head are all widely read by laymen and appear in popular culture, and were often world-famous in their own lifetime. Of course it's not a perfect correlation - not all great philosophers will find popularity after their death among non-philosophers (Plotinus or Spinoza or Hume may have been the greatest philosophers of their time but only philosophers read them these days) - but think of how many minor or poor philosophers from those respective time periods remain obscure... Very few of them succeed like Ayn Rand in being a poor philosopher and also popular.
When did I claim no one at SI held your views on anthropics? And I really don't think anthropics could be called straightforward by anyone.
Congratulations, you understood the point. Similarly, decent arguments are highly diagnostic of philosophical ability because most people couldn't make even a half-assed argument if they sat down and spent all day at it; by LW standards, most philosophy grads can't find their asses either, and that's a very selective filter as well (philosophy majors are the highest-scoring group on the GRE for both the verbal and writing sections, and rank around 4th or 5th on the math section, below physics & mathematics, as one would expect). Making an argument that doesn't instantly fail is sadly so rare that just seeing one moves you a long way towards '1 in a million'.
I never said that fame scaled smoothly with importance. If I had to put the cutoff where fame stops adding additional evidence, I think I'd put it somewhere upwards of a Wikipedia article.
Sure. You're fishing from a limited pool to begin with: there aren't many professional philosophers these days, and their numbers are probably shrinking as humanities programs get pressured. To put some numbers in perspective: the annual East coast meeting of the American Philosophical Association (APA) is the biggest single gathering of philosophers (tenured professors, associates, adjuncts, grad students, etc.) in the world as far as I know. It numbers ~2000 attendees. Making things even more difficult, if I were one of them, I doubt I would spend much time on MIRI/FHI-related issues even if I were a true believer: it'd be way too risky for my already extremely precarious career. (Recruiting-wise, it might be best just to try to find computer science people and have them try their hand at philosophy; there's a lot of them, they're almost as smart in general, they have direct familiarity with a lot of the issues, they'll have the right intuitions about things like 'computers really are just machines that do what the programs say', and funding is a lot easier for them.)
I've actually never heard of Keith Raniere despite growing up in NY and visiting RPI; Wikipedia doesn't do a good job of describing what's so bad about it... ("Expensive brainwashing"? Brainwashing doesn't work, that's why cults have annual attrition rates in the double-digits.)
Anyway: yes, I would agree that the previous points also increase the chance EY would fall into that category of frauds. After all, such frauds are also pretty rare, so it's hardly impossible for evidence to increase our beliefs both that EY is a good philosopher and that he is such a fraud.
(An example: houses catching on fire are rare. Houses not on fire with red spotlights around them are also rare. If I see flickering red light in the sky above a house in the woods, this is consistent both with the house being on fire and with someone having set up spotlights for a party; and my beliefs in the possibility of a fire and the possibility of spotlights will both increase quite a bit even though they're mutually exclusive scenarios.)
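The fire/spotlight point can be made numerically. With made-up priors and assumed likelihoods (none of these figures are from the original comment), the same evidence raises both mutually exclusive hypotheses well above their priors:

```python
# Made-up priors for three mutually exclusive, exhaustive hypotheses.
p_fire, p_party, p_other = 0.001, 0.001, 0.998

# Assumed likelihoods of seeing flickering red light under each hypothesis.
l_fire, l_party, l_other = 0.9, 0.9, 0.0001

# Total probability of the evidence, then Bayes' rule for each hypothesis.
p_light = l_fire * p_fire + l_party * p_party + l_other * p_other
post_fire = l_fire * p_fire / p_light     # ≈ 0.47, up from 0.001
post_party = l_party * p_party / p_light  # ≈ 0.47, up from 0.001
```

Both posteriors jump from 0.1% to roughly 47%, even though at most one of the two can be true.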
What evidence gave you this impression?
Whatever caused your slide into jadedness?
By arse standards, most philosophy grads can't find LW.
(Sorry, what was this permutation meant to accomplish?)
Says the person whose whole argument of opposition to compatibilism was basically the cry "but where is the choice?!?"
How much of CSA have you read? Search for the sweet-spot just before Luke discovered LW and you should find high level philosophy going on.