lukeprog comments on The Level Above Mine - Less Wrong

Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM

Comment author: lukeprog 20 January 2013 05:09:49AM *  1 point [-]

I've argued against this plan but I'm guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.

For the record: I, too, want an FAI team in which Eliezer isn't the only one with Eliezer-level philosophical ability or better. This is tougher than "merely" finding 1-in-10-million math talents, but still do-able.

What am I doing about it? I wrote a post encouraging a specific kind of philosophical education that I think will be more likely to produce Eliezer-level philosophers than a "normal" philosophical education (or even a CMU or UPitts one). When Louie came up with the idea to write a list of Course recommendations for Friendliness researchers, I encouraged it. Also, one of the reasons I ended up supporting the plan to launch CFAR in 2012 was its potential not only to make people more effective at achieving their goals, but also to discover ways of making some people better philosophers (see my last paragraph here). And there's more, but I can't talk about it yet.

Also, as Eliezer said, Paul Christiano's existence is encouraging.

Comment author: Kawoomba 20 January 2013 08:27:27AM 1 point [-]

What about Kawoomba's existence? :-(

CFAR and related efforts are good at raising the sanity waterline (which is an average), not so much at identifying the extreme outliers that could Alan-Turing their way towards an FAI. Those will make waves on their own.

Such grassroots organisations may be good ways of capturing the attention of a wider audience, although second to publishing in the field / personally building a network at conferences.

The time horizon and viability of taking a few hundred self-selected college-aged students and trying to grow one of them into a seminal figure of extraordinary capabilities seem prohibitive, especially when there are already exceedingly capable people at Stanford et al. who already bring the oomph and just lack the FAI-motivation.

Comment author: lukeprog 20 January 2013 08:34:41AM *  0 points [-]

Can you name some older academics that have the requisite philosophical skill? (And if your first line isn't a joke, perhaps you can link me to some of your own philosophical works?)

Comment author: Kawoomba 20 January 2013 12:10:06PM 2 points [-]

Sipser and Russell & Norvig are core parts of your proposed philosophical curriculum, and Louie's course recommendations read like my former grad CS reading list.

It follows that, say, many who hold or are pursuing a PhD in machine learning or a related field have also picked up a majority of the philosophical skills you desire (per your recommendations).

I'm not postulating that Bayesian superstars also make the best drummers and fencing masters, but between your analytical CS-style philosophy and Machine Learning groups there is a cross-domain synergy effect that comes with the clarity of designing minds - or advanced algorithms.

(As for myself, the first line was meant as a joke - alas! How sad!)

Comment author: lukeprog 20 January 2013 06:53:19PM -1 points [-]

It follows that, say, many who hold or are pursuing a PhD in machine learning or a related field have also picked up a majority of the philosophical skills you desire (per your recommendations).

No, I wouldn't say that. The problem is that we (humans) don't know how to teach the philosophical skill I'm talking about, so there aren't classes on it, so I can only recommend courses on "the basics" or "prerequisites." I don't know how to turn a math/CS PhD under Stuart Russell into the next Eliezer Yudkowsky.

Comment deleted 20 January 2013 05:28:03PM *  [-]
Comment author: TheOtherDave 20 January 2013 06:37:15PM *  6 points [-]

I suspect you and Luke do not share a referent for "better philosophy" here. In particular, I doubt either Luke or Eliezer would agree that the ability to write clearly, or to analyze and formulate arguments for purposes of compellingly engaging with existing arguments in the tradition of analytic philosophy, is the rare skill that Luke is talking about.

Trying to have a conversation about how hard it is to find an X without common referents for X is not likely to lead anywhere productive.

Comment author: lukeprog 20 January 2013 07:16:45PM *  3 points [-]

You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.

It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.

Comment author: protest_boy 20 June 2014 08:09:54AM -1 points [-]

Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?

Comment author: MugaSofer 21 January 2013 01:34:25PM *  0 points [-]

Downvoted because:

  • In my experience of philosophy, there are very few philosophers at EY's level.
  • You provided no evidence for your claims; and when you edited your comment ...
  • ... and the link you gave consists of him clarifying his terminology, and thanking them for interpreting his unclear wording charitably.

Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.

[Edited for formatting.]

Comment author: Kindly 21 January 2013 01:58:28PM 5 points [-]

In my experience of philosophy, there are very few philosophers at EY's level.

Can you qualify that by describing your experience with philosophers? E.g. "There are very few philosophers at EY's level, and I've met Philosopher John Conway." Whoever Philosopher John Conway turns out to be.

Comment deleted 21 January 2013 02:18:33PM *  [-]
Comment deleted 02 February 2013 08:28:43AM *  [-]
Comment author: gwern 02 February 2013 05:10:52PM 0 points [-]

Surely low enough not to be overcome by you being impressed or you agreeing with his philosophy

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we draw a random number between 1 and a million; the prior for any particular number between 1 and a million is straightforwardly very low - one in a million, of course. Now, I have just generated the number 493250 using random.org. Surely this prior of 1 in a million that I have generated any specific number like 493250 is low enough to not be overcome by you being impressed by looking at this comment and see '493250' in it? The prior for you having very special powers of perception of the right number is likewise proportionally low to how very special it is, and so on."

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

The ensuing debates and demands for evidence that something with a very low prior isn't true are particularly illuminating with regard to just how incapable certain self-proclaimed Bayesians are of the most basic forms of probabilistic reasoning.

Yes. I agree. Some of these self-proclaimed Bayesians cannot even fully specify their examples or prove their arguments or explain the crucial part of what they were probably arguing.
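
The random-number update in the example above can be sketched numerically. This is a minimal illustration of Bayes' theorem with made-up likelihoods (the 1-in-a-billion "fluke" figure is an assumption for demonstration, not anything stated in the thread):

```python
# Bayes update for the random-number example: a 1-in-a-million prior
# is easily overcome by evidence that is itself extremely selective.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1 - prior)
    return joint_h / (joint_h + joint_not_h)

# H: "493250 was the generated number" (prior: 1 in a million).
# E: the comment reports "493250".
# If H is true, the comment shows that number with near-certainty.
# If H is false, the comment showing exactly "493250" anyway (a typo,
# a copy-paste error) is modeled as a 1-in-a-billion fluke.
p = posterior(1e-6, p_e_given_h=1.0, p_e_given_not_h=1e-9)
print(round(p, 4))  # 0.999: the tiny prior is swamped by the evidence
```

Seeing the number in the comment is evidence at least as selective as the prior is small, which is why the posterior ends up high despite the one-in-a-million starting point.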

Comment author: whowhowho 02 February 2013 05:52:52PM *  2 points [-]

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

So, putting the analogy into reverse, the top post is wrong. You can judge N levels above your own.

Comment author: gwern 02 February 2013 06:00:24PM 0 points [-]

I was making the point that Dmytry's claim was flawed in 2 separate ways; 'you can judge N levels above your own' is closer to the point of the random.org example than the DC example. (The DC example was more about neither DC nor EY being a random selection, not the strength of personal judgment.)

Comment deleted 02 February 2013 09:11:09PM [-]
Comment author: gwern 02 February 2013 09:32:09PM -1 points [-]

I would have thought the latter, but I now think you're honest, given the earlier conversation involving 'crazy' anthropic reasoning of mine which turned out to be favoured by pretty much everyone at SI as well, contrary to your claims.

When did I claim no one at SI held your views? That would've been hard, since you refused to use standard terminology like SIA or SSA, which would have let me go 'ah yes, that's Bostrom's current view'.

If you had some omniscient Omega that had a web interface where you could enter "Pick a 1-in-a-million-quality philosopher" and it would reply "Eliezer Yudkowsky", and that's how you came across Yudkowsky, then it would have been analogous to that random.org example.

...Prior for the comic is low. You update it away if the choice of comic correlates very well with what you consider the "best". If you were just shown various clip art at random, you'd have a lot of trouble guessing the most popular one, because your eye for popularity certainly won't provide enough evidence.

No, that's not the analogy. The analogy is that there are at least 2 ways in which we are long past a prior of 1 in a million and don't have judgments equivalent to random choice, and the examples were illustrating them: the first is one's own ability to recognize some level of quality in a philosopher, and the second is about looking at a non-random selection at the end of a process with some selection for quality.

Are you making the point that the popularity of a philosopher among non-philosophers is strongly correlated with their philosophical ability?

Even a small correlation is enough to move the needle.

What about the lack of recognition by other philosophers; how is that correlated with philosophical ability? What about Jesus, a dead philosopher who's quite damn popular?

So you cite, in a statistical claim throwing around numbers like 1 in a million, a single example? And I wonder how many people really consider Jesus a philosopher, as opposed to an excuse like GWB to signal their religion and cover up that they don't actually have any preferences as to secular philosophers...

But yeah, popularity is a meaningful index! Go down the list of great philosophers and you'll find they are popular and even appear in pop culture; Zeno, Plato, Socrates, Aristotle, Confucius, Descartes, Nietzsche, Russell, Wittgenstein to name a few off the top of my head are all widely read by laymen and appear in popular culture, and were often world-famous in their own lifetime. Of course it's not a perfect correlation - not all great philosophers will find popularity after their death among non-philosophers (Plotinus or Spinoza or Hume may have been the greatest philosophers of their time but only philosophers read them these days) - but think of how many minor or poor philosophers from those respective time periods remain obscure... Very few of them succeed like Ayn Rand in being a poor philosopher and also popular.

Comment deleted 02 February 2013 10:09:43PM [-]
Comment author: gwern 03 February 2013 01:18:54AM 0 points [-]

If you don't know it other than by name, that's not my problem. It was straightforward mathematics.

When did I claim no one at SI held your views on anthropics? And I really don't think anthropics could be called straightforward by anyone.

There are pathological, intuitively confusing cases such as the number example; reading a number is incredibly selective for it being that number, so the update, in fact, does pull the probability up.

Congratulations, you understood the point. Similarly, decent arguments are highly diagnostic of philosophical ability because most people couldn't make an even half-assed argument if they sat down and spent all day at it; by LW standards, most philosophy grads can't find their asses either, and that's a very selective filter as well (philosophy majors are the highest-scoring group on the GRE for both the verbal and writing sections, and rank around fourth or fifth on the math section, below physics & mathematics, as one would expect). Making an argument that doesn't instantly fail is sadly so rare that just seeing one moves you a long way towards '1 in a million'.

Yudkowsky is not exactly Ayn Rand level popular, is he? If that's what you're after, pick anyone more famous than Yudkowsky and you're done. Easy.

I never said that fame scaled smoothly with importance. If I had to put the cutoff where fame stops adding additional evidence, I think I'd put it somewhere upwards of a Wikipedia article.

Look back up: Muehlhauser has stated that it is a tough task choosing someone of Yudkowsky's level of philosophical ability.

Sure. You're fishing from a limited pool to begin with: there aren't many professional philosophers these days, their numbers are probably shrinking as humanities programs get pressured. To put some numbers in perspective: the annual East coast meeting of the American Philosophical Association (APA) is the biggest single gathering of philosophers (tenured professors, associates, adjuncts, grad students, etc) in the world as far as I know. It numbers ~2000 attendants. Making things even more difficult, if I were one of them, I doubt I would spend much time on MIRI/FHI-related issues even if I were a true believer: it'd be way too risky for my already extremely precarious career. (Recruiting-wise, it might be best just to try to find computer science people and have them try their hand at philosophy; there's a lot of them, they're almost as smart in general, they have direct familiarity with a lot of the issues, they'll have the right intuitions about things like 'computers really are just machines that do what the programs say', and funding is a lot easier for them.)

By the way, there's a pattern: various Ayn Rands and Keith Ranieres and Ron Hubbards and other self-improvement gurus slash philosophers slash world-saviours are popular philosophers among non-philosophers but not recognized by other philosophers.

I've actually never heard of Keith Raniere despite growing up in NY and visiting RPI; Wikipedia doesn't do a good job of describing what's so bad about him... ("Expensive brainwashing"? Brainwashing doesn't work; that's why cults have annual attrition rates in the double digits.)

Anyway; yes, I would agree that the previous points also increase the chance EY would fall into that category of frauds. After all, such frauds are also pretty rare, so it's hardly impossible for evidence to increase our beliefs both that EY is a good philosopher and that he is such a fraud.

(An example: houses catching on fire are rare. Houses not on fire but with red spotlights around them are also rare. If I see flickering red light in the sky above a house in the woods, this is consistent with both the house being on fire and the owners having set up spotlights for a party; and my beliefs in the possibility of a fire and the possibility of spotlights will both increase quite a bit even though they're mutually exclusive scenarios.)
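
The fire-and-spotlights example above can be made concrete with a small sketch; the priors and likelihoods here are invented purely for illustration:

```python
# One observation can raise the posteriors of two mutually exclusive
# hypotheses at once, because it mostly rules out the far more likely
# "neither" alternative. All numbers here are made up.

priors = {"fire": 0.001, "spotlights": 0.001, "neither": 0.998}
# P(flickering red light above the house | hypothesis):
likelihood = {"fire": 0.9, "spotlights": 0.9, "neither": 0.0001}

joint = {h: priors[h] * likelihood[h] for h in priors}
total = sum(joint.values())
posteriors = {h: joint[h] / total for h in joint}

for h, p in posteriors.items():
    print(h, round(p, 3))
# "fire" and "spotlights" each jump from 0.001 to roughly 0.47,
# even though at most one of them can be true.
```

Both hypotheses gain because the evidence is far more likely under either of them than under the "neither" default; the probability mass they gain comes out of that default, not out of each other.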

Comment deleted 03 February 2013 10:03:19PM [-]
Comment author: BerryPick6 03 February 2013 10:07:22PM 1 point [-]

What evidence gave you this impression?

Comment deleted 04 February 2013 11:35:30AM *  [-]
Comment author: Kawoomba 03 February 2013 10:09:07PM 2 points [-]

Whatever caused your slide into jadedness?

Comment author: fubarobfusco 03 February 2013 11:18:53PM 4 points [-]

by LW standards, most philosophy grads can't find their asses

By philosophy standards, most LWers can't find their arses.

By arse standards, most philosophy grads can't find LW.

(Sorry, what was this permutation meant to accomplish?)

Comment author: ArisKatsaris 04 February 2013 07:49:51PM -1 points [-]

Says the person whose whole argument against compatibilism was basically the cry "but where is the choice?!?"

Comment deleted 03 February 2013 10:01:03PM *  [-]
Comment author: BerryPick6 03 February 2013 10:08:40PM -1 points [-]

Yes. But he is a barely mediocre philosopher who is in no position to recognise real talent, whether EY's, if it exists, or anyone else's. He confuses ability with style or adherence to doctrines that he approves of.

How much of CSA have you read? Search for the sweet spot just before Luke discovered LW and you should find high-level philosophy going on.