Eliezer_Yudkowsky comments on The Level Above Mine - Less Wrong

Post author: Eliezer_Yudkowsky 26 September 2008 09:18AM

Comment author: Eliezer_Yudkowsky 26 September 2008 05:54:06PM 9 points [-]

Manuel, "enroll in a grad program for AI" != "you're smart, you should go to college".

Kragen, the short answer is, "It's easy to talk about the importance of effort if you happen to be Hamming." If you can make the ante for the high-stakes table, then you can talk about how little the ante counts for, and the importance of playing your cards well. But if you can't make the ante...

Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point. Not Andrew J. Wiles math, but Jaynes doesn't use Wiles-math either. I quite share your prejudice against math for the sake of looking impressive, because that gets you the wrong math. (Formality isn't about Precision?)

Ken, it's exclusively my work that gives me the motivation to keep working on something for years, but things like pride can give me the motivation to keep working on something for the next minute. I'll take whatever sources of motivation I can get (er, that aren't outright evil, of course).

Douglas, yes, my father changed at 40. But one of my primary sources of hope is that people have been known to do basic research later than this if they changed fields late in life, which suggests that it actually can be a matter of approach/outlook/methodology and avoiding serving on prestigious committees.

Retired, I don't understand the apparent contradiction you see. I participated in the Midwest Talent Search at a young age (not "Northwestern" anything; maybe you're confusing it with Northwestern University?) and scored second-best for my grade category, but at that point I'd skipped a grade. But I think I can recall hearing about someone who got higher SAT scores than mine, at age nine. That would be decisive, if the SAT were a perfect noiseless measurement of ability to work on AI.

Vassar: You see, for many many people it is possible to choose a weighting scheme among a dozen or so factors that contribute to intellectual work such that they are the best.

Yes, this is the well-known phenomenon where asking someone "How dumb are you?" produces a different answer than "How smart are you?" because they recall a different kind of evidence. But the question I'm trying to answer is "How much potential do you have to solve the remaining FAI problems you know about?" As I said to Robin, I do think this is going to involve taking a step up in math level.

To all commenters who observed that I don't seem to stand out from 10 other smart people they know, either you didn't comprehend the entirety of today's post, or you have very high confidence that you occupy the highest possible rank of human ability.

Comment author: Wei_Dai 06 March 2011 09:33:31AM *  17 points [-]

Robin, it's not blind faith in math or math for the sake of impressiveness, but a specific sense that the specific next problems I have to solve, will require more math than I've used up to this point.

I'm curious if this is still your sense, and if so, what kind of math are you talking about?

My sense is that currently the main problems in FAI are philosophical. Skill in math is obviously very useful, but secondary to skill in philosophy, because most of the time it's still "I have no idea how to approach this problem" instead of "Oh, if I can just solve this math problem, everything will be clear".

...or I'm strictly dumber than Conway, dominated by him along all dimensions. Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.

Marcello observed "In terms of philosophical intuition, you are head and shoulders above Conway." Making progress in FAI theory seems to require a combination of rationality, good philosophical intuition, math talent, motivation, and prerequisite background knowledge. (Am I leaving out anything?) Out of these, perhaps good philosophical intuition is rarest, in large part because we don't know how to teach it (or screen for it at a young age). Is this a problem you've considered?

Comment author: ESRogs 19 January 2013 01:38:25AM 3 points [-]

I'd be curious to hear the answers to Wei's questions (in the sibling comment)...

Comment author: Wei_Dai 20 January 2013 03:46:08AM 5 points [-]

From subsequent discussions, my understanding is that Eliezer doesn't think it's likely that he can recruit/train another FAI researcher with a level of philosophical competence similar to himself, and that he's planning/hoping to solve the main philosophical (as opposed to mathematical) problems himself. I've argued against this plan but I'm guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.

Comment author: Eliezer_Yudkowsky 20 January 2013 05:04:14AM 4 points [-]

I certainly intend to try that recruiting thing (Paul Christiano ain't half bad) but recruiting philosophy seems much less straightforward than recruiting mathematical talent. If I have to resolve it all myself, I wouldn't flinch from trying. It seems like that part should be less difficult in an absolute sense than the rest of the labor, though that might just be comparative advantage talking. The resolutions to philosophical confusions usually seem relatively straightforward once you have them, in my experience so far.

Comment author: Wei_Dai 20 January 2013 06:25:36AM 4 points [-]

If I have to resolve it all myself, I wouldn't flinch from trying.

As I asked in the linked comment, if you're the only philosopher in the team, how will others catch your mistakes? I would not trust that when you stop feeling confused, the problem has been correctly solved, or that your feelings of confusion are a reliable indicator of problems existing in the first place.

I certainly intend to try that recruiting thing (Paul Christiano ain't half bad) but recruiting philosophy seems much less straightforward than recruiting mathematical talent.

Having Paul involved certainly makes me feel better, and if you do succeed in recruiting more philosophy talent, then the issue may be moot. But I'm still concerned about your readiness to go it alone, and what that implies about your views, not only of how hard the problems are, but also how much one needs to worry about making philosophical mistakes.

Comment author: Eliezer_Yudkowsky 20 January 2013 07:19:29PM 3 points [-]

Do you have some reliable way of recruiting? What's the policy alternative? You do what you gotta do; if it ends up being just you, nonetheless, you do what you gotta do. Zero people won't make fewer mistakes than one person.

Comment author: Wei_Dai 22 January 2013 10:39:45PM 4 points [-]

Quoting Carl Shulman from about a year ago:

If we condition on having all other variables optimized, I'd expect a team to adopt very high standards of proof, and recognize limits to its own capabilities, biases, etc. One of the primary purposes of organizing a small FAI team is to create a team that can actually stop and abandon a line of research/design (Eliezer calls this "halt, melt, and catch fire") that cannot be shown to be safe (given limited human ability, incentives and bias). If that works (and it's a separate target in team construction rather than a guarantee, but you specified optimized non-talent variables) then I would expect a big shift of probability from "UFAI" to "null."

I'm not sure if he had both math and philosophy in mind when he wrote that or just math, but in any case surely the same principle applies to the philosophy. If you don't reach a high confidence that the philosophy behind some FAI design is correct, then you shouldn't move forward with that design, and if there is only one philosopher on the team, you just can't reach high confidence in the philosophy.

Comment author: Eliezer_Yudkowsky 23 January 2013 12:20:47AM 4 points [-]

if there is only one philosopher on the team, you just can't reach high confidence in the philosophy.

This does not sound correct to me. Resolutions of simple confusions usually look pretty obvious in retrospect. Or do you mean something broader by "philosophy" than trying to figure out free will?

Comment author: Wei_Dai 23 January 2013 04:30:50AM 11 points [-]

Did you read the rest of that thread where I talked about how in cryptography we often used formalizations of "security" that were discovered to be wrong years later, and that's despite having hundreds of people in the research community constantly trying to attack each other's ideas? I don't see how formalizing Friendliness could be not just easier and less error prone than formalizing security, but so much so that just one person is enough to solve all the problems with high confidence of correctness.

Or do you mean something broader by "philosophy" than trying to figure out free will?

I mean questions like your R1 and R2, your "nonperson predicate", how to distinguish between moral progress and moral error / value drift, anthropic reasoning / "reality fluid". Generally, all the problems that need to be solved for building an FAI besides the math and the programming.

Comment author: Eliezer_Yudkowsky 23 January 2013 05:39:28AM 17 points [-]

Yes, formalizing Friendliness is not the sort of thing you'd want one person doing. I agree. I don't consider that "philosophy", and it's the sort of thing other FAI team members would have to be able to check. We probably want at least one high-grade actual cryptographer.

Of the others, the nonperson predicate and the moral-progress parts are the main ones where it'd be unusually hard to solve and then tell that it had been solved correctly. I would expect both of those to be factorable-out, though - that all or most of the solution could just be published outright. (Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off... how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter... do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.) Anthropic reasoning / "reality fluid" is the sort of thing I'd expect to be really obvious in retrospect once solved. R1 and R2 should be both obvious in retrospect, and publishable.

I have hopes that an upcoming post on the Lob Problem will offer a much more concrete picture of what some parts of the innards of FAI development and formalizing look like.

Comment author: Wei_Dai 23 January 2013 07:40:11AM *  1 point [-]

Resolutions of simple confusions usually look pretty obvious in retrospect.

Can you give some more examples of this, besides "free will"? (I don't understand where your intuitions comes from that certain problems will turn out to have solutions that are obvious in retrospect, and that such feelings of obviousness are trustworthy. Maybe it would help me see your perspective if I got some more past examples.)

Comment author: MugaSofer 23 January 2013 01:26:55PM 0 points [-]

A tree falls in a forest with no-one to hear it. Does it make a sound?

Comment author: shminux 23 January 2013 12:30:14AM *  3 points [-]

Do you have an example in mind where a certain philosophical question claimed to have been solved or dissolved by Eliezer turned out to be not solved after all, or the solution was wrong?

Comment author: Wei_Dai 23 January 2013 01:05:31AM 5 points [-]

Do you have an example in mind where a certain philosophical question claimed to have been solved or dissolved by Eliezer turned out to be not solved after all, or the solution was wrong?

Also, instances where Eliezer didn't seem to realize that a problem existed until someone pointed it out to him:

Comment author: Eliezer_Yudkowsky 23 January 2013 01:16:02AM 6 points [-]

Order-dependence and butterfly effects - knew about this and had it in mind when I wrote CEV, I think it should be in the text.

Counterfactual Mugging - check, I don't think I was calling TDT a complete solution before then but the Counterfactual Mugging was a class of possibilities I hadn't considered. (It does seem related to Parfit's Hitchhiker which I knew was a problem.)

Solomonoff Induction - again, I think you may be overestimating how much weight I put on that in the first place. It's not a workable AI answer for at least two obvious reasons I'm pretty sure I knew about from almost day one: (a) it's uncomputable and (b) it can't handle utility functions over the environment. However, your particular contributions about halting-oracles-shouldn't-be-unimaginable did indeed influence me toward my current notion of second-order logical natural induction over possible models of axioms in which you could be embedded. Albeit I stand by my old reply that Solomonoff Induction would encompass any computable predictions or learning you could do about halting oracles in the environment. (The problem of porting yourself onto any environmental object is something I already knew AIXI would fail at.)

Comment author: CarlShulman 23 January 2013 07:22:19AM 0 points [-]

I'm not sure if he had both math and philosophy in mind when he wrote that or just math,

Both.

Comment deleted 25 January 2013 08:18:41PM [-]
Comment author: V_V 25 January 2013 03:38:19PM *  -1 points [-]

It seems like that part should be less difficult in an absolute sense than the rest of the labor

You never did any engineering-level mathematical modeling of a real system, did you?

The main difficulty is not proving the theorems, it is finding the right axioms to describe the relevant aspects of the system and the properties of interest. And that's where errors often occur.

Now, typical engineering tasks pale in comparison to the task you are trying to undertake: creating a fully specified mathematical model of ethics.

though that might just be comparative advantage talking.

Most likely it's just the Dunning–Kruger effect.

The resolutions to philosophical confusions usually seem relatively straightforward once you have them, in my experience so far.

Just like when you "resolved" the interpretation of quantum mechanics? Well, good thing that you are never going to make anything close to an AGI and that AGI risk is probably overrated, otherwise it wouldn't end well...

Comment author: lukeprog 20 January 2013 05:09:49AM *  1 point [-]

I've argued against this plan but I'm guessing that Eliezer is probably still set on this course. Others at SIAI may have more reservations about it.

For the record: I, too, want an FAI team in which Eliezer isn't the only one with Eliezer-level philosophical ability or better. This is tougher than "merely" finding 1-in-10-million math talents, but still do-able.

What am I doing about it? I wrote a post encouraging a specific kind of philosophical education that I think will be more likely to produce Eliezer-level philosophers than a "normal" philosophical education (or even a CMU or UPitts one). When Louie came up with the idea to write a list of Course recommendations for Friendliness researchers, I encouraged it. Also, one of the reasons I ended up supporting the plan to launch CFAR in 2012 was its potential not only to make people more effective at achieving their goals, but also to learn ways to make some people better philosophers (see my last paragraph here). And there's more, but I can't talk about it yet.

Also, as Eliezer said, Paul Christiano's existence is encouraging.

Comment author: Kawoomba 20 January 2013 08:27:27AM 1 point [-]

What about Kawoomba's existence? :-(

CFAR and related are good efforts at raising the sanity waterline (which is an average), not so much for identifying the extreme outliers that could Alan-Turing their way towards an FAI. Those will make waves on their own.

Such grassroots organisations may be good ways of capturing the attention of a wider audience, although second to publishing in the field / personally building a network at conferences.

The time horizon of taking a few hundred self-selected college-aged students and trying to grow one of them into a seminal figure of extraordinary capabilities seems prohibitive, and the viability doubtful, especially when there are already exceedingly capable people at Stanford et al. who already bring the oomph and just lack the FAI-motivation.

Comment author: lukeprog 20 January 2013 08:34:41AM *  0 points [-]

Can you name some older academics that have the requisite philosophical skill? (And if your first line isn't a joke, perhaps you can link me to some of your own philosophical works?)

Comment author: Kawoomba 20 January 2013 12:10:06PM 2 points [-]

Sipser and Russell & Norvig et al. are core parts of your proposed philosophical curriculum; Louie's course recommendations read like my former grad CS reading list.

It follows that, say, many who hold or are pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills.

I'm not postulating that Bayesian superstars also make the best drummers and fencing masters, but between your analytical CS-style philosophy and Machine Learning groups there is a cross-domain synergy effect that comes with the clarity of designing minds - or advanced algorithms.

(As for myself, the first line was meant as a joke - alas! How sad!)

Comment author: lukeprog 20 January 2013 06:53:19PM -1 points [-]

It follows that, say, many who hold or are pursuing a PhD in Machine Learning and related fields have also picked up a majority of your desired (per your recommendations) philosophical skills.

No, I wouldn't say that. The problem is that we (humans) don't know how to teach the philosophical skill I'm talking about, so there aren't classes on it, so I can only recommend courses on "the basics" or "prerequisites." I don't know how to turn a math/CS PhD under Stuart Russell into the next Eliezer Yudkowsky.

Comment deleted 20 January 2013 05:28:03PM *  [-]
Comment author: TheOtherDave 20 January 2013 06:37:15PM *  6 points [-]

I suspect you and Luke do not share a referent for "better philosophy" here. In particular, I doubt either Luke or Eliezer would agree that the ability to write clearly, or to analyze and formulate arguments for purposes of compellingly engaging with existing arguments in the tradition of analytic philosophy, is the rare skill that Luke is talking about.

Trying to have a conversation about how hard it is to find an X without common referents for X is not likely to lead anywhere productive.

Comment author: lukeprog 20 January 2013 07:16:45PM *  3 points [-]

You're right, I should say more about what I mean by "Eliezer-level philosophical ability." Clearly, I don't mean "writing clarity," as many of my favorite analytic philosophers write more clearly than Eliezer does.

It'll take me some time to prepare that explanation. For now, let me show some support for your comment by linking to another example of Eliezer being corrected by a professional philosopher.

Comment author: protest_boy 20 June 2014 08:09:54AM -1 points [-]

Do you have anything quick to add about what you mean by "Eliezer-level philosophical ability"?

Comment author: MugaSofer 21 January 2013 01:34:25PM *  0 points [-]

Downvoted because:

  • In my experience of philosophy, there are very few philosophers at EY's level.
  • You provided no evidence for your claims; and when you edited your comment ...
  • ... and the link you gave consists of him clarifying his terminology, and thanking them for interpreting his unclear wording charitably.

Also because it irritates me that this site is scattered with comments at anything from -3 to +15 (not exact figures) that criticize cryonics/ASI/other things lots of us believe in, LW policies, or EY, and then talk about how they're going to get downvoted into oblivion for speaking out against the consensus.

[Edited for formatting.]

Comment author: Kindly 21 January 2013 01:58:28PM 5 points [-]

In my experience of philosophy, there are very few philosophers at EY's level.

Can you qualify that by describing your experience with philosophers? E.g. "There are very few philosophers at EY's level, and I've met Philosopher John Conway." Whoever Philosopher John Conway turns out to be.

Comment deleted 21 January 2013 02:18:33PM *  [-]
Comment deleted 02 February 2013 08:28:43AM *  [-]
Comment author: gwern 02 February 2013 05:10:52PM 0 points [-]

Surely low enough not to be overcome by you being impressed or you agreeing with his philosophy

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we draw a random number between 1 and a million; the prior for any particular number between 1 and a million is straightforwardly very low - one in a million, of course. Now, I have just generated the number 493250 using random.org. Surely this prior of 1 in a million that I have generated any specific number like 493250 is low enough to not be overcome by you being impressed by looking at this comment and see '493250' in it? The prior for you having very special powers of perception of the right number is likewise proportionally low to how very special it is, and so on."

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

The ensuing debates and demands for evidence that something with a very low prior isn't true are particularly illuminating with regard to just how incapable certain self-proclaimed Bayesians are of the most basic forms of probabilistic reasoning.

Yes. I agree. Some of these self-proclaimed Bayesians cannot even fully specify their examples or prove their arguments or explain the crucial part of what they were probably arguing.
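The random.org point above can be made concrete with a toy Bayes calculation. This is a minimal editorial sketch; the specific probabilities (0.99 for reading the number correctly, and so on) are illustrative assumptions, not figures from the thread:

```python
# Toy illustration: strong evidence can overwhelm a 1-in-a-million prior.
# All numbers below are made-up assumptions for the sake of the example.
N = 1_000_000
prior = 1 / N                      # P(the drawn number is 493250)
p_obs_if_true = 0.99               # P(I read "493250" | it really was 493250)
p_obs_if_false = 0.01 / (N - 1)    # P(I read "493250" | it was some other number)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
posterior = (prior * p_obs_if_true) / (
    prior * p_obs_if_true + (1 - prior) * p_obs_if_false
)
print(round(posterior, 2))  # 0.99: the observation dominates the tiny prior
```

Reading the comment is so selective for the drawn number that the likelihood ratio swamps the one-in-a-million prior, which is exactly the structure of the argument about judging a philosopher by their arguments rather than by base rates alone.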

Comment author: whowhowho 02 February 2013 05:52:52PM *  2 points [-]

"Here is a very simple example of Bayesian reasoning, that most people are in fact capable of. Suppose we are looking at people who write clip art web comics; the prior for any particular clip art being the best or most popular is straightforwardly very low - one in a million, say, or what ever is your number. Now, we look at http://www.qwantz.com/index.php Surely this prior of 1 in a million is low enough to not be overcome by you being impressed by looking at this Dinosaur Comics? The prior for you having very special powers of perception of clip art is likewise proportionally low to how very special it is, and so on."

So, putting the analogy into reverse, the top post is wrong. You can judge N levels above your own.

Comment author: gwern 02 February 2013 06:00:24PM 0 points [-]

I was making the point that Dmytry's claim was flawed in 2 separate ways; 'you can judge N levels above your own' is closer to the point of the random.org example than the DC example. (The DC example was more about neither DC nor EY being a random selection, not the strength of personal judgment.)

Comment deleted 02 February 2013 09:11:09PM [-]
Comment author: gwern 02 February 2013 09:32:09PM -1 points [-]

I would have thought the latter, but I now think you're honest, given the earlier conversation involving 'crazy' anthropic reasoning of mine which turned out to be favoured by pretty much everyone at SI as well, contrary to your claims.

When did I claim no one at SI held your views? That would've been hard since you refused to use standard terminology like SIA or SSA which I could then go 'ah yes, that's Bostrom's current view'.

If you had some omniscient Omega that had a web interface where you could enter "Pick an 1 in a million - quality philosopher" and it would reply "Eliezer Yudkowsky" and that's how you came around Yudkowsky, then it would have been analogous to that random.org example.

...Prior for the comic is low. You update it away if the choice of comic is very well correlating with what you consider the "best". If you were just shown various clip art at random you'd have a lot of trouble guessing the most popular one, because your eye for popularity certainly won't provide enough evidence.

No, that's not the analogy. The analogy is that there are at least 2 ways in which we are long past a prior of 1 in a million and don't have judgments which are equivalent to random choice, and those were illustrating them: the first is one's own ability to recognize some level of quality in a philosopher, and the second is about looking at a non-random selection at the end of a process with some selection for quality.

Are you making a point that popularity of a philosopher among non philosophers is very correlated to their philosophical ability?

Even a small correlation is enough to move the needle.

What about lack of recognition by other philosophers; how is that correlated with philosophical ability? What about Jesus, a dead philosopher who's quite damn popular?

So you cite, in a statistical claim throwing around numbers like 1 in a million, a single example? And I wonder how many people really consider Jesus a philosopher, as opposed to an excuse like GWB to signal their religion and cover up that they don't actually have any preferences as to secular philosophers...

But yeah, popularity is a meaningful index! Go down the list of great philosophers and you'll find they are popular and even appear in pop culture; Zeno, Plato, Socrates, Aristotle, Confucius, Descartes, Nietzsche, Russell, Wittgenstein to name a few off the top of my head are all widely read by laymen and appear in popular culture, and were often world-famous in their own lifetime. Of course it's not a perfect correlation - not all great philosophers will find popularity after their death among non-philosophers (Plotinus or Spinoza or Hume may have been the greatest philosophers of their time but only philosophers read them these days) - but think of how many minor or poor philosophers from those respective time periods remain obscure... Very few of them succeed like Ayn Rand in being a poor philosopher and also popular.

Comment deleted 02 February 2013 10:09:43PM [-]
Comment author: gwern 03 February 2013 01:18:54AM 0 points [-]

If you don't know it other than by name, that's not my problem. It was straightforward mathematics.

When did I claim no one at SI held your views on anthropics? And I really don't think anthropics could be called straightforward by anyone.

There are pathological, intuitively confusing cases such as the number example; reading a number is incredibly selective for it being that number, so the update, in fact, does pull the probability up.

Congratulations, you understood the point. Similarly, decent arguments are highly diagnostic of philosophical ability because most people couldn't make an even half-assed argument if they sat down and spent all day at it; by LW standards, most philosophy grads can't find their asses either, and that's a very selective filter as well (philosophy majors are the highest-scoring group on the GRE for both verbal and writing sections, and are around 4 or 5 for the math section below physics & mathematics as one would expect). Making an argument that doesn't instantly fail is sadly so rare that just seeing one moves you a long way towards '1 in a million'.

Yudkowsky is not exactly Ayn Rand level popular, is he? If that's what you're after, pick anyone more famous than Yudkowsky and you're done. Easy.

I never said that fame scaled smoothly with importance. If I had to put the cutoff where fame stops adding additional evidence, I think I'd put it somewhere upwards of a Wikipedia article.

Look back up, Muehlhauser has stated that it is a tough task choosing someone of Yudkowsky's level of philosophical ability.

Sure. You're fishing from a limited pool to begin with: there aren't many professional philosophers these days, their numbers are probably shrinking as humanities programs get pressured. To put some numbers in perspective: the annual East coast meeting of the American Philosophical Association (APA) is the biggest single gathering of philosophers (tenured professors, associates, adjuncts, grad students, etc) in the world as far as I know. It numbers ~2000 attendants. Making things even more difficult, if I were one of them, I doubt I would spend much time on MIRI/FHI-related issues even if I were a true believer: it'd be way too risky for my already extremely precarious career. (Recruiting-wise, it might be best just to try to find computer science people and have them try their hand at philosophy; there's a lot of them, they're almost as smart in general, they have direct familiarity with a lot of the issues, they'll have the right intuitions about things like 'computers really are just machines that do what the programs say', and funding is a lot easier for them.)

By the way there's a pattern: various Ayn Rands and Keith Ranieres and Ron Hubbards and other self-improvement gurus slash philosophers slash world-saviours are popular philosophers among non-philosophers but not recognized by other philosophers.

I've actually never heard of Keith Raniere despite growing up in NY and visiting RPI; Wikipedia doesn't do a good job of describing what's so bad about it... ("Expensive brainwashing"? Brainwashing doesn't work, that's why cults have annual attrition rates in the double-digits.)

Anyway; yes, I would agree that the previous points also increase the chance EY would fall into that category of frauds. After all, such frauds are also pretty rare, so it's hardly impossible for evidence to increase our beliefs both that EY is a good philosopher and also such a fraud.

(An example: houses catching on fire are rare. Houses not on fire with red spotlights around them are also rare. If I see in the sky above a house in the woods a flickering red light, this is consistent with both the house being on fire and them having set up spotlights for a party; and my beliefs in the possibility of a fire and the possibility of spotlights will both increase quite a bit even though they're mutually exclusive scenarios.)
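The fire/spotlight example can be put in numbers. A minimal sketch, with made-up priors and likelihoods (my illustrative assumptions, not figures from the comment):

```python
# Two rare, mutually exclusive hypotheses can both gain probability
# from the same shared evidence. All numbers are illustrative.
p_fire, p_spot, p_other = 0.001, 0.001, 0.998      # priors
lik = {"fire": 0.9, "spot": 0.9, "other": 0.0001}  # P(red flicker | hypothesis)

# Total probability of the evidence, then Bayes' rule per hypothesis.
evidence = p_fire * lik["fire"] + p_spot * lik["spot"] + p_other * lik["other"]
post_fire = p_fire * lik["fire"] / evidence
post_spot = p_spot * lik["spot"] / evidence
print(round(post_fire, 2), round(post_spot, 2))  # both jump from 0.001 to 0.47
```

Each hypothesis rises from 0.1% to roughly 47%, even though at most one of them can be true: the evidence is diagnostic against "neither," not between the two rare explanations.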

Comment deleted 03 February 2013 10:01:03PM *  [-]
Comment author: itaibn0 22 March 2013 02:46:58PM 0 points [-]

To all commenters who observed that I don't seem to stand out from 10 other smart people they know, either you didn't comprehend the entirety of today's post, or you have very high confidence that you occupy the highest possible rank of human ability.

The only thing that's certain is that somebody has trouble properly apprehending your intelligence.