All of DanielVarga's Comments + Replies

I’ll say that a model linearly represents a binary feature f if there is a linear probe out of the model’s latent space which is accurate for classifying f.


If a model linearly represents features a and b, then it automatically linearly represents a AND b, and a OR b.

I think I misunderstand your definition. Let feature a be represented by x_1 > 0.5, and let feature b be represented by x_2 > 0.5. Let x_i be iid uniform [0, 1]. Isn't that a counterexample to (a and b) being linearly representable?

2Sam Marks
Thanks, you're correct that my definition breaks in this case. I will say that this situation is a bit pathological for two reasons:

  1. The mode of a uniform distribution doesn't coincide with its mean.
  2. The variance of the multivariate uniform distribution U([0,1]×[0,1]) is largest along the direction x1+x2, which is exactly the direction which we would want to represent a AND b.

I'm not sure exactly which assumptions should be imposed to avoid pathologies like this, but maybe something of the form: we are working with boolean features f whose class-conditional distributions D(⋅|f), D(⋅|¬f) satisfy properties like:

  • D(⋅|f), D(⋅|¬f) are unimodal, and their modes coincide with their means.
  • The variance of D(⋅|f), D(⋅|¬f) along any direction is not too large relative to the difference of the means E_{x∼D(⋅|f)}(x) − E_{x∼D(⋅|¬f)}(x).
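The counterexample above can also be checked numerically. A quick sketch (my illustration, not from the original thread; a coarse grid search stands in for training an optimal probe) shows that no probe of the form w1*x1 + w2*x2 > t classifies the AND feature perfectly:

```python
import itertools
import random

random.seed(0)

# x1, x2 iid uniform on [0, 1]; the target feature is (x1 > 0.5) AND (x2 > 0.5).
points = [(random.random(), random.random()) for _ in range(1000)]
labels = [x1 > 0.5 and x2 > 0.5 for x1, x2 in points]

def accuracy(w1, w2, t):
    """Accuracy of the linear probe w1*x1 + w2*x2 > t."""
    hits = sum((w1 * x1 + w2 * x2 > t) == y
               for (x1, x2), y in zip(points, labels))
    return hits / len(points)

# Coarse grid search over probe parameters.
grid = [i / 5 for i in range(-5, 6)]
best = max(accuracy(w1, w2, t)
           for w1, w2, t in itertools.product(grid, grid, grid))
# best stays clearly below 1.0: no linear probe recovers a AND b exactly.
```

The best achievable linear probe here is roughly the diagonal rule x1 + x2 > 4/3, which still misclassifies a noticeable fraction of points, so the AND of the two linearly represented features is not itself linearly representable under this distribution.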

Here is something I'd like to see: You give the machine the formally specified ruleset of a game (go, chess, etc), wait while the reinforcement learning does its job, and out comes a world-class computer player.

Here is one reason, but it's up for debate:

Deep learning courses rush through logistic regression and usually just mention SVMs. Arguably it's important for understanding deep learning to take the time to really, deeply understand how these linear models work, both theoretically and practically, both on synthetic data and on high dimensional real life data.
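As a sketch of the kind of hands-on exercise meant here, logistic regression can be fit from scratch on synthetic data in a few lines (illustrative only; the data and hyperparameters are made up):

```python
import math
import random

random.seed(0)

# Synthetic 1-D data: the label is a noisy threshold on x.
xs = [random.uniform(-3, 3) for _ in range(500)]
ys = [1 if x + random.gauss(0, 0.5) > 0 else 0 for x in xs]

# Logistic regression trained by plain gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(300):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
        gw += (p - y) * x
        gb += p - y
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

accuracy = sum((w * x + b > 0) == (y == 1) for x, y in zip(xs, ys)) / len(xs)
```

Working through a toy like this (and then on messy high-dimensional data) is exactly the "deep understanding of linear models" the comment argues deep learning courses skip.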

More generally, there are a lot of machine learning concepts that deep learning courses don't have enough time to introduce properly, so they just mention them, and you might get a mistaken impression ab... (read more)

-2rpmcruz
It depends on the competitions. All kaggle image-related competitions I have seen have been obliterated by deep neural networks. I am a researcher, albeit a freshman one, and I completely disagree. Knowing about linear and logistic regressions is interesting because neural networks evolved from there, but it's something you can watch a couple of videos on, maybe another one about maximum likelihood and you are done. Not sure why SVMs are that important.

Over the last two days I single-handedly wrote a prototype that takes a whiteboard photo and automatically turns it into a mindmap-like zoomable chart. Pieces of the chart can then be rearranged and altered individually:

https://prezi.com/igaywhvnam2y/whiteboard-prezi-2015-12-04-152935/

This was part of a company hackathon, and I had some infrastructure to help me with the visualization, but for the shape recognition/extraction it was just me and the nasty Python bindings for OpenCV.

Oh my god, look at 0-4-year-old assaults, both ED visits and deaths. (Assault is the leading TBI-related cause of death for 0-4-year-olds.) Some of those falling 4-year-olds were assaulted.

2Lumifer
Yes. You don't want to be a child of someone with low IQ and anger management problems.

There are worse fates than not being able to top your own discovery of general relativity.

That's not a top-level comment, so it's excluded by my script from this version. I won't manually edit the output, sorry. There's another version where non-top-level comments are kept, too. Your quote is in there:

4Ben Pace
Ah, thank you.

Top quote contributors by statistical significance level:

  • 0.00000 (23.11 in 45): Alejandro1
  • 0.00007 (17.98 in 63): James_Miller
  • 0.00016 (19.02 in 43): Stabilizer
  • 0.00016 (25.25 in 16): dspeyer
  • 0.00020 (18.69 in 45): GabrielDuquette
  • 0.00052 (26.91 in 11): Oscar_Cunningham
  • 0.00142 (24.33 in 12): peter_hurford
  • 0.00183 (50.50 in 2): Delta
  • 0.00252 (68.00 in 1): Solvent
  • 0.00290 (19.35 in 23): Yvain
  • 0.00352 (66.00 in 1): westward
  • 0.00360 (24.78 in 9): Mestroyer
  • 0.00529 (29.20 in 5): michaelkeenan
  • 0.00591 (41.00 in 2): nabeelqu
  • 0.00591 (41.00 in 2): VincentYu
  • 0.0
... (read more)

Top quote contributors by karma score collected in 2014:

  • 369 James_Miller
  • 277 dspeyer
  • 239 Jayson_Virissimo
  • 181 Stabilizer
  • 165 Alejandro1
  • 163 lukeprog
  • 146 arundelo
  • 129 Salemicus
  • 124 johnlawrenceaspden
  • 117 Kaj_Sotala
  • 117 B_For_Bandana
  • 116 NancyLebovitz
  • 110 Pablo_Stafforini
  • 107 Gunnar_Zarncke
  • 100 Eugine_Nier
  • 97 aarongertler
  • 94 shminux
  • 90 Azathoth123
  • 88 EGarrett
  • 84 elharo
  • 81 Benito
  • 79 Torello
  • 74 MattG
  • 74 AspiringRationalist
  • 73 satt
  • 73 JQuinton
  • 73 27chaos
  • 67 Tyrrell_McAllister
  • 66 Vulture
  • 65 Cyan
  • 62 michaelkeenan
  • 60 WalterL
  • 60 Ixiel
  • 58 jaime2000
  • 58 [deleted]
  • 57
... (read more)

Top quote contributors by total (2009-2014) karma score collected:

  • 1394 RichardKennaway
  • 1133 James_Miller
  • 1040 Alejandro1
  • 1037 [deleted]
  • 978 gwern
  • 971 Jayson_Virissimo
  • 847 lukeprog
  • 846 Eugine_Nier
  • 841 GabrielDuquette
  • 827 Eliezer_Yudkowsky
  • 818 Stabilizer
  • 775 Rain
  • 750 MichaelGR
  • 734 NancyLebovitz
  • 628 Konkvistador
  • 590 anonym
  • 521 CronoDAS
  • 479 arundelo
  • 445 Yvain
  • 434 RobinZ
  • 431 Kaj_Sotala
  • 404 dspeyer
  • 372 Alicorn
  • 357 Grognor
  • 353 Vaniver
  • 347 Tesseract
  • 332 shminux
  • 328 DSimon
  • 296 Oscar_Cunningham
  • 296 billswift
  • 293 Pablo_Stafforini
  • 292 peter_hurford
  • 284 Nominull
  • 277
... (read more)
3DanielVarga
Top quote contributors by statistical significance level:

  • 0.00000 (23.11 in 45): Alejandro1
  • 0.00007 (17.98 in 63): James_Miller
  • 0.00016 (19.02 in 43): Stabilizer
  • 0.00016 (25.25 in 16): dspeyer
  • 0.00020 (18.69 in 45): GabrielDuquette
  • 0.00052 (26.91 in 11): Oscar_Cunningham
  • 0.00142 (24.33 in 12): peter_hurford
  • 0.00183 (50.50 in 2): Delta
  • 0.00252 (68.00 in 1): Solvent
  • 0.00290 (19.35 in 23): Yvain
  • 0.00352 (66.00 in 1): westward
  • 0.00360 (24.78 in 9): Mestroyer
  • 0.00529 (29.20 in 5): michaelkeenan
  • 0.00591 (41.00 in 2): nabeelqu
  • 0.00591 (41.00 in 2): VincentYu
  • 0.00604 (60.00 in 1): RomeoStevens
  • 0.00719 (24.00 in 8): philh
  • 0.00725 (19.28 in 18): Tesseract
  • 0.00780 (57.00 in 1): Zando
  • 0.00820 (39.00 in 2): sediment
  • 0.00830 (23.62 in 8): Qiaochu_Yuan
  • 0.00871 (23.50 in 8): Maniakes
  • 0.00993 (32.00 in 3): benelliott
  • 0.01012 (15.17 in 64): Jayson_Virissimo
  • 0.01226 (26.00 in 5): Ezekiel
  • 0.01359 (49.00 in 1): Liron
  • 0.01627 (23.67 in 6): AspiringRationalist
  • 0.01711 (45.00 in 1): Mycroft65536
  • 0.01816 (34.00 in 2): summerstay
  • 0.02114 (43.00 in 1): bentarm
  • 0.02134 (16.58 in 26): Kaj_Sotala
  • 0.02265 (42.00 in 1): Andy_McKenzie
  • 0.02600 (22.17 in 6): ShardPhoenix
  • 0.03044 (30.50 in 2): gRR
  • 0.03200 (24.00 in 4): Particleman
  • 0.03435 (18.25 in 12): MinibearRex
  • 0.03523 (37.00 in 1): andreas
  • 0.03875 (36.00 in 1): NoisyEmpire
  • 0.03876 (16.23 in 22): Grognor
  • 0.04292 (28.00 in 2): roystgnr
5DanielVarga
Top quote contributors by karma score collected in 2014:

  • 369 James_Miller
  • 277 dspeyer
  • 239 Jayson_Virissimo
  • 181 Stabilizer
  • 165 Alejandro1
  • 163 lukeprog
  • 146 arundelo
  • 129 Salemicus
  • 124 johnlawrenceaspden
  • 117 Kaj_Sotala
  • 117 B_For_Bandana
  • 116 NancyLebovitz
  • 110 Pablo_Stafforini
  • 107 Gunnar_Zarncke
  • 100 Eugine_Nier
  • 97 aarongertler
  • 94 shminux
  • 90 Azathoth123
  • 88 EGarrett
  • 84 elharo
  • 81 Benito
  • 79 Torello
  • 74 MattG
  • 74 AspiringRationalist
  • 73 satt
  • 73 JQuinton
  • 73 27chaos
  • 67 Tyrrell_McAllister
  • 66 Vulture
  • 65 Cyan
  • 62 michaelkeenan
  • 60 WalterL
  • 60 Ixiel
  • 58 jaime2000
  • 58 [deleted]
  • 57 Zubon
  • 55 Jack_LaSota
  • 55 CronoDAS
  • 52 Vaniver
  • 52 hairyfigment

Top original authors by karma collected:

  • 894 Graham
  • 603 Russell
  • 534 Pratchett
  • 489 Chesterton
  • 475 Feynman
  • 372 Dennett
  • 343 Franklin
  • 340 Munroe
  • 306 Aaronson
  • 294 Newton
  • 282 Einstein
  • 279 Nietzsche
  • 270 Pinker
  • 262 Friedman
  • 252 Shaw
  • 249 Egan
  • 240 Bacon
  • 239 Stephenson
  • 236 Aristotle
  • 235 Taleb
  • 228 Heinlein
  • 209 Kahneman
  • 201 Silver
  • 196 McArdle
  • 187 Sagan
  • 184 Voltaire
  • 183 Wilson
  • 183 Darwin
  • 182 Plato
  • 177 SMBC
  • 173 Buffett
  • 171 Milton
  • 165 Mencken
  • 162 Moldbug
  • 160 Wittgenstein
  • 160 Johnson
  • 160 Hofstadter
  • 158 Asimov
  • 156 Dawkins
  • 154 Winston
  • 147 Godin
  • 145 Marcus
  • 141 Wong
  • 140 Confu
... (read more)

Top original authors by number of quotes. (Note that authors and mentions are not disambiguated.)

  • Graham 47
  • Feynman 47
  • Russell 40
  • Taleb 39
  • Chesterton 37
  • Pratchett 35
  • Einstein 30
  • Dennett 29
  • Nietzsche 26
  • Aaronson 23
  • Heinlein 22
  • Johnson 21
  • Bacon 21
  • Shaw 19
  • Newton 19
  • Franklin 19
  • Wilson 18
  • Darwin 18
  • Kahneman 17
  • Wittgenstein 15
  • Munroe 15
  • Dawkins 15
  • Stephenson 14
  • Sowell 14
  • Silver 14
  • Pinker 14
  • Meier 14
  • Asimov 14
  • Aristotle 14
  • Sagan 13
  • Moldbug 13
  • Eliezer 13
  • Churchill 13
  • Voltaire 12
  • Minsky 12
  • Mencken 12
  • Maynard 12
  • Locke 12
  • Egan 12
  • Clark 12
  • SMBC 11
  • Plato 11
  • Orwell 11
... (read more)
3DanielVarga
Top original authors by karma collected:

  • 894 Graham
  • 603 Russell
  • 534 Pratchett
  • 489 Chesterton
  • 475 Feynman
  • 372 Dennett
  • 343 Franklin
  • 340 Munroe
  • 306 Aaronson
  • 294 Newton
  • 282 Einstein
  • 279 Nietzsche
  • 270 Pinker
  • 262 Friedman
  • 252 Shaw
  • 249 Egan
  • 240 Bacon
  • 239 Stephenson
  • 236 Aristotle
  • 235 Taleb
  • 228 Heinlein
  • 209 Kahneman
  • 201 Silver
  • 196 McArdle
  • 187 Sagan
  • 184 Voltaire
  • 183 Wilson
  • 183 Darwin
  • 182 Plato
  • 177 SMBC
  • 173 Buffett
  • 171 Milton
  • 165 Mencken
  • 162 Moldbug
  • 160 Wittgenstein
  • 160 Johnson
  • 160 Hofstadter
  • 158 Asimov
  • 156 Dawkins
  • 154 Winston
  • 147 Godin
  • 145 Marcus
  • 141 Wong
  • 140 Confucius
  • 136 Descartes
  • 133 Brandon
  • 130 Orwell
  • 129 Nielsen
  • 127 Hayden
  • 127 Georg
  • 123 Minsky
  • 123 Maynard
  • 123 Bakker
  • 121 Sowell
  • 121 Razib
  • 119 Hanson
  • 117 Kaas
  • 117 Churchill
  • 116 Vulcan
  • 112 Obama
  • 111 Jaynes
  • 107 Keynes
  • 106 Tao
  • 106 Hume
  • 102 Greene
  • 102 Deutsch
  • 101 Saul
  • 100 Screwtape
  • 98 Lessing
  • 98 Christoph
  • 98 Botton
  • 97 Watson
  • 97 Carroll
  • 96 Rollins
  • 96 Marx
  • 96 Kurt
  • 96 Isn
  • 96 Harris
  • 96 Bostrom
  • 94 Santa
  • 94 Morris
  • 93 Shera
  • 93 Neumann
  • 93 Holmes
  • 93 Gawande
  • 93 Dann
  • 92 Vonnegut
  • 92 Locke
  • 92 Futurama
  • 91 Adamek
  • 90 Hoffer

Top short quotes (2009-2014) by karma per character:

  • 60 A Bet is a Tax on Bullshit --Alex Tabarrok
  • 45 Luck is statistics taken personally. --Penn Jillette
  • 35 Comic Quote Minus 37 --Ryan Armand (Also a favourite.)
  • 42 I've got to start listening to those quiet, nagging doubts. --Calvin
  • 34 Nobody is smart enough to be wrong all the time. --Ken Wilber
  • 51 I will not procrastinate regarding any ritual granting immortality. --Evil Overlord List #230
  • 34 A problem well stated is a problem half solved. --Charles Kettering
  • 26 "I accidentally changed my mind." --my four-year-ol
... (read more)

Nice. If we analyze the game using Vitalik's 2x2 payoff matrix, defection is a dominant strategy. But now I see that's not how game theorists would use this phrase. They would work with the full 99-dimensional matrix, and there defection is not a dominant strategy, because as you say, it's a bad strategy if we know that 49 other people are cooperating, and 49 other people are defecting.

There's a sleight of hand going on in Vitalik's analysis, and it is located at the phrase "regardless of one’s epistemic beliefs [one is better off defecting]". I... (read more)

3vbuterin
So, I did not forget about that particular case. In my particular brand of cryptoeconomic analysis, I try to decompose cooperation incentives into three types:

  1. Incentives generated by the protocol
  2. Altruism
  3. Incentives arising from the desire to have the protocol succeed because one has a stake in it

I often group (2) and (3) into one category, "altruism-prime", but here we can separate them. The important point is that category 1 incentives are always present as long as the protocol specifies them, category 2 incentives are always present, but the size of category 3 incentives is proportional to the "probability of being pivotal" of each node - essentially, the probability that the node actually is in a situation where its activity will determine the outcome of the game.

Note that I do not consider 49/50 Nash equilibria realistic; in real massively multiplayer games, the level of confusion, asynchronicity, trembling hands/irrational players, bounded rationality, etc, is such that I think it's impossible for such a finely targeted equilibrium to maintain itself (this is also the primary keystone of my case against standard and dominant assurance contracts). Hence why I prefer to think of the probability distribution on the number of players that will play a particular strategy and from there the probability of a single node being pivotal.

In the case of cryptoeconomic consensus protocols, I consider it desirable to achieve a hard bound of the form "the attacker must spend capital of at least C/k" where C is the amount of capital invested by all participants in the network and k is some constant. Since we cannot prove that the probability of being pivotal will be above any particular 1/k, I generally prefer to assume that it is simply zero (ie, the ideal environment of an infinite number of nodes of zero size). In this environment, my usage of "dominant strategy" is indeed fully correct. However, in cases where hostile parties are involved, I assume th

I don't know too much about decision theory, but I was thinking about it a bit more, and for me, the end result so far was that "dominant strategy" is just a flawed concept.

If the agents behave superrationally, they do not care about the dominant strategy, and they are safe from this attack. And the "super" in superrational is pretty misleading, because it suggests some extra-human capabilities, but in this particular case it is so easy to see through the whole ruse, one has to be pretty dumb not to behave superrationally. (That is, not... (read more)

They're running on the blockchain, which slows them down.

They can follow the advice of any off-the-blockchain computational process if that is to their advantage. They can even audit this advice, so that they don't lose their autonomy. For example, Probabilistically Checkable Proofs are designed for exactly that setup: a slow system that has to cooperate with an untrusted but faster party. There's the obvious NP case, where the answer by Merlin (the AI) can be easily verified by Arthur (the blockchain). But the classic IP=PSPACE result says that this kind of coop... (read more)
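A concrete toy version of this slow-verifier/fast-prover setup (my illustration, not from the comment) is Freivalds' algorithm: the verifier checks a claimed matrix product with a few cheap matrix-vector multiplications (O(n^2) per trial) instead of recomputing the O(n^3) product itself:

```python
import random

random.seed(1)

def freivalds_check(A, B, C, trials=20):
    """Randomized check that C == A*B, using only matrix-vector products
    (O(n^2) per trial) instead of recomputing A*B in O(n^3)."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # the claimed product is definitely wrong
    return True  # correct with probability at least 1 - 2**(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
honest = [[19, 22], [43, 50]]    # the true product A*B
tampered = [[19, 22], [43, 51]]  # one corrupted entry
```

Here the honest answer passes while the tampered one is rejected with overwhelming probability, which is the flavor of guarantee a slow blockchain would want when auditing a fast untrusted advisor.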

8skeptical_lurker
An unusual feature of an AI of this form is its speed - while the off-the-blockchain subprocesses can run at normal speed, IIRC the blockchain itself is optimistically going to have a block time of 12 seconds. This means you couldn't have a realtime conversation with the AI as a whole, nor could it drive a car for instance, although a subprocess might be able to complete these tasks. Overall, it would perhaps be more like a superintelligent ant colony.

An advanced DAO (decentralized/distributed autonomous organization), the way Vitalik imagines it, is a pretty believable candidate for an uncontrolled seed AI, so I'm not sure Eliezer and co share Vitalik's apparent enthusiasm regarding the convergence of these two sets of ideas.

7somnicule
I don't think so. They're running on the blockchain, which slows them down. The primary decision-making mechanisms for them are going to basically be the same as can be used for existing organizations, like democracy, prediction markets, etc. Unless you think your bank or government is going to become a seed AI, there's not that much more to DAOs.

I was unsurprised but very disappointed when it turned out there are no other posts tagged one_mans_vicious_circle_is_another_mans_successive_approximation. But Shalizi has already used the joke once in his lecture notes on Expectation Maximization.

Tononi gives a very interesting (weird?) reply: Why Scott should stare at a blank wall and reconsider (or, the conscious grid), where he accepts the very unintuitive conclusion that an empty square grid is conscious according to his theory. (Scott's phrasing: "[Tononi] doesn’t “bite the bullet” so much as devour a bullet hoagie with mustard.") Here is Scott's reply to the reply:

Giulio Tononi and Me: A Phi-nal Exchange

I have no problem with an arbitrary border. I wouldn't even have a problem with, for example, old people gradually shrinking in size to zero just to make the image more aesthetically pleasing.

Wow, I'd love to see some piece of art depicting that pink worm vine.

2summerstay
I assumed that was the intention of the writers of Donnie Darko. The actual shapes coming out of their chests we got were not right, but you could see this is what they were trying to do.
0chaosmage
It gets complicated if you do not draw an arbitrary border where matter becomes part of your body and where it ceases to do so.

Can you ask the second doctor to examine you to at least the same standard as the first one?

Unfortunately, no. See my answer to Lumifer.

What he proposed is in fact laser iridotomy, although they called it laser iridectomy.

It was less than a disagreement. I'm sorry that I over-emphasized this point. The first time the pressure was 26/18 mmHg, the second time 19/17. The second doctor said that the pressure can fluctuate, and her equipment is not enough to settle the question. (She is an I-don't-know-the-correct-term national health service doctor; the first one is an expensive private doctor with better equipment and more time per patient.)

4Lumifer
My recommendation for more independent opinions (or, actually, more measurements) stands.

My eye doctor diagnosed closed-angle glaucoma, and recommends an iridectomy. I think he might be a bit too trigger-happy, so I followed up with another doctor, and she didn't find the glaucoma. She carefully noted that the first diagnosis could still be correct, since the first examination was more complete.

Any insights about the pros and cons of iridectomy?

4[anonymous]
Is there a family history of this? If so that would skew my assessment towards that of the first doctor. If not, seriously another opinion...
4Lumifer
My impression is that glaucoma (which is, basically, too high intraocular pressure) is easy to diagnose. Two doctors disagreeing on it would worry me. Don't get just a third independent opinion, get a fourth one as well.
0polymathwannabe
Laser iridotomy appears to be less risky: http://www.surgeryencyclopedia.com/La-Pa/Laser-Iridotomy.html http://www.surgeryencyclopedia.com/Fi-La/Iridectomy.html
3Pfft
Can you ask the second doctor to examine you to at least the same standard as the first one? Maybe someone on Less Wrong who has access to UpToDate can send you a copy of their glaucoma page, for an authoritative list of pros and cons.
Shmi140

Get a third independent opinion.

Yes. To be exact, not all capitalized words, but all capitalized words that my English spellchecker does not recognize. With all capitalized words the list would start like this:

  • 1523 I
  • 1327 The
  • 558 It
  • 428 If
  • 379 But

Of course the spellchecking method is itself a source of errors. In previous years I never felt like manually correcting these, but checking now, it seems these were the main victims:

  • Graham 43
  • Bacon 20
  • Newton 18
  • Franklin 18
  • Shaw 17
  • Silver 12
  • Pinker 10

Graham is actually number one. I added them to this list, and also to the "Top ori... (read more)
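The filtering method described above can be sketched as follows (the dictionary and sample text are stand-ins, not the script's actual data):

```python
import re
from collections import Counter

# Stand-in dictionary; the actual script consulted a real English
# spellchecker word list (all data here is illustrative).
DICTIONARY = {"i", "the", "it", "if", "but", "quote", "is", "from", "said", "liked"}

def author_candidates(text):
    """Count capitalized words that the dictionary does not recognize."""
    words = re.findall(r"[A-Za-z]+", text)
    unknown_caps = (w for w in words
                    if w[0].isupper() and w.lower() not in DICTIONARY)
    return Counter(unknown_caps).most_common()

sample = "The quote is from Feynman. Feynman said it. I liked it."
print(author_candidates(sample))  # [('Feynman', 2)]
```

This also makes the failure mode visible: any author name that happens to be an ordinary English word (Graham, Bacon, Newton, Shaw, ...) is filtered out along with "The" and "I".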

4shokwave
You know that feeling you get when you're coding, and you write something poorly and briefly expect it to Do What You Mean, before being abruptly corrected by the output? I think I just had that feeling at long distance.

Those numbers are also there, in this child comment. I edited the comment to make it clear.

You are #2 by karma collected from 2009 to 2013, not just in 2013. You earned an average of 8.20 karma points from 5 quotes in 2013, and an average of 11.05 karma points from 81 quotes in total, which corresponds to a p-value near 0.5 in my statistical test.

0gwern
Oh, these are all cumulative lifetime total karma scores...? I thought these numbers were just for 2013.

Top short quotes (2009-2013) by karma per character:

  • 55 A Bet is a Tax on Bullshit --Alex Tabarrok
  • 45 Luck is statistics taken personally. --Penn Jillette
  • 42 I've got to start listening to those quiet, nagging doubts. --Calvin
  • 33 Comic Quote Minus 37 --Ryan Armand (Also a favourite.)
  • 34 Nobody is smart enough to be wrong all the time. --Ken Wilber
  • 32 A problem well stated is a problem half solved. --Charles Kettering
  • 48 I will not procrastinate regarding any ritual granting immortality. --Evil Overlord List #230
  • 29 The greatest weariness comes from work not d
... (read more)
0fortyeridania
The one from Carnap ("Anything you can do, I can do meta") might not really be from Carnap. Can anyone find a source besides this one, which only gets it back to 1991?
5christopherj
This post made me consider using spaced repetition software.

Top original authors by karma collected:

  • 800 Graham
  • 564 Russell
  • 434 Chesterton
  • 428 Pratchett
  • 395 Feynman
  • 268 Franklin
  • 265 Dennett
  • 255 Friedman
  • 238 Newton
  • 238 Aaronson
  • 236 Munroe
  • 234 Nietzsche
  • 231 Egan
  • 229 Shaw
  • 210 Heinlein
  • 209 Aristotle
  • 201 Bacon
  • 193 Einstein
  • 183 Wilson
  • 183 Sagan
  • 175 Plato
  • 172 Voltaire
  • 172 Stephenson
  • 170 Pinker
  • 169 Darwin
  • 163 SMBC
  • 163 Kahneman
  • 160 Silver
  • 151 Hofstadter
  • 150 Asimov
  • 149 Mencken
  • 149 Dawkins
  • 144 Moldbug
  • 144 Godin
  • 142 Johnson
  • 136 Wong
  • 133 Buffett
  • 125 Descartes
  • 122 Orwell
  • 121 Taleb
  • 119 Bakker
  • 118 Maynard
  • 114 Minsky
  • 114 Hanson
  • 10
... (read more)

Top original authors by number of quotes. (Note that authors and mentions are not disambiguated.)

  • Graham 43
  • Russell 41
  • Feynman 39
  • Pratchett 30
  • Chesterton 29
  • Einstein 27
  • Nietzsche 25
  • Heinlein 23
  • Dennett 22
  • Johnson 20
  • Bacon 20
  • Wilson 19
  • Newton 18
  • Franklin 18
  • Aaronson 18
  • Shaw 17
  • Darwin 17
  • Taleb 16
  • Dawkins 16
  • Voltaire 14
  • Kahneman 14
  • Wittgenstein 13
  • Sowell 13
  • Munroe 13
  • Aristotle 13
  • Silver 12
  • Meier 12
  • Maynard 12
  • Hume 12
  • Asimov 12
  • Stephenson 11
  • Sagan 11
  • Plato 11
  • Orwell 11
  • Moldbug 11
  • Mencken 11
  • Locke 11
  • Huxley 11
  • Hoffer 11
  • Egan 11
  • SMBC 10
  • Pinker 10
  • Peirce 10
  • Neum
... (read more)
4chaosmage
I do not recognize any names of women in this.
2ChristianKl
16 times Taleb and 13 times Nassim. What's happening here, is there another Nassim?
6DanielVarga
Top original authors by karma collected:

  • 800 Graham
  • 564 Russell
  • 434 Chesterton
  • 428 Pratchett
  • 395 Feynman
  • 268 Franklin
  • 265 Dennett
  • 255 Friedman
  • 238 Newton
  • 238 Aaronson
  • 236 Munroe
  • 234 Nietzsche
  • 231 Egan
  • 229 Shaw
  • 210 Heinlein
  • 209 Aristotle
  • 201 Bacon
  • 193 Einstein
  • 183 Wilson
  • 183 Sagan
  • 175 Plato
  • 172 Voltaire
  • 172 Stephenson
  • 170 Pinker
  • 169 Darwin
  • 163 SMBC
  • 163 Kahneman
  • 160 Silver
  • 151 Hofstadter
  • 150 Asimov
  • 149 Mencken
  • 149 Dawkins
  • 144 Moldbug
  • 144 Godin
  • 142 Johnson
  • 136 Wong
  • 133 Buffett
  • 125 Descartes
  • 122 Orwell
  • 121 Taleb
  • 119 Bakker
  • 118 Maynard
  • 114 Minsky
  • 114 Hanson
  • 109 Hume
  • 106 Sowell
  • 102 Keynes
  • 98 Deutsch
  • 97 Churchill
  • 94 Lichtenberg
  • 91 Dijkstra
  • 90 Jaynes
  • 90 Hoffer
  • 89 Marx
  • 89 Holmes
  • 88 Wittgenstein
  • 87 Neumann
  • 87 Harris
  • 85 Jefferson
  • 79 Huxley
  • 76 Leibniz
  • 73 Wilde
  • 72 Locke
  • 70 Mitchell
  • 65 Meier
  • 62 Peirce
  • 61 Munger
  • 58 Clark
  • 57 Gould
  • 54 Aurelius
  • 48 Babbage
  • 47 Medawar
  • 46 Crowley
  • 44 Diogenes
  • 41 Carlyle
  • 40 Yudkowsky
  • 35 Turing
  • 34 Schopenhauer
  • 28 Rochefoucauld
  • 28 Goethe
  • 27 Thoreau

Top quote contributors of 2013 by statistical significance level:

  • 0.00091 (61.00 in 2): gotdistractedbythe
  • 0.00235 (34.60 in 5): philh
  • 0.00511 (31.80 in 5): Mestroyer
  • 0.00695 (21.21 in 19): James_Miller
  • 0.00882 (55.00 in 1): westward
  • 0.00882 (55.00 in 1): Zando
  • 0.01319 (21.00 in 16): Stabilizer
  • 0.01365 (30.00 in 4): Kaj_Sotala
  • 0.01471 (52.00 in 1): VincentYu
  • 0.01558 (36.50 in 2): sediment
  • 0.01923 (21.58 in 12): Alejandro1
  • 0.02115 (30.33 in 3): Particleman
  • 0.02344 (23.12 in 8): Qiaochu_Yuan
  • 0.02491 (33.50 in 2): MinibearRex
  • 0.04559 (37.00 in 1): nabeelqu
  • 0
... (read more)
2[anonymous]
I posted 2 rationality quotes that year, but the one from May attracted a surprising number of upvotes. It seems a little unfair that Anna Salamon's checklist of rationality habits, one of the top five best HTML documents I've found ever, has only a couple points more. Other users had already opined that karma is a bit broken as a measure, but that was when I started to alieve it. It's amused me to see other users want to submit posts and ask for free karma, because LW is already handing out free karma pretty much

Top quote contributors by karma score collected in 2013:

  • 512 Eugine_Nier
  • 403 James_Miller
  • 336 Stabilizer
  • 259 Alejandro1
  • 208 jsbennett86
  • 197 GabrielDuquette
  • 195 Vaniver
  • 185 Qiaochu_Yuan
  • 180 shminux
  • 175 lukeprog
  • 173 philh
  • 165 RolfAndreassen
  • 159 Mestroyer
  • 149 Pablo_Stafforini
  • 141 NancyLebovitz
  • 140 Eliezer_Yudkowsky
  • 133 Zubon
  • 133 Jayson_Virissimo
  • 122 gotdistractedbythe
  • 120 Kaj_Sotala
  • 118 JQuinton
  • 118 BT_Uytya
  • 117 dspeyer
  • 114 cody-bryce
  • 112 satt
  • 92 ShardPhoenix
  • 91 Particleman
  • 84 katydee
  • 82 elharo
  • 78 snafoo
  • 74 Cthulhoo
  • 74 Benito
  • 73 sediment
  • 72 arundelo
  • 71 tingr
... (read more)

Top quote contributors by total (2009-2013) karma score collected:

  • 1283 RichardKennaway
  • 895 gwern
  • 843 Alejandro1
  • 815 GabrielDuquette
  • 777 Eliezer_Yudkowsky
  • 753 James_Miller
  • 751 Eugine_Nier
  • 735 Rain
  • 715 MichaelGR
  • 662 Jayson_Virissimo
  • 660 lukeprog
  • 619 Stabilizer
  • 599 NancyLebovitz
  • 585 Konkvistador
  • 572 anonym
  • 436 CronoDAS
  • 415 RobinZ
  • 408 Yvain
  • 358 Alicorn
  • 350 Grognor
  • 342 Tesseract
  • 316 arundelo
  • 309 Kaj_Sotala
  • 304 DSimon
  • 300 Vaniver
  • 285 Oscar_Cunningham
  • 283 peter_hurford
  • 270 Nominull
  • 270 [deleted]
  • 258 billswift
  • 245 Thomas
  • 244 katydee
  • 240 shminux
  • 240 jsbennett86
  • 2
... (read more)
2gwern
I am a little chagrined that though I am #2 by total karma, I have only 2 in the bests list. Seems I need to be a little more selective in the future.
3DanielVarga
Top quote contributors of 2013 by statistical significance level:

  • 0.00091 (61.00 in 2): gotdistractedbythe
  • 0.00235 (34.60 in 5): philh
  • 0.00511 (31.80 in 5): Mestroyer
  • 0.00695 (21.21 in 19): James_Miller
  • 0.00882 (55.00 in 1): westward
  • 0.00882 (55.00 in 1): Zando
  • 0.01319 (21.00 in 16): Stabilizer
  • 0.01365 (30.00 in 4): Kaj_Sotala
  • 0.01471 (52.00 in 1): VincentYu
  • 0.01558 (36.50 in 2): sediment
  • 0.01923 (21.58 in 12): Alejandro1
  • 0.02115 (30.33 in 3): Particleman
  • 0.02344 (23.12 in 8): Qiaochu_Yuan
  • 0.02491 (33.50 in 2): MinibearRex
  • 0.04559 (37.00 in 1): nabeelqu
  • 0.05000 (36.00 in 1): andreas
  • 0.05000 (36.00 in 1): NoisyEmpire
  • 0.06794 (23.00 in 4): ShardPhoenix
  • 0.08824 (32.00 in 1): David_Gerard
  • 0.08824 (32.00 in 1): Dentin
  • 0.09853 (31.00 in 1): HungryHippo
  • 0.10441 (30.00 in 1): ciphergoth
  • 0.11119 (19.50 in 6): dspeyer
  • 0.11176 (29.00 in 1): roystgnr
  • 0.11176 (29.00 in 1): Turgurth
  • 0.11242 (17.91 in 11): GabrielDuquette
  • 0.12794 (27.00 in 1): JonMcGuire
  • 0.12794 (27.00 in 1): XerxesPraelor
  • 0.13156 (17.33 in 12): jsbennett86
  • 0.14339 (20.67 in 3): Nomad
  • 0.14559 (26.00 in 1): Creutzer
  • 0.14559 (26.00 in 1): curiousepic
  • 0.14559 (26.00 in 1): etotheipi
  • 0.14636 (22.00 in 2): Will_Newsome
  • 0.14978 (19.50 in 4): snafoo
  • 0.16765 (25.00 in 1): BlueSun
  • 0.16765 (25.00 in 1): Carwajalca
  • 0.16765 (25.00 in 1): pewpewlasergun
  • 0.16765 (25.00 in 1): Rubix
  • 0.16765 (25.00 in 1): SatvikBeri
2DanielVarga
Top quote contributors by karma score collected in 2013:

  • 512 Eugine_Nier
  • 403 James_Miller
  • 336 Stabilizer
  • 259 Alejandro1
  • 208 jsbennett86
  • 197 GabrielDuquette
  • 195 Vaniver
  • 185 Qiaochu_Yuan
  • 180 shminux
  • 175 lukeprog
  • 173 philh
  • 165 RolfAndreassen
  • 159 Mestroyer
  • 149 Pablo_Stafforini
  • 141 NancyLebovitz
  • 140 Eliezer_Yudkowsky
  • 133 Zubon
  • 133 Jayson_Virissimo
  • 122 gotdistractedbythe
  • 120 Kaj_Sotala
  • 118 JQuinton
  • 118 BT_Uytya
  • 117 dspeyer
  • 114 cody-bryce
  • 112 satt
  • 92 ShardPhoenix
  • 91 Particleman
  • 84 katydee
  • 82 elharo
  • 78 snafoo
  • 74 Cthulhoo
  • 74 Benito
  • 73 sediment
  • 72 arundelo
  • 71 tingram
  • 67 MinibearRex
  • 63 pjeby
  • 62 Nomad
  • 62 CronoDAS
  • 60 RichardKennaway

Is this the latest open thread? Generally, how do I find the latest open thread? The tag does not help.

4Douglas_Knight
Click on the words "latest open thread" in the sidebar (use your browser's search). The tag works if you reach the open thread via discussion, so that the word discussion appears in the URL, but not if you reach it in some other ways, like from going through an individual's recent comments. (I think that there may be some delay in updating these two sources, but they are both up to date as I write this, only two hours after the new open thread. That thread is two hours younger than your comment, perhaps its trigger.)

Amusingly, Google Chrome autofill still remembered my answers from last year. This made filling in the demographic part a bit faster, and allowed a little game: after giving a probability estimate I could check my answer from a year ago.

The smaller thing could be a human, too. Giant, good looking but creepy child holding small vulnerable human in one hand, looking at it emotionlessly. But MIRI will not like this version, because they really want to avoid anthropomorphizing the AI.

I fully agree with this point, and I fully agree with Page's goals. But I think there are things here that a simple total-years-of-potential-life-lost framework cannot capture. As you might have guessed even from my first comment, this issue is very personal to me. Not long ago a good friend died after terrible suffering, leaving three young children behind. That's very sad, and I really don't know for what values of N this could be balanced in a utilitarian sense by lengthening the healthy old age of N of my friends by 10 years. Obviously, such trade-offs are taboo, but even if I try to force myself into some detached outside view, I still believe that number N must be large.

4Vaniver
Agreed that not all years are equal, and the impacts of early deaths can be large. For measuring N, I wonder how much of this is availability bias. I know a number of old people who have outlived half of their friends, as well as seeing a few friends lost early in life. My weak suspicion is that they would favor a lower N, just by more familiarity with death due to old age and a better idea of what old age deaths do to the culture / friend groups / family.

“Are people really focused on the right things? One of the things I thought was amazing is that if you solve cancer, you’d add about three years to people’s average life expectancy,” Page said. “We think of solving cancer as this huge thing that’ll totally change the world. But when you really take a step back and look at it, yeah, there are many, many tragic cases of cancer, and it’s very, very sad, but in the aggregate, it’s not as big an advance as you might think.” (Larry Page as quoted in the Time article)

This is something like the ecological falla... (read more)

Vaniver200

Looking at the individual level, most of us had close friends who had lost 30 years of potential life.

Suppose you could either extend the life of one close friend by 30 years, or the lives of all of your friends by 10 years. (Hopefully you have more than three friends.) Page is pointing out that the second could possibly be on the table, but it wouldn't be obvious because we're so used to treating rare serious diseases instead of making everyone a bit healthier or live a bit longer on the margins.

Yosarian2120

Anyway, the only reason that we lose "only 3 years" to cancer is that something else is going to kill us not long after if cancer doesn't. However, if we were able to prevent all other forms of death, cancer would still kill all of us eventually.

I tend to think of curing cancer as not just "adding 3 years of life", but as a small but vital part of developing extreme medical (that is, organic) longevity.
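A toy competing-risks calculation illustrates both halves of this point (my own construction, with made-up constant hazards, not from the thread): removing a small hazard adds only a couple of years while a larger hazard remains, yet the small hazard alone would still kill everyone eventually.

```python
# Toy model: constant (exponential) hazards, so life expectancy is 1 / total hazard.
# The hazard values below are illustrative assumptions, not real mortality data.
h_cancer = 0.005   # per-year hazard of dying from cancer
h_other = 0.045    # per-year hazard of dying from everything else

life_exp_both = 1 / (h_cancer + h_other)    # 20 years with both hazards active
life_exp_no_cancer = 1 / h_other            # ~22.2 years: curing cancer adds only ~2
life_exp_only_cancer = 1 / h_cancer         # 200 years: much longer, but still finite

print(life_exp_both, life_exp_no_cancer, life_exp_only_cancer)
```

In this toy model curing cancer buys roughly two extra years, in the same ballpark as Page's "about three years", while a world with only cancer left still has a finite life expectancy.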

I am not a physicist, but this Stack Exchange answer seems to disagree with your assessment: What are the primary obstacles to solve the many-body problem in quantum mechanics?

1leplen
This is sort of true. The fact that it turns into the n-body problem prevents us from being able to do quantum mechanics analytically. Once we're stuck doing it numerically, then all the issues of sampling density of the wave function et al. crop up, and they make it very difficult to solve numerically. Thanks for pointing this out. These numerical difficulties are also a big part of the problem, albeit less accessible to people who aren't comfortable with the concept of high-dimensional Hilbert spaces. A friend of mine had a really nice write-up in his thesis on this difficulty. I'll see if I can dig it up.

I did exactly that after looking at this thread, and only spotted your comment when I wanted to post the results.

I skipped some obvious refinements as this was a 5 minute project.

  • 55 A Bet is a Tax on Bullshit. -- Alex Tabarrok
  • 45 Luck is statistics taken personally. -- Penn Jillette
  • 33 Comic Quote Minus 37 -- Ryan Armand. (Also a favourite.)
  • 34 Nobody is smart enough to be wrong all the time. -- Ken Wilber
  • 32 A problem well stated is a problem half solved. -- Charles Kettering
  • 48 I will not procrastinate regarding any ritual granting immortality. -- Evil Overlord List
... (read more)
1A1987dM
The ones I find most T-shirtable are “Most haystacks do not even have a needle”, “Things are only impossible until they're not”, “The truth will set you free. But first, it will piss you off”, “Reality is not optional”, and “The best way to escape from a problem is to solve it.” (Note how the first two send apparently contradictory messages. The next two seem to be variations on the Litany of Gendlin; they remind me of something I've read somewhere: “Deal with reality or reality will deal with you.”)

unrestricted Turing test passing should be sufficient unto FOOM

I tend to agree, but I have to note the surface similarity with Hofstadter's disproved "No, I'm bored with chess. Let's talk about poetry." prediction.

4gjm
Consider first of all a machine that can pass an "AI-focused Turing test", by which I mean convincing one of the AI team that built it that it's a human being with a comparable level of AI expertise. I suggest that such a machine is almost certainly "sufficient unto FOOM", if the judge in the test is allowed to go into enough detail.

An ordinary Turing test doesn't require the machine to imitate an AI expert but merely a human being. So for a "merely" Turing-passing AI not to be "sufficient unto FOOM" (at least as I understand that term) what's needed is that there should be a big gap between making a machine that successfully imitates an ordinary human being, and making a machine that successfully imitates an AI expert.

It seems unlikely that there's a very big gap architecturally between human AI experts and ordinary humans. So, to get a machine that passes an ordinary Turing test but isn't close to being FOOM-ready, it seems like what's needed is a way of passing an ordinary Turing test that works very differently from actual human thinking, and doesn't "scale up" to harder problems like the ordinary human architecture apparently does. Given that some machines have been quite successful in stupidly-crippled pseudo-Turing tests like the Loebner contest, I suppose this can't be entirely ruled out, but it feels much harder to believe than a "narrow" chess-playing AI was even at the time of Hofstadter's prediction.

Still, I think there might be room for the following definition: the strong Turing test consists of having your machine grilled by several judges, with different domains of expertise, each of whom gets to specify in broad terms (ahead of time) what sort of human being the machine is supposed to imitate. So then the machine might need to be able to convince competent physicists that it's a physicist, competent literary critics that it's a novelist, civil rights activists that it's a black person who's suffered from racial discrimination, etc.

I was trying to position the paper in terms of LW opinions, because my target audience were LW readers. (That's also the reason I mentioned the tangential Eliezer reference.) It's beneath my dignity to list all the different philosophical questions where my opinion is different from LW consensus, so let's just say that I used the term as a convenient reference point rather than a creed.

-4IlyaShpitser
Why?

If it really has only finitely many utility levels, then for a sufficiently small epsilon and some even smaller delta, it will not care whether it ends up in Hell with probability epsilon or probability delta.
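A toy numerical sketch of this claim (my own construction; the 100-level quantization and the specific utilities are arbitrary assumptions): an agent that rounds expected utility to finitely many levels assigns the same level to a lottery with Hell-probability epsilon and to one with a much smaller delta, so it is indifferent between them.

```python
# Hypothetical agent that recognises only finitely many utility levels.
U_NORMAL, U_HELL = 1.0, -1000.0
LEVELS = 100  # finitely many recognised levels spanning [U_HELL, U_NORMAL]

def utility_level(p_hell):
    # Expected utility of a lottery with probability p_hell of Hell.
    eu = (1 - p_hell) * U_NORMAL + p_hell * U_HELL
    # Quantize: map the expected utility to one of LEVELS discrete bins.
    span = U_NORMAL - U_HELL
    return round((eu - U_HELL) / span * (LEVELS - 1))

epsilon, delta = 1e-6, 1e-9
# Both lotteries land in the same bin, so the agent cannot prefer one.
print(utility_level(epsilon), utility_level(delta))
```

The probabilities only start to matter once their difference moves the expected utility across a bin boundary, which is exactly the insensitivity described above.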

6Paul Crowley
That's if they only recognise finitely many expected utility levels. However, such an agent is not VNM-rational.

I removed the broken index.html, sorry. Now you can see the whole (messy) directory. The README is actually a list of commands with some comments, the source code consists of parse.py and convolution.py.

When I stated that the middle is roughly exponential, this was the graph that I was looking at:

d <- density(karma)

plot(log(d$y) ~ d$x)

I don't do this for a living, so I am not at all sure, but if I really had to make this formal, I would probably use maximum likelihood to fit an exponential distribution on the relevant interval, and then run a Kolmogorov-Smirnov test. It's what shminux said, except there is probably no closed formula, because the cutoffs complicate things. And at least one of the cutoffs is really necessary, because below 3 it is obviously not exponential.
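For concreteness, here is a rough Python sketch of that procedure on synthetic stand-in data (my own construction; the 3/60 cutoffs follow the discussion above, and the upper cutoff is simply ignored in the fit, which is exactly the kind of shortcut a careful treatment would have to avoid):

```python
# Rough sketch of "ML-fit an exponential on the middle range, then KS-test".
# Synthetic stand-in data; cutoffs 3 and 60 follow the discussion above.
# Caveats: the upper cutoff is ignored in the fit, and KS p-values are
# optimistic when the parameters were estimated from the same sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
karma = rng.exponential(scale=12.0, size=2000) + 3   # fake karma-like scores

middle = karma[(karma >= 3) & (karma <= 60)]         # the "roughly exponential" range
shifted = middle - 3                                 # put the lower cutoff at 0

loc, scale = stats.expon.fit(shifted, floc=0)        # MLE scale is the sample mean
stat, pvalue = stats.kstest(shifted, "expon", args=(0, scale))
print(scale, stat, pvalue)
```

With the cutoffs taken seriously, the likelihood of a doubly truncated exponential has no closed-form maximizer, which is the complication mentioned above.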

I am afraid I don't understand your methodology. What is a rank-versus-value plot supposed to look like for an exponentially distributed sample?
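(For reference, a quick simulation of what such a plot looks like for exponential data — my own aside, not a description of gwern's method: the k-th largest of n exponential draws sits near scale * ln(n/k), so value against log-rank is close to a straight line.)

```python
# Sketch: for an exponential sample, sorted values vs log(rank) are nearly linear.
import numpy as np

rng = np.random.default_rng(1)
n, scale = 10_000, 12.0
values = np.sort(rng.exponential(scale, size=n))[::-1]   # descending order
ranks = np.arange(1, n + 1)
approx = scale * np.log(n / ranks)                       # expected k-th largest value

corr = np.corrcoef(values[100:], approx[100:])[0, 1]     # skip the noisy extreme tail
print(corr)
```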

0gwern
How else would you do it?

It is roughly exponential in the range between 3 and 60 karma.

You can find the raw data here.

Edit: I didn't spot gwern's more careful analysis. I am still digesting it. gwern, you should use the above link; it contains the below-10 quotes, too.

0gwern
The extra data doesn't seem to make much difference:

R> karma <- read.table("http://people.mokk.bme.hu/~daniel/rationality_quotes_2012/scores")
R> karma <- sort(karma$V2)
R> summary(karma)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   -8.0     4.0     8.0    10.7    15.0   105.0
...
Nonlinear regression model
  model: y ~ exp(a + b * x)
  data: temp
         a        b
  -0.01088  0.00134
residual sum-of-squares: 22772
Number of iterations to convergence: 7
Achieved convergence tolerance: 3.59e-06

Eyeballing it, looks like the previous fit crosses around 40.

R> karma <- karma[karma<40]
...
Nonlinear regression model
  model: y ~ exp(a + b * x)
  data: temp
         a        b
  -0.01088  0.00134
residual sum-of-squares: 22772
Number of iterations to convergence: 7
Achieved convergence tolerance: 3.59e-06

The fit looks much better: