So if you're giving examples and you don't know how many to use, use three.
I'm not sure I follow. Could you give a couple more examples of when to use this heuristic?
Seems I'm late to the party, but if anyone is still looking at this, here's another color contrast illusion that made the rounds on the internet some time back.
For anyone who hasn't seen it before, knowing that it's a color contrast illusion, can you guess what's going on?
Major hint, in rot-13: Gurer ner bayl guerr pbybef va gur vzntr.
Full answer: Gur "oyhr" naq "terra" nernf ner gur fnzr funqr bs plna. Lrf, frevbhfyl.
The image was created by Professor Akiyoshi Kitaoka, an incredibly prolific source of crazy visual perception illusions.
Commenting in response to the edit...
I took the Wired quiz earlier but didn't actually fill in the poll at the time. Sorry about that. I've done so now.
Remarks: I scored a 27 on the quiz, but couldn't honestly check any of the four diagnostic criteria. I lack many distinctive autism-spectrum characteristics (possibly to the extent of being on the other side of baseline), but have a distinctly introverted/antisocial disposition.
A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week's Finds. The most recent of these mentions topics such as using icosahedra to solve quintic equations, an isomorphism between processes in chemistry, electronics, thermodynamics, and other domains, described in terms of category theory, and some speculation about applications of category-theoretical constructs to physics.
Which is all well and ...
Ah, true, I didn't think of that, or rather didn't think to generalize the gravitational case.
Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.
Similarly, my quick calculation, given an escape velocity high enough to walk and an object 10 meters in diameter, was about 7 * 10^9 kg/m^3. That's roughly the density of electron-degenerate matter; I'm pretty sure nothing will hold together at that density without substantial outside pressure, and since we're excluding gravitational compression here I don't think that's likely.
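For anyone who wants to check the arithmetic, here's a sketch of that calculation. The escape velocity figure is an assumption on my part (roughly human sprinting speed); it happens to reproduce the stated density:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
v_esc = 10.0    # m/s: assumed "fast enough to walk/run without escaping"
r = 5.0         # m: radius of a 10 m diameter body

# v_esc = sqrt(2 G M / r), with M = rho * (4/3) * pi * r^3,
# rearranges to rho = 3 * v_esc^2 / (8 * pi * G * r^2)
rho = 3 * v_esc**2 / (8 * math.pi * G * r**2)
print(f"{rho:.1e}")  # on the order of 7e9 kg/m^3
```

With a slower walking pace for v_esc the density drops by an order of magnitude or two, but it stays absurdly far beyond anything ordinary matter can sustain.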
Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.
I don't think you'd be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so at the surface will make a negligible difference in gravitational force, so we're talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (and hence about 70% of escape velocity, since orbital velocity is escape velocity divided by sqrt(2)). So for most practical purposes, the boots would be in orbit as well, along with most of the moon's surface.
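A quick sanity check on the orbital/escape ratio: since v_orb = sqrt(GM/r) and v_esc = sqrt(2GM/r), the ratio is 1/sqrt(2) for any mass and radius. The Moon-like numbers below are just illustrative:

```python
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M = 7.35e22                        # kg: roughly the Moon's mass (illustrative)
r = 1.74e6                         # m: roughly the Moon's radius

v_orb = math.sqrt(G * M / r)       # circular orbital velocity at the surface
v_esc = math.sqrt(2 * G * M / r)   # escape velocity

print(v_orb / v_esc)               # 1/sqrt(2) ~= 0.707, independent of M and r
```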
Of course, since the centrifugal force at ...
It's an interesting idea, with some intuitive appeal. Also reminds me of a science fiction novel I read as a kid, the title of which currently escapes me, so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though--I guess one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any other justification.
The example being race/intelligence correlation? Assuming any genetic basis for intelligence whatsoever, for there to be absolutely no correlation at all with race (or any distinct subpopulation, rather) would be quite unexpected, and I note Yvain discussed the example only in terms as uselessly general as the trivial case.
Arguments involving the magnitude of differences, singling out specific subpopulations, or comparing genetic effects with other factors seem to quickly end up with people grinding various political axes, but Yvain didn't really go there.
The laws of physics are the rules, without which we couldn't play the game. They make it hard for any one player to win.
Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:
...At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.
I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, "And therefore such-and-such is true."
"Why is that?" th
Since when has being "good enough" been a prerequisite for loving something (or someone)? In this world, that's a quick route to a dismal life indeed.
There's the old saying in the USA: "My country, right or wrong; if right, to be kept right; and if wrong, to be set right." The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken--the danger lies not...
Really, does it actually matter that something isn't a magic bullet? Either the cost/benefit balance is good enough to warrant doing something, or it isn't. Perhaps taw is overstating the case, and certainly there are other causes of akrasia, but someone giving disproportionate attention to a plausible hypothesis isn't really evidence against that hypothesis, especially one supported by multiple scientific studies.
From what I can see, there's more than sufficient evidence to warrant serious consideration for something like the following propositions:
I thought the mathematical terms went something like this:
It's said that "ignorance is bliss", but that doesn't mean knowledge is misery!
I recall studies showing that major positive/negative events in people's lives don't really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.
Which is fair enough I suppose, but it sounds bizarrely optimistic to me. We're talking about a time span a thousand times longer than the current age of the universe. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.
It's a reasonable point, if one considers "eventual cessation of thought due to thermodynamic equilibrium" to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?
A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).
Also, few ways are more effective at discovering flaws in an idea than to begin explaining it to someone else; the greatest error will inevitably spring to mind at precisely the moment when it is most socially embarrassing to admit it.
My interpretation was to read "value" as roughly meaning "subjective utility", which indeed does not, in general, have a meaningful exchange rate with money.
You know, this really calls for a cartoon-y cliche "light bulb turning on" appearing over byrnema's head.
It's interesting the little connections that are so hard to make but seem simple in retrospect. I give it a day or so before you start having trouble remembering what it was like to not see that idea, and a week or so until it seems like the most obvious, natural concept in the world (which you'll be unable to explain clearly to anyone who doesn't get it, of course).
SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, doing at most a handful of the exercises that don't immediately make you yawn just by looking at them.
Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.
Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...
Sorry for the late reply; I don't have much time for LW these days, sadly.
Based on some of your comments, perhaps I'm operating under a different definition of group vs. individual rationality? If uncoordinated individuals making locally optimal choices would lead to a suboptimal global outcome, and this is generally known to the group, then they must act to rationally solve the coordination problem, not merely fall back to non-coordination. A bunch of people unanimously playing D in the prisoner's dilemma are clearly not, in any coherent sense, rationally...
Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)
And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
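For anyone who wants to play with the encoding without spoiling the subtraction puzzle, here's a sketch in Python of zero, successor, and addition (conversion back to an ordinary integer just counts applications of f):

```python
# Church numerals: the number n is a function that applies f to z, n times.
zero = lambda f: lambda z: z
suc = lambda n: lambda f: lambda z: f(n(f)(z))
add = lambda m: lambda n: lambda f: lambda z: m(f)(n(f)(z))

# Convert back to a Python int by counting applications of f.
to_int = lambda n: n(lambda x: x + 1)(0)

two = suc(suc(zero))
three = suc(two)
print(to_int(add(two)(three)))  # → 5
```

Successor and addition fall out almost for free; predecessor (and hence subtraction) is the part that takes the better part of a day.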
It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation.
It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.
Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing ...
I must add that many of the objections I have to using C++ also apply to C, though the complexity-based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.
Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time wi...
I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.
"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay
C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.
C+...
C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though, I don't think that's really what XiXiDu is looking for.
Dijkstra's quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn't actually a bad language at all. On the other hand, it also lacks much of the "easy to pick up and experiment with" aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.
Well, they're computer sciencey, but they are definitely geared toward approaching the subject from the programming, even "von Neumann machine", side, rather than from Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.
Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!
I'm inclined to agree with your actual point here, but it might help to be clearer on the distinction between "a group of idealized, albeit bounded, rationalists" as opposed to "a group of painfully biased actual humans who are trying to be rational", i.e., us.
Most of the potential conflicts between your four forms of rationality apply only to the latter case--which is not to say we should ignore them, quite the opposite in fact. So, to avoid distractions about how hypothetical true rationalists should always agree and whatnot, it may be helpful to make explicit that what you're proposing is a kludge to work around systematic human irrationality, not a universal principle of rationality.
All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".
Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra.
Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.
Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.
More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.
adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about.
I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It h...
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
Interesting article, but the title is slightly misleading. What he seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning a subject, but I rather doubt they'd learn any more in school than by themselves.
Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.
Due to not being an appropriately-credentialed expert, I expect. The article does mention that he got a very negative reaction from a doctor.
Scraping in just under the deadline courtesy of a helpful reminder, I've donated a modest amount (anonymously, to the general fund). Cheers, folks.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals.
Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g., extreme hyperbolic discounting or cognitive impairment. In other words, when someone's expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
If they really do know wha...
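As an aside, the hyperbolic-discounting failure mode mentioned above is easy to illustrate numerically. The discount rate and dollar amounts below are invented purely for illustration:

```python
def hyperbolic(amount, delay_days, k=1.0):
    """Hyperbolic discounting: perceived value = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# Faced with $100 now vs. $110 tomorrow, the hyperbolic discounter
# takes the smaller-sooner reward...
assert hyperbolic(100, 0) > hyperbolic(110, 1)

# ...but faced with the same choice shifted 30 days out, preference
# reverses to the larger-later reward -- a dynamic inconsistency that
# consistent exponential discounting can never produce.
assert hyperbolic(100, 30) < hyperbolic(110, 31)
```

The reversal is the point: the same person, asked the same question at two different times, gives contradictory answers, which is exactly the sense in which the choice works against their own stated goals.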
presumably you refer to the violation of individuals' rights here - forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?
Out of curiosity, what do you have in mind here as "participate in society"?
That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?
The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to ...
On the other hand, if you look around at the real world it's also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn't really seem much better. "Sure, he may not be aware of the cliff he's about to walk off of, but he chose to walk that way and we shouldn't force...
Linus replies by quoting the Bible, reminding Charlie Brown about the religious significance of the day and thereby guarding against loss of purpose.
Loss of purpose indeed.
...Charlie Brown: Isn't there anyone who knows what Christmas is all about?
Linus: Sure, Charlie Brown, I can tell you what Christmas is all about. Lights, please?
Hear ye the word which the LORD speaketh unto you, O house of Israel:
Thus saith the LORD, Learn not the way of the heathen, and be not dismayed at the signs of heaven; for the heathen are dismayed at them. For the customs of t
If you're actually collecting datapoints, not just using the term semi-metaphorically, it may help to add that I've been diagnosed with (fairly moderate) ADHD; if my experience is representative of anything, it's probably that.
The former category would include not experiencing, or noticing that you're experiencing, 'tiredness', even when your body is acting tired in a way that others would notice (e.g. yawning, stretching, body language).
I'm not sure if this is what you're talking about, but I've long distinguished two aspects of "tiredness". One is the sensation of fatigue, exhaustion, muddled thinking, &c.--physical indicators of "I need sleep now".
The second is the sensation of actually being sleepy, in the sense of reduced energy, body relaxation, ...
This is my experience as well, for the most part.
The only times I recall "going to bed" feeling like a good idea are when I've been so far into exhausted sleep deprivation that base instincts took over and I found myself doing so almost involuntarily.
Even in those cases, my conscious mind was usually confabulating wildly about how I wasn't actually going to sleep, just lying down for a half a moment, not sleeping at all... right up until I pretty much passed out.
It's rather vexing.
For what it's worth, the credit score system makes a lot more sense when you realize it's not about evaluating "this person's ability to repay debt", but rather "expected profit for lending this person money at interest".
Someone who avoids carrying debt (e.g., paying interest) is not a good revenue source any more than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payment with a maximal interest/principal ratio.
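A toy lender's-eye calculation makes the point; every number here is invented for illustration, not taken from any real scoring model:

```python
def expected_profit(avg_balance, apr, p_default, loss_given_default):
    """Toy model: interest revenue minus expected loss from default."""
    return avg_balance * apr - p_default * loss_given_default

# A "transactor" who pays in full each month: near-zero default risk,
# but also zero interest revenue -- a slightly negative expected profit.
transactor = expected_profit(avg_balance=0, apr=0.20,
                             p_default=0.01, loss_given_default=3000)

# A "revolver" carrying a balance: higher default risk, but far more
# interest revenue -- the more profitable customer despite the risk.
revolver = expected_profit(avg_balance=3000, apr=0.20,
                           p_default=0.05, loss_given_default=3000)

print(transactor, revolver)  # → -30.0 450.0
```

On this framing, a score that rewards carrying (and reliably servicing) debt over avoiding debt entirely is behaving exactly as designed.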
This is another one of those Hanson-esque "X is not about X-ing" things.