lukeprog wrote "philosophers are 'spectacularly bad' at understanding that their intuitions are generated by cognitive algorithms." I am pretty confident that minds are physical/chemical systems, and that intuitions are generated by cognitive algorithms. (Furthermore, many of the alternatives I know of are so bizarre that given that such an alternative is the true reality of my universe, the conditional probability that rationality or philosophy is going to do me any good seems to be low.) But philosophy as often practiced values questioning ever...
I'd prefer that their answers about equal responsibility for parenting be consistent with their answers for equal right to be awarded disputed child custody. Holding either consistent position (mothers' parenting presence is essentially special in very important ways that can't generally be replaced by fathers, or mothers and fathers should be treated equally) seems less wrong than opportunistically switching between one position to justify extra parental rights and roles in divorce and the other position to justify equal parental responsibilities and role...
Of course there could well be some exaggeration for dramatic effect there --- as David Friedman likes to say, one should be skeptical of any account which might survive on its literary or entertainment value alone. But it's not any sort of logical impossibility. In Dallas near UTD (whose strong, well-funded chess team contributed some of the strong coffeehouse players) ca. 2002 I was able to play dozens of coffeehouse games against strangers and casual acquaintances. One can also play in tournaments and in open-to-all clubs. Perhaps one could even play grudge matches against people one dislikes. Also, today one can play an enormous number of strangers online, and even in the 1970s people played postal chess.
I don't have enough data to compare such gaming outcomes very well, but I'll pass on something that I thought was funny and perhaps containing enough truth to be thought-provoking (from Aaron Brown's The Poker Face of Wall Street): "National bridge champion and hedge fund manager Josh Parker explained the nuances of serious high school games players to me. The chess player did well in school, had no friends, got 800s on his SATs, and did well at a top college. The poker and backgammon set (one crowd in the 1970s) did badly in school, had tons of frien...
Wei_Dai writes "I wonder if I'm missing something important by not playing chess."
I am a somewhat decent chess player[*] and a reasonable Go player (3 dan, barely, at my last rated tournament a few years ago). If you're inclined to think about cognition itself, and about questions like the value of heuristics and approximations that only work sometimes, such games are great sources of examples. In some cases, the strong players have already been thinking along those lines for more than a century, though using a different vocabulary. E.g., Go concepts ...
Empirically, we have more impressive instrumental rationalists, such as Peter Thiel, Tyler Cowen, and Demis Hassabis, coming from the much smaller field of chess than from the much larger field of math (where I think there's only James Simons). There's also Waitzkin, who seems very interesting. It seems to me that math emphasizes excess rigor and a number of other elements which constitute the instrumental rationality equivalent of anti-epistemology, and possibly also that the way in which it is taught emphasizes learning concepts prior to the questions t...
I wasn't trying to be hard on that kind of collecting, though I was making a distinction. To me, choosing stamps (as opposed to, e.g., butterflies or historical artifacts) as a type specimen suggests that the collecting is largely driven by fashion or sentiment or some other inner or social motive, not because the objects are of interest for piecing together a vast disorderly puzzle found in the outer physical world. Inner and social motives are fine with me, though my motivation in such things tends toward things other than collecting. (E.g., music and Go and chess.)
You wrote "what I chose to do to resolve the matter was to deep dive into three often-raised skeptic arguments using my knowledge of physics as a starting point" and "deliberate misinformation campaigns in the grand tradition of tobacco [etc.]".
Less Wrong is not the place for a comprehensive argument about catastrophic AGW, but I'd like to make a general Less-Wrong-ish point about your analysis here. It is perceptive to notice that millions of dollars are spent on a shoddy PR effort by the other side. It is also perceptive to notice tha...
It has a germ of truth, but I think it's deeply misleading. In particular, it needs some kind of nod to the importance of relevance to everyday life. E.g., it would be more serious to claim "all science is either physics, or the systematizing side of some useful discipline like engineering, or stamp collecting." Pure stamp collecting endeavors have nothing to stop them from veering into the behavior stereotypically associated with modern art or the Sokal hoax. Fields like paleobotany or astronomy (or, indeed, physics itself in near-unobservable l...
It seems to me that once our ancestors' tools got good enough that their reproductive fitness was qualitatively affected by their toolmaking/toolusing capabilities (defining "tools" broadly enough to include things like weapons, fire, and clothing), they were on a steep slippery slope to the present day, so that it would take a dinosaur-killer-level contingent event to get them off it. (Language and such helps a lot too, but as they say, language and a gun will get you more than language alone. :-) Starting to slide down that slope is one kind...
You write "Eliezer made a very interesting claim-- that current hardware is sufficient for AI. Details?"
I don't know what argument Eliezer would've been using to reach that conclusion, but it's the kind of conclusion people typically reach if they do a Fermi estimate. E.g., take some bit of nervous tissue whose function seems to be pretty well understood, like the early visual preprocessing (edge detection, motion detection...) in the retina. Now estimate how much it would cost to build conventional silicon computer hardware performing the same o...
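A toy version of such a Fermi estimate, in code. Every number here is an order-of-magnitude assumption of mine (rough neuron and synapse counts, a rough firing rate), not a measurement, and the "one op per synaptic event" conversion is the crudest assumption of all:

```python
# Fermi sketch: rough computational throughput of a human brain.
# All constants are order-of-magnitude assumptions, not measurements.
neurons = 1e11              # rough human neuron count
synapses_per_neuron = 1e4   # rough average synapse count per neuron
firing_rate_hz = 10.0       # rough average firing rate
ops_per_synapse_event = 1.0 # assume one multiply-add per synaptic event

ops_per_second = (neurons * synapses_per_neuron
                  * firing_rate_hz * ops_per_synapse_event)
print(f"~{ops_per_second:.0e} ops/s")  # ~1e16 ops/s under these assumptions
```

One can then compare that figure to the throughput per dollar of commodity silicon; the interesting point is less the exact answer than that plausible inputs land within a few orders of magnitude of hardware you can buy.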
You write "We haven't evolved a tendency to use Type 2 because we mostly suck at it."
Maybe "type 2" is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it. It seems pretty common in successful search-based solutions to AI problems (like planning, pathing, or adversarial games) to use something analogous to the "type 1" vs. "type 2" split, moving a considerable amount of logic into "type 1"-like endpoint evaluation and/or heuristic hints for the sea...
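As an illustration of that split (a toy of my own construction, not drawn from any particular system): in game-tree search, a cheap static evaluation plays the "type 1" role at the leaves, while the explicit minimax search plays the deliberate "type 2" role on top of it. The "game" below is deliberately trivial, just to show the division of labor:

```python
# Toy illustration of the "type 1" / "type 2" split in adversarial search.
# The game state is a single number; moves shift it by -1, 0, or +1.

def heuristic(state):
    # "type 1": fast, pattern-like endpoint evaluation (here, trivially
    # the state's value).
    return state

def minimax(state, depth, maximizing):
    # "type 2": explicit, deliberate search over future states, composing
    # many cheap "type 1" evaluations at the leaves.
    if depth == 0:
        return heuristic(state)
    children = [state + d for d in (-1, 0, 1)]  # toy move generator
    values = [minimax(c, depth - 1, not maximizing) for c in children]
    return max(values) if maximizing else min(values)

print(minimax(0, 4, True))  # with alternating optimal play, value stays 0
```

Real systems spend most of their engineering effort making the "type 1" leaf evaluation and move-ordering hints good, precisely because the deliberate search is so expensive per node.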
(To summarize my upcoming point in tl;dr form: if you don't find yourself rationalizing "maybe I'm onto the pattern" while your stomach rumbles as you contemplate the upside of getting gummi bears marginally more often, you might be tickling a different variety-seeking mechanism than you think. Nothing wrong with that, but if you want to get really good at optimizing that tickle, detailed knowledge about which mechanism it is might be helpful.)
From time to time when reading technical articles related to effective strategies for artificial agents ...
In my experience, the rational actor model is generally more like a "model" or an "approximation" or sometimes an "emergent behavior" than an "assumption," and people who want us to criticize it as an "assumption" or "dogma" or "faith" or some such thing are seldom being objective.
(If you think this criticism is merely uninformed or based on a deep misunderstanding, then perhaps it would be rational to turn the phrase "the rationality assumption of neoclassical economics" in y...
See also the conversational thread which runs through http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb3 http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kb8 http://lesswrong.com/lw/qm/machs_principle_antiepiphenomenal_physics/kba
Perhaps the root of our disagreement is that you think (?) that the GR field equations constrain their solutions to conform to Mach's principle, while I think they admit many solutions which don't conform to Mach's principle, and furthermore that Vladimir_M is probably correct in his sketch of a family of non-Mach-principle solutions.
EY's article seems pretty clear about claiming not that Mach's principle follows deductively from the equations of GR, but that there's a sufficiently natural fit that we might make an inductive leap from observed regular...
You write "In GR, the very question is nonsense. [0] The universe does not have a position, just relative positions of objects. [1] The universe does not have a velocity, just relative velocities of various objects. [2] The universe does not have an acceleration, just relative accelerations of various objects." This passage incorrectly appeals to GR to lump together three statements that GR doesn't lump together.
See http://en.wikipedia.org/wiki/Inertial_frames_of_reference and note the distinction there between "constant, uniform motion"...
"But the good news is, there is no need! All we need to do to check which is faster, is throw some sample inputs at each and run tests."
"no need"? Sadly, it's hard to use such simple methods as anything like a complete replacement for proofs. As an example which is simultaneously extreme and simple to state, naive quicksort has good expected asymptotic performance, but its (very unlikely) worst-case performance degrades to quadratic time, no better than bubble sort. Thus, if you use quicksort naively (without, e.g., randomizing the input in some way) somewhere wher...
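A quick sketch of that failure mode. This naive quicksort (first element as pivot, no randomization) looks fine on random inputs, but already-sorted input drives it to its quadratic worst case, which casual benchmarking on "typical" data would never reveal:

```python
import random

# Naive quicksort: first element as pivot, no input randomization.
# stats counts comparisons so the two cases can be compared directly.
def quicksort(xs, stats):
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    stats["comparisons"] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, stats) + [pivot] + quicksort(right, stats)

n = 300
random_stats = {"comparisons": 0}
sorted_stats = {"comparisons": 0}
quicksort(random.sample(range(n), n), random_stats)
quicksort(list(range(n)), sorted_stats)  # adversarial: already sorted

# Sorted input costs exactly n*(n-1)/2 comparisons; random input far fewer.
print(random_stats["comparisons"], sorted_stats["comparisons"])
```

The point being that only the worst-case analysis, not the sample-input benchmark, tells you which inputs to fear.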
(two points, one about your invocation of frame-dragging upstream, one elaborating on prase's question...)
point 1: I've never studied the kinds of tensor math that I'd need to use the usual relativistic equations; I only know the special relativistic equations and the symmetry considerations which constrain the general relativistic equations. But it seems to me that special relativity plus symmetry suffice to justify my claim that any reasonable mechanical apparatus you can build for reasonable-sized planets in your example will be practically indistinguis...
Relativity says that as motion becomes very much slower than the speed of light, behavior becomes very similar to Newton's laws. Everyday materials (and planetary systems) and energies give rise to motions very very much slower than the speed of light, so it tends to be very very difficult to tell the difference. For a mechanical experimental design that can be accurately described in a nontechnical blog post and that you could reasonably imagine building for yourself (e.g., a Foucault-style pendulum), the relativistic predictions are very likely to be indist...
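To put a number on "very very much slower": taking Earth's orbital speed (~30 km/s, my illustrative choice as an extreme for mechanical systems) and computing the relativistic time-dilation factor gamma shows how tiny the correction is:

```python
# How small relativistic corrections are at planetary speeds.
c = 299_792_458.0   # speed of light, m/s (exact by definition)
v = 30_000.0        # ~Earth's orbital speed, m/s (illustrative assumption)

beta = v / c
gamma_minus_1 = 1.0 / (1.0 - beta**2) ** 0.5 - 1.0
print(f"beta = {beta:.1e}, gamma - 1 = {gamma_minus_1:.1e}")
# gamma - 1 is of order beta^2 / 2, i.e., a few parts in a billion.
```

So even at the fastest speeds a backyard mechanical experiment could plausibly involve, the deviation from Newtonian predictions is parts-per-billion, far below what a homebuilt apparatus could measure.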
I live in Plano (i.e., for y'all far away, a bit north of Dallas). I might be interested in participating in a meatspace study group arrangement of some sort. I've never done something like this outside of university classes, dunno how it'd work out, except to guess that it probably depends strongly on individual personalities and schedules and such.
I've studied parts of the Jaynes book in the past. Recently I've been studying more specialized machine learning techniques, like support vector machines, but it seems clear that more time spent studying the more general and fundamental stuff would be time well spent in understanding specialized techniques, and the Jaynes book looks like a good candidate for such study.
I would add that it seems common for task difficulty distribution to be skewed in various idiosyncratic ways --- sufficiently common and sufficiently skewed that any uninformed generic intuition about the "noise" distribution is likely to be seriously wrong. E.g., in some fields there's important low-hanging fruit: the first few hours of training and practice might get you 10-30% of the practical benefit of the hundreds of hours of training and practice that would be required to have a comprehensive understanding. In other fields there are large clusters of skills that become easy to learn once you learn some skill that is a shared prerequisite for the entire cluster.