Open Thread: March 2010
We've had these for a year, I'm sure we all know what to do by now.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.
Well, there's still intermittent fasting.
IF would get around
and would also work well with the musings about variability and duration:
(Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)
Where do you get that I thought I was wrong about CR? I'd like to lose weight but I had been aware for a while that the state of evidence on caloric restriction doing the purported job of extending lifespan in mammals was bad.
...huh.
The last thing I remember hearing from you about it was that it looked promising, but that the cognitive side effects made it impractical, so you'd settled on just taking the risk (which would, with that set of beliefs and values, be right in some ways, and wrong in others, and more right than wrong). But, for some reason the search bar doesn't turn up any relevant conversations for "calorie restriction Eliezer" or "caloric restriction Eliezer", so I couldn't actually check my memory. Sorry about that.
How do you introduce your friends to LessWrong?
Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?
Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.
For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of arguments; she later mentioned that she'd even told her mother about it. She also appreciated the concept of confirmation bias. She's started reading LessWrong, but she's not a native English speaker so it's going to be even more difficult than LessWrong already is.
the main hurdle in my experience is getting people over biases that cause them to think that the future is going to look mostly like the present. if you can get people over this then they do a lot of the remaining work for you.
I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down.
Edit: Related to The Lens That Sees Its Flaws.
I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers.
Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people.
If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).
It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.
Pigeons can solve the Monty Hall dilemma (MHD)?
Behind a paywall
But freely available from one of the authors' website.
Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
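For anyone who wants to check the switching payoff directly, here's a quick simulation (my own sketch, not from the paper):

```python
import random

def monty_hall_trial(switch, rng):
    """One round of the Monty Hall game; returns True on a win."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(trials)) / trials

print(win_rate(switch=False))  # stays near 1/3
print(win_rate(switch=True))   # stays near 2/3
```

Switching wins about two-thirds of the time, which is the behavior the pigeons learn faster than we do.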
I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to fish phase isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but shouldn't affect our beliefs about later stages, so we have nothing to fear after all. Am I making a mistake or misunderstanding Bostrom's reasoning?
It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are a more likely culprit now (especially as we could simply have missed post-fish fossils, or they may never have formed).
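The shift can be seen in a toy Bayesian model (the stage names and the 0.01 likelihood are invented for illustration; this is not Bostrom's actual calculation):

```python
# Toy model: the Great Filter sits at exactly one stage of development.
stages = ["abiogenesis", "simple cells", "complex cells",
          "fish-like life", "intelligence", "civilization"]

# Uniform prior over which stage is the filter.
prior = {s: 1 / len(stages) for s in stages}

# Likelihood of the evidence "Mars independently reached fish-like life"
# given the filter's location: if the filter is at or before the fish
# stage, Mars passing it is very unlikely; afterwards, unsurprising.
FISH = stages.index("fish-like life")
def likelihood(stage):
    return 0.01 if stages.index(stage) <= FISH else 1.0

unnorm = {s: prior[s] * likelihood(s) for s in stages}
z = sum(unnorm.values())
posterior = {s: p / z for s, p in unnorm.items()}

for s in stages:
    print(f"{s:15s} prior={prior[s]:.3f} posterior={posterior[s]:.3f}")
```

The posterior mass that used to sit on the pre-fish stages gets redistributed onto the post-fish stages, which is exactly why the find would be bad news.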
Mars dried out a while ago. Finding fossils there would prove very little about the great filter, since they would probably be distant relatives of ours whose planet gave out on them (the solar system is one big melting pot for life). Basically, it is a bad example.
Call for examples
When I posted my case study of an abuse of frequentist statistics, cupholder wrote:
So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.
Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.
Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).
One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.
Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.
Based on those two lucid observations, I'd say you're doing well so far.
There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me.
Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice". Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components.
If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results?
Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.
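As a toy sketch of the Franklin method (the options and weights below are invented for illustration), the idea is just to list signed considerations per option and compare totals:

```python
# Each option gets a list of (consideration, signed weight) pairs.
# Weights are made-up examples; the point is the bookkeeping, not the numbers.
options = {
    "take the job abroad": [("higher salary", +3), ("far from family", -4),
                            ("new experiences", +2)],
    "stay put":            [("stability", +2), ("stagnation", -2)],
}

def score(considerations):
    """Sum the signed weights for one option."""
    return sum(weight for _, weight in considerations)

ranked = sorted(options, key=lambda o: score(options[o]), reverse=True)
for option in ranked:
    print(option, score(options[option]))
```

The exercise of assigning the weights is usually more informative than the final totals.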
Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma...
:)
A few more than five is an oligolemma; many more is a polylemma.
Many more is called perfect competition. :3
It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).
...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little.
I could toss around advice like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to," but it's just a shot in the dark at this point.
A few principles that can help in such cases (major decision, very little direct data):
I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied with a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious feel free to skim the article again.)
Hope that helps.
The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.
I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events.
Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).
I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers).
If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do).
In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving you of so much responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin!
(I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear— thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.)
If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.
I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.
I don't know whether I'm the only person who has this problem, but I think it's worth checking.
"Anti-logical rudeness" strikes me as a good bit better.
It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.
When I was considering a better name for the problem, I couldn't find a word for the process of seeking truth, which is what's actually being derailed by logical rudeness.
Unless I've missed something, the problem with logical rudeness isn't that there's no logical flaw in it.
The fact that I've got 4 karma points suggests (but doesn't prove) that I'm not the only person who has a problem with the term "logical rudeness". I should have been clearer that "anti-logical rudeness" was just an attempt at an improvement, rather than a strong proposal for that particular change.
I think you're complaining about the problem of people not updating on their evidence by using anti-epistemological techniques such as logical rudeness.
I still don't see the need for changing the name, but I'll defer to the opinion of the crowd if need be.
Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.
It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.
I think they are legitimate objections, but ones that have been partially addressed in this community. I take the principal objection to be, "Bayesian rationality can't justify induction." Admittedly true (see for instance Eliezer's take). Albert ignores sophisticated responses (like Robin's) and doesn't make a serious effort to explain why his alternative doesn't have the same problem.
I considered putting that link here in the open thread after I read about it on Marginal Revolution, but I read the paper and found it weak enough to not really be worth a lengthy response.
What annoyed me about it is how Albert's title is "Why Bayesian Rationality Is Empty," and he in multiple places makes cute references to that title (e.g. "The answer is summarized in the paper's title") without qualification.
Then later, in a footnote, he mentions "In this paper, I am only concerned with subjective Bayesianism."
Seems like he should re-title his paper to me. He makes references to other critiques of objective Bayesianism, but doesn't engage them.
Thoughts about intelligence.
My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.
I've been thinking about the problem of general intelligence. Before going too deeply I wanted to see if I had a handle on what intelligence is period.
It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?
Moving alone doesn't count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn't begging the question.
Now suppose the first time I drop the pencil, it falls to the floor. I go to drop it a second time, but over the table. However, the pencil flies around the table and hits the same spot on the floor.
Now it's got my attention. But maybe it's something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.
I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.
I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.
1) The pencil kept going to the same spot as if it had a "goal"
2) The pencil was able to respond to "obstacles" in ways not predicted by my original simple theory of pencil behavior.
I believe that I would say the pencil is more intelligent if it could pass through more "complicated" obstacles.
Here are some of my basic problems
1) What is a "goal" beyond what my intuition says
2) Similarly what is an "obstacle"
3) And what is "complicated"
I have some sense that "obstacle" is related to reducing the probability that the goal will be reached
I have some sense that "complicated" has to do with the degree to which the probability is reduced.
Thoughts? Suggestions for readings?
If you don't mind a slightly mathy article, I thought Legg & Hutter's Universal Intelligence was nice. It talks about machine intelligence, but I believe it applies to all forms of intelligence. It also addresses some of the points you made here.
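For reference, the paper's central definition (from memory, so check the source) measures an agent $\pi$'s universal intelligence as its expected performance summed over all computable reward-bearing environments $\mu$, each weighted down by its Kolmogorov complexity $K(\mu)$:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here $V_\mu^\pi$ is the expected total reward the agent earns in environment $\mu$, so simple environments count for more, but an intelligent agent must do passably well across many of them.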
If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn't consider the pencil intelligent. The behavior observed is not pointing to the pencil in particular being intelligent.
Just my two cents.
I don't know anything about the concept of intelligence being defined as being able to pursue goals through complicated obstacles. If I had to guess at the missing piece it would probably be some form of self-referential goal making. Namely, this takes the form of the word, "want." I want to go to this spot on the floor. I can ignore a goal but it is significantly harder to ignore a want.
At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don't know anything about the field, so this could have already been tried and failed.
Well I would consider the Pencil-MrHen system as intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.
The problem with the self-referential from my perspective is that it presumes a self.
It seems to me that ideas like "I" and "want" graft humanness onto other objects.
So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then by a set of criteria declare that thing to be intelligent.
Sure, that makes perfect sense. I haven't really given this a whole lot of thought; you are getting the fresh start. :)
The self in self-referential isn't implied to be me or you or any form of "I". Whatever source of identity you feel comfortable with can use the term self-referential. In the case of your intelligent pencil, it very well may be the case that the pencil is self-updating in order to achieve what you are calling a goal.
A "want" can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps "goal" would work better in the end.
The main points I am working with:
Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven't really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.
Some food for philosophical thought, an oil drop that "solves" a maze.
TL;DR: it follows a chemical gradient, which alters its surface tension.
I'd read something on the intentional stance.
So if something is capable, contrary to expectations, of achieving a constant state despite varying conditions, it's probably intelligent?
I guess that in space, everything is intelligent.
You are talking about control systems.
A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.
What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.
The answers to your questions are:
A "goal" is the reference input of a control system.
An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.
"Complicated" means "I don't (yet) understand this."
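A minimal sketch of such a loop in Python (the thermostat-like numbers are invented for illustration). The output is driven by the error between reference and perception, and the environment adds a constant disturbance the controller must fight; note that a pure proportional controller like this one settles near the reference but with a steady-state offset:

```python
def simulate(reference=21.0, steps=50, gain=0.5):
    """A minimal proportional control loop.

    perception: the signal coming from the environment
    reference:  the goal the loop defends
    output:     the controller's action on the environment
    """
    perception = 10.0
    disturbance = -2.0   # e.g. heat constantly leaking out of a room
    history = []
    for _ in range(steps):
        error = reference - perception
        output = gain * error               # act on the error only
        perception += output + disturbance  # environment sums both influences
        history.append(perception)
    return history

trace = simulate()
# Settles at reference + disturbance/gain = 21 - 4 = 17: close to the
# goal despite the disturbance, minus the proportional controller's offset.
print(trace[-1])
```

Adding integral action would remove the residual offset, but even this bare loop shows the "goal-seeking despite obstacles" behavior from the pencil thought experiment.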
Suggestions for readings.
And a thought: "Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely."
-- William James, "The Principles of Psychology"
Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to IUCS, but perhaps not.
Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.
The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback, and adaptive modelling of the problem space, in addition to the usual dogged iron filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent, within their own small fields of expertise. I don't personally think we'll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems.
Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.
I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.
I had conceived of something like the Turing test but for intelligence period, not just general intelligence.
I wonder if general intelligence is about the domains under which a control system can perform.
I also wonder whether "minds" is too limiting a criterion for the goals of FAI.
Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something that we can build. Then we press start.
Maybe this is a more general formulation?
I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I was working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I was working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer.
Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.
1. LessWrong, passim.
2. Marcus Hutter's Compression Prize.
3. AIXItl and the Gödel machine.
What programming language should I learn?
As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.
I'm thinking about starting with Processing and Lua. What do you think?
Personally, I'm a big fan of Haskell. It will make your brain hurt, but that's part of the point -- it's very good at easily creating and using mathematically sound abstractions. I'm not a big fan of Lua, though it's a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it's not a bad choice at all.
Thanks, I didn't know about Haskell, sounds great. Open source and all. I think you already convinced me.
I wouldn't recommend Haskell as a first language. I'm a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it's hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I'm not saying it's necessarily a bad idea to start with Haskell, but I think you'd have a much easier time getting started with Python.
Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a "training wheels" language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.
And above all, write code. This should go without saying, but you'd be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.
I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about.
I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It helps that both have excellent learning materials available.
Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some "mainstream" programming and wants to stretch their brain.
You make some good points, but I still disagree with you. For someone who's trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I've always thought that the quickest way to learn programming was to do programming, and until you've been doing it for a while, you won't understand it.
Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra.
Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.
I want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, not merely to mechanically write and parse code to solve certain problems. The big picture and underlying nature is what I'm looking for.
I agree that many people do not understand; they really only learnt how to mechanically use something. How much does the average person know about how one of our simplest tools works, the knife? What does it mean to cut something? What does the act of cutting accomplish? How does it work?
We all know how to use this particular tool. We think it is obvious, thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms for that we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear to be simple and obvious.
Bear in mind that I'm not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren't for beginners.) I'd suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn't cover.
'Follow conversations', indeed. That's what I mean. Being able to grasp concepts that involve 'symbolic computation' and information processing by means of formal language. I don't aim at actively taking part in productive programming. I don't want to become a poet, I want to be able to appreciate poetry, perceive its beauty.
Take English as an example. Only a few years ago I seriously started to learn English. Before I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.
I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book - see below), as is The Art of Computer Programming by Knuth.
I have physically seen these books, but not studied any of them - I'm just communicating a secondhand impression of the conventional wisdom. Weight accordingly.
Merely an engineer? I've failed to acquire a leaving certificate of the lowest kind of school we have here in Germany.
Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.
Yeah, you won't be able to be very productive with bottom-up groundwork. But you'll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you pursue a top-down approach. You'll be able to look into existing code, edit it, and relearn lost knowledge or pick up new knowledge more quickly.
Yeah, C is probably mandatory if you want to be serious about computer programming. Thanks for mentioning Scheme, I hadn't heard of it before...
Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.
Agree with where you place Python, Scheme and Haskell. But I don't recommend C. Don't waste time there until you already know how to program well.
Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.
Thanks, I'll sure get into those languages. But I think I'll just try and see if I can get into Haskell first. I'm intrigued after reading the introduction.
If I get stuck, I'll take the route you mentioned.
What I want is to be able to understand, and attain a more intuitive comprehension of, concepts associated with other fields that I'm interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don't understand that simple statement. As I said, I already know the 'basics' of programming. I thoroughly understand it. Just so you get an idea.
In addition to reading up on all the lesswrong.com sequences, I'm mainly into mathematics and physics right now. That's where I have the biggest deficits. I see my planned 'study' of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a 'Turing machine'.
And I do not agree that the effect is nil. I believe that programming is one of the foundations it is necessary to understand. I believe that there are four cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important: information.
Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that's a pretty good reason to choose it.
Even if you don't have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play wow, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I'd recommend it. If you play with web pages, javascript...
At least that's my advice for one style of learning, a style suggested by your mention of those two languages, but almost opposite from your "Nevertheless, I want to start from the very beginning," which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don't work. That's not to disagree with wnoise about the value of Haskell either practical or educational, but I'm skeptical about it as an introduction.
ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.
Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.
The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.
So you end up with newcomers to Haskell trying to simultaneously:
And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.
But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.
The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
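Since the thread never shows any code, here is a minimal Python sketch of the pattern one of the most common monads (Maybe/Option) captures; the names `bind`, `parse_int`, and `reciprocal` are invented for illustration and not taken from any library:

```python
# A minimal "Maybe"-style sketch: chain computations that may fail,
# short-circuiting on the first None instead of checking at every step.

def bind(value, func):
    """Apply func to value, unless a previous step already failed (None)."""
    return None if value is None else func(value)

def parse_int(s):          # may fail: returns None on bad input
    try:
        return int(s)
    except ValueError:
        return None

def reciprocal(n):         # may fail: zero has no reciprocal
    return None if n == 0 else 1.0 / n

# Chaining with bind replaces nested "if result is None" checks.
print(bind(bind("4", parse_int), reciprocal))     # 0.25
print(bind(bind("oops", parse_int), reciprocal))  # None
```

The point of the sketch is that `bind` is the whole trick: once failure propagation is factored out into one function, chains of fallible steps compose without explicit checks, which is roughly what Haskell's `>>=` does for its `Maybe` type.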
I learnt about Lua through Metaplace, which is now dead. I heard about Processing via Anders Sandberg.
I'm always fascinated by data visualisation. I thought Processing might come in handy.
Thanks for mentioning SICP. I'll check it out.
As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going with this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.
After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.
Just keep in mind that when starting out learning programming, it's probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.
I recommend Haskell (more fun) or Ruby (more mainstream).
In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.
Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.
So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")
I paused after reading this. The main way people learn to program is by writing programs and getting feedback from peers/mentors. If you're not coding something you find interesting, it's hard to stay motivated for long enough to learn the language.
My advice is to learn a language that a lot of people learn as a first language. You'll be able to take advantage of tutorials and support geared toward newbies. You can always learn "cooler" languages later, but if you start with something advanced you might give up in frustration. Common first languages in CS programs are Java and C++, but Python is catching on pretty quickly. It also helps if your first language is used by people you already know. That way they'll be able to mentor/advise you.
Finally, I should give some of my background. I've been writing code for a while. I write code for work and leisure. My first language was QBasic. I moved on to C, C++, TI-BASIC, Perl, PHP, Java, C#, Ruby, and some others. I've played with but don't really know Lisp, Lua, and Haskell. My favorite language right now is Python, but I'm probably still in the honeymoon phase since I've been using it for less than a year.
Argh, see what I said at the start? I recommended Python and my favorite language is currently Python!
Motivation is not my problem these days. It was all through my youth, and is partly the reason I completely failed at school. Now the almost primal fear of staying dumb, and a nagging curiosity to gather knowledge, learn and understand, trump any lack of motivation or boredom. Seeing how far you people here at lesswrong.com are above the average person makes me strive to approximate your wit.
In other words, it's already enough motivation to know the basics of a programming language like Haskell, when average Joe is hardly self-aware but a mere puppet. I don't want to be one of them anymore.
I think the path outlined in ESR's How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn't a bad idea at all. Perl is no longer important; you probably need never learn it.
My first language was, awfully enough, GW-Basic. It had line numbers. I don't recommend anything like it.
My first real programming language was Perl. Perl is... fun. ;)
Those two seem great, Lua in particular seems to match exactly the purpose you describe.
I'd weakly recommend Python, it's free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple html pages ...),is well-designed and has features you'll encounter in other languages (classes, functional programming ...), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.
But a lot of those advantages apply to languages like Ruby.
If you want to go into more exotic languages, I'd suggest Scheme over Haskell, it seems more beginner-friendly to me.
It mostly depends on what occasions you'll have to use it: if you have a website, Javascript might be better; if you like making game mods, go for Lua. It also depends on whom you know who can answer questions. If you have a good friend who's a good teacher and a Java expert, go for Java.
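A tiny, hypothetical example of the sort of small chore mentioned above (extracting data from text), written in Python since several comments recommend it; the log lines are made up:

```python
# Sketch: pull the ERROR lines out of some log text.
import re

log_text = """2010-03-01 INFO start
2010-03-01 ERROR disk full
2010-03-02 ERROR timeout
"""

errors = [line for line in log_text.splitlines()
          if re.search(r"\bERROR\b", line)]
print(len(errors))   # 2
print(errors[0])     # 2010-03-01 ERROR disk full
```

You can paste this straight into the interactive command line mentioned above and experiment with it.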
"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/
Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."
Interesting article, but the title is slightly misleading. What he seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning the subject, but I rather doubt they'd learn any more in school than by themselves.
Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.
I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/
There are several Less Wrongish themes in this arc: Many Worlds, ending suffering via technology, rationality:
"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."
The effect Andrew's text had on me reminded me of how excited I was when I first had read Alan Moore's famous Twilight of the Superheroes. (I'm not sure about how well "Twilight" stands the test of time but see Google or Wikipedia for links to the complete Moore proposal.)
I have two basic questions that I am confused about. This is probably a good place to ask them.
What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.
Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What's the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?
If you truly have no clue, .5 yes and .5 no.
Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
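To make that update concrete, a quick sketch under a purely invented prior over the number of teams, assuming exactly one team wins:

```python
# If a game has n teams and exactly one wins, P(a named team wins) = 1/n.
# Averaging over a made-up prior on n pulls the answer below 0.5.

prior_over_n = {2: 0.4, 3: 0.3, 4: 0.2, 8: 0.1}   # hypothetical distribution

p_yes = sum(p_n / n for n, p_n in prior_over_n.items())
print(round(p_yes, 4))   # 0.3625
```

Any other prior over n (or over the possibility that Strigli isn't playing at all) would shift the number; the direction of the shift is the point.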
But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.
All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).
Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
I was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.
Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.
More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.
Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question.
Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no".
Basically, your prior should be that everything is almost certainly false!
But it is true that you are not on a red couch.
Negation is a one-to-one map between true and false propositions.
Since you can understand the alien's question except for the nouns, presumably you'd be able to tell if there was a "not" in there?
Yes, you have made a convincing argument, I think, that given that a proposition does not involve negation, as in the alien's question, that it is more likely to be false than true. (At least, if you have a prior for being presented with questions that penalize complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.)
ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that's not the point.)
This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.
Well, let's say I ask you whether all "fnynznaqre"s are "nzcuvovna"s. Prior to using rot13 on this question (and hypothesizing that we hadn't had this particular conversation beforehand), would your prior really be as low as your previous comment implies?
(Of course, it should probably still be under 50% for the reference class we're discussing, but not nearly that far under.)
Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all "fnynznaqre"s are "nzcuvovna"s. We're talking about aliens. Come on, now that it's occurred to you, wouldn't you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?
Yes, but I might as easily choose a question whose answer was "Yes" if I thought that a trick question might be too predictable of a strategy.
1/4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can't see why the likelihood of "Yes" should go down - that would generally require more information, not less, about what sort of questions the other is liable to ask.
1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it's not obvious what things should be classified as the alternatives that should be considered equally plausible.
Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?"
2: This depends a lot how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain of the amount of questions that'd need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren't probabilities, the strict answer to this question is "it can't be done", but I'm assuming you mean "before we know with such a certainty that in practice we can say it's for sure".)
Yes, I should be more specific about 2.
So let's say the following are the first three questions you ask and their answers -
Q1. Do you think A is true? A. Yes. Q2. Do you think A=>B is true? A. Yes. Q3. Do you think B is true? A. No.
At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question.
Q4. Do you believe in Modus Ponens?
or in other words,
Q4. Do you think that if A and A=>B are both true then B should also be true?
If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.
Q5. Do you think that if you believe in Modus Ponens and if you also think that A and A=>B are true, then you should also believe that B is true as well?
And I can go on and on...
So the point is, if you think asking all these questions is necessary to decide whether the person is rational or not, then in effect any given person can have any arbitrary set of beliefs and he can still claim to be rational by adding a few extra beliefs to his belief system that say the n^th level of "Modus Ponens is wrong" for some suitably chosen n.
I think that belief in modus ponens is a part of the definition of "rational", at least practically. So Q1 is enough. However, there are not many tortoises among the general public, so this type of question probably isn't very helpful.
This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful.
The consensus of the comments was that the correct answer is .5.
Also of note is Bead Jar Guesses and its sequel.
For #2, can I just have an extended preface that describes a population, an infection rate for some disease, and a test with false positive and false negative rates, and then see if the person gives me the right answer?
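For concreteness, a worked version of that test question in Python, with entirely made-up numbers (1% base rate, 5% false positives, 10% false negatives):

```python
# Bayes' rule: P(disease | positive test) with hypothetical rates.
p_disease = 0.01
p_pos_given_disease = 0.90          # i.e. a 10% false negative rate
p_pos_given_healthy = 0.05          # a 5% false positive rate

p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(round(p_disease_given_pos, 4))   # 0.1538
```

Someone who answers "about 15%" rather than "about 90%" is at least doing the right kind of calculation.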
For #2, I don't see how you could ever be completely sure the other was rationalist or Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up.
In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the "aspiring rationalist" cluster and/or the "Bayesian" cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.
Suppose you're a hacker, and there's some information you want to access. The information is encrypted using a public key scheme (anyone can access the key that encrypts, only one person can access the key that decrypts), but the encryption is of poor quality. Given the encryption key, you can use your laptop to find the corresponding decryption key in about a month of computation.
Through previous hacking, you've found out how the encryption machine works. It has two keys, A and B already generated, and you have access to the encryption keys. However, neither of these keys is currently in use; one month from now, it will randomly choose one of the keys and start using it. You find that, through really complicated and difficult means, you can influence which of the keys the machine chooses, setting the probability to various things.
Needless to say, you might as well start cracking one of the keys now, but if the machine selects the other key, all the time you spent trying to crack the first key ends up being wasted.
Write your expected utility in terms of the probability that the machine chooses key A.
Can people ROT13 their answers so I get a chance to solve this on my own? Or will there be too much math for ROT13 to work well?
It's not a puzzle; it's supposed to make a point.
Oh.
Do we choose the probability p that the machine picks A, or does the machine start with a probability p, which we adjust to a p+q chance that it picks A?
You choose a probability p that the machine picks A. I guess.
Smartass answer: use two computers, one for each of the keys. Computer time is cheap these days. If you don't have two computers, rent computation time from a cloud.
Why would you do that? If one key is more likely than the other, you should devote all your time toward breaking that key.
All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".
Even if there is a high probability of completing both decryptions and the probability the machine chooses A over B is only slightly over .5?
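The linearity behind "devote all your time to the likelier key" can be sketched like this; the value U and the all-or-nothing assumption that a month of work cracks exactly one key are simplifications of the original puzzle, not part of it:

```python
# Expected utility as a function of p = P(machine picks key A),
# assuming a month of work cracks exactly one key, all or nothing.
U = 1.0   # hypothetical value of decrypting the information

def eu_work_on_a(p):
    return p * U            # pays off only if the machine picks A

def eu_work_on_b(p):
    return (1 - p) * U      # pays off only if the machine picks B

def eu_best(p):
    # Both options are linear in p, so the maximum always sits at
    # whichever line is higher: commit fully to the likelier key.
    return max(eu_work_on_a(p), eu_work_on_b(p))

print(eu_best(0.51))   # 0.51
print(eu_best(0.9))    # 0.9
```

Even at p = 0.51 the optimum is a corner solution; splitting effort between the keys only helps under different assumptions (e.g. partial progress carrying over, or the two-computers option suggested above).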
hidden answer
This was in my drafts folder, but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P
Perceived Change
Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards, I cut the deck and continued dealing. This irritated them a great deal, because by altering the order of the deck I ensured that some players would not receive the cards they were supposed to be dealt. One of the friends happened to be majoring in Mathematics and understood probability as well as anyone else at the table. Even he thought what I did was wrong.
I explained that the cut didn't matter because everyone still had the same odds of receiving any particular card from the deck. His retort was that it did matter, because the card he was going to get was now near the middle of the deck. Instead of that particular random card he would get a different particular random card. As such, I should not have cut the deck.
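That claim is easy to check by simulation; a quick Monte Carlo sketch, with card 7 standing in for any particular card:

```python
# Monte Carlo check: cutting a shuffled deck before dealing does not
# change the chance that the next player receives any particular card.
import random

def deal_top_card(cut):
    deck = list(range(52))
    random.shuffle(deck)
    if cut:
        k = random.randrange(1, 52)
        deck = deck[k:] + deck[:k]   # cut the deck at a random point
    return deck[0]                   # the card the next player receives

random.seed(0)
trials = 100_000
hits_no_cut = sum(deal_top_card(False) == 7 for _ in range(trials)) / trials
hits_cut = sum(deal_top_card(True) == 7 for _ in range(trials)) / trials
print(hits_no_cut, hits_cut)   # both come out near 1/52 ≈ 0.0192
```

The simulation only restates the math, of course; the interesting part of the anecdote is that the objection survives the math.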
During the ensuing arguments I found myself constantly presented with the following point: The fact of the game is that he would have received a certain card and now he will receive a different card. Shouldn’t this matter? People seem to hold grudges when someone swaps random chances of an outcome and the swap changes who wins.
The problem with this objection is illustrated if I secretly cut the cards. If they have no reason to believe I cut the deck, they wouldn’t complain. Furthermore, it is completely impossible to perceive the change by studying before and after states of the probabilities. More clearly, if I put the cards under the table and threatened to cut the cards, my friends would have no way of knowing whether or not I cut the deck. This implies that the change itself is not the sole cause of complaint. The change must be accompanied with the knowledge that something was changed.
The big catch is that the change itself isn’t actually necessary at all. If I simply tell my friends that I cut the cards when they were not looking they will be just as upset. They have perceived a change in the situation. In reality, every card is in exactly the same position and they will be dealt what they think they should have been dealt. But now even that has changed. Now they actually think the exact opposite. Even though nothing about the deck has been changed, they now think that the cards being dealt to them are the wrong cards.
What is this? There has to be some label for this, but I don’t know what it is or what the next step in this observation should be. Something is seriously, obviously wrong. What is it?
Edit to add:
The underlying problem here is not that they were worried about me cheating. The specific scenario and the arguments that followed from that scenario were such that cheating wasn't really a valid excuse for their objections.
To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.
No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly.
I initially had a second example as such:
It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.
I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.
Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.
(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)
EDIT: Wow, this turned into a ramble. I didn't have time to proofread it, so I apologize if it doesn't make sense.
Okay, yeah, that makes sense. My instinct is pointing me in the other direction, mainly because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck.
(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)
Any pseudo-random event where (a) people can predict the undisclosed particular random outcome and (b) someone can voluntarily preempt that prediction and change the result tends to elicit the same behavior.
I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:
Granted, there are a handful of obvious holes in this particular story. The list includes:
More stories like this have taught me never to muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for one with an equal chance will get extremely depressed because they actually "had a shot at winning." These people can completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they shouldn't have traded tickets.
People do this all the time with things like when they left for work. Decades ago, my mother-in-law put her sister on a bus, and the sister died when the bus crashed. "What if?" has dogged her ever since. The random chance of that particular bus crashing on that particular day has become associated with her completely independent choice to put her sister on the bus. While the two are mathematically independent, that doesn't change the fact that her choice mattered to her. For some reason, people take this mattering and do things with it that make no sense.
This topic can branch out into really weird places when viewed this way. The classic problem of someone holding 10 people hostage and telling you to kill 1 or all 10 die matches the pattern, with a moral choice instead of random chance. When asked whether it is more moral to kill 1 or let the 10 die, people will argue that refusing to kill an innocent results in 9 more people dying than necessary. The decision matters, and this mattering reflects on the moral value of each choice. Whether this is correct seems to be in debate, and it is only loosely relevant to this particular topic. I am eagerly looking for the eventual answer to the question, "Are these events related?" But to get there I need to understand the simple scenario, which is the one presented by my original comment.
I am having trouble understanding this. Can you say it again with different words?
Have no fear - your comment is clear.
I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint.
Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting.
When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.
Ah, okay. That makes more sense. I am still experimenting with how much predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem.
But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?
When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without [suspicion of] malicious intent breaks a ritual associated with the game.
While in this instance, the ritual doesn't protect the integrity of the game, rituals can be very important in getting into and enjoying activities. Humans are badly wired, and Less Wrong readers work hard to control our irrationalities. One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.
We didn't until the people on TV did it. The ritual was only important in the sense that this is how they were predicting which card they were going to get. Their point was based entirely on the fact that the card they were going to get is not the card they ended up getting.
As a reminder to the ongoing conversation, we had arguments about the topic. They didn't say, "Do it because you are supposed to do it!" They said, "Don't change the card I am supposed to get!"
Sure, but this isn't one of those cases. In this case, they are complaining for no good reason. Well, I guess I haven't found a good reason for their reaction. The consensus in the replies here seems to be that their reaction was wrong.
I am not trying to say you shouldn't enjoy your coffee rituals.
RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection--that you were disrupting the flow of the game.
I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.
I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have gone the other way. However, the same thought process doesn't occur on winning; people aren't inclined to analyze their successes in the same way that they analyze their failures, even if both are random events. The negative emotion associated with feeling like a victory was stolen would be enough to make them preemptively object and prevent that from occurring in the first place.
However, even if what I said above is true, I don't think it really addresses the problem of adjusting their map to match the territory. That's another question entirely.
I agree with your comment and this part especially:
Very true. I see a lot of behavior that matches this. This would be an excellent source of the complaint if it happened after they lost. My friends complained before they even picked up their cards.
Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker.
When I am playing chess, I insist that White moves first. When I find myself playing Black, knowing that my opponent had White the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than make the first move with Black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, the same would go for my opponent.)
Rules are rules in order to be respected. They are often quite arbitrary, but you shouldn't change any arbitrary rule during the game without the prior consent of the others, even if the change provably has no effect on the winning odds.
I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit you could gain from changing the rules, he may suppose that there is one. Maybe you somehow remember that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important rule changes later and wanted a precedent for doing so. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.
I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments.
A summary:
It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?
One more thing of note: They argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered.
Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?
The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated.
(Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)
Why can I override mine? What makes me different from my friends? The answer isn't knowledge of math or probabilities.
Depends, of course, on what exactly you would say and how unpleasant the writing would be for you.
I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters.
EDIT: And also, from your description it seems that you deliberately broke a rule without giving any reason for it. That is suspicious.
This behavior repeats in scenarios where the rules are not being changed, or where there aren't "rules" in the sense of a game and its rules. Those examples are significantly fuzzier, which is why I chose the poker example.
The lottery ticket example is the first that comes to mind.
Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?
But this isn't a rule of the game - it's an implementation issue. The game is the same so long as cards are randomly selected without replacement from a deck of the appropriate sort.
(The first Google hit for "texas hold'em rules" in fact mentions burning cards.)
That the game has the same structure either way is recognized only at a more abstract mental level than the level that the negative reaction comes from; in most people, I suspect the abstract level isn't 'strong enough' here to override the more concrete/non-inferential/sphexish level.
The ideal decision algorithm used in the game remains the same, but people don't look at it this way. It is a rule, since it is how they have learned the game.
An important element of it being fair for you to cut the deck in the middle of dealing, which your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.
I don't think this is relevant. I responded in more detail to RobinZ's comment.
As you note, regular poker and poker with an extra cut mid-deal are completely isomorphic. In a professional game you would obviously care, because the formality of the shuffle and deal are part of a tradition to instill trust that the deck isn't rigged. For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong. We have two different pointers pointing to the same thing, and they are complaining because the pointers aren't the same, even though all that matters is what those pointers point to. It would be like complaining if you tried to change the name of Poker to Wallaboo mid-deal.
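The isomorphism claim is easy to verify numerically. Here is a minimal Monte Carlo sketch (the function name and the choice to check deal position 5 are mine, purely for illustration): a mid-deal cut just rotates the undealt portion of a uniformly shuffled deck, so the chance of any specific card landing at any specific deal position stays 1/52 either way.

```python
import random

def prob_card_at(pos, target=0, cut=False, trials=50_000, seed=0):
    """Estimate P(card `target` is dealt at position `pos`)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        deck = list(range(52))
        rng.shuffle(deck)
        if cut:
            # After two cards have gone out, cut the undealt remainder
            # at a random point (the mid-deal cut being argued about).
            dealt, rest = deck[:2], deck[2:]
            k = rng.randrange(1, len(rest))
            deck = dealt + rest[k:] + rest[:k]
        hits += deck[pos] == target
    return hits / trials

# Position 5 is dealt after the cut; both estimates hover near 1/52 ≈ 0.019.
print(prob_card_at(5, cut=False), prob_card_at(5, cut=True))
```

Since a rotation of a uniformly random sequence is still uniformly random, the two estimates differ only by sampling noise.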
Sure, but the "wrong" in this case couldn't be shown to my friends. They perfectly understood probability. The problem wasn't in the math. So where were they wrong?
Another way of saying this:
The answer has nothing to do with me cheating and has nothing to do with misunderstanding probability. There is some other problem here and I don't know what it is.
There are rules for the game that are perceived as fair.
If one participant goes changing the rules in the middle of the game this 1) makes rule changing acceptable in the game, 2) forces other players to analyze the current (and future changes) to the game to ensure they are fair.
Cutting the deck probably doesn't affect the probability distribution (unless you shuffled the deck in a "funny" way). Allowing it makes a case for allowing the next changes in the rules too. Thus you can end up analyzing a new game rather than having fun playing poker.
To modify RobinZ's hypothesis:
Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so?
Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.
I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question.
Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently?
I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution.
As much as "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why?
I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?
Based on my friends, the care/don't care dichotomy cuts orthogonally to the math/no math dichotomy. Most people, whether good or bad at math, can understand that the chances are the same. It's some other independent aspect of your brain that determines whether it intensely matters to you to do things "the right way" or if you can accept the symmetry of the situation. I hereby nominate some OCD-like explanation. I'd be interested in seeing whether OCD correlated with your friends' behavior.
As a data point, I am not OCD and don't care if you cut the deck.
I am more likely to be considered OCD than any of my friends in the example. I don't care if you cut the deck.
It might be because people perceive a loss more severely than a gain. There might be an evolutionary explanation for that. Because of this, they would perceive the card they "lost," which they had already thought of as theirs, more severely than the card they "gained" after the cut. You, on the other hand, might already be trained to think about it differently.
It's a side effect.
Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. Suppose they had money coming to them and you swooped in and took it away before it could reach them; they would be rational to object, right? That's why those heuristics exist. In practice, the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over into this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.
So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?
I can understand your answer if the scenario was more like:
"Hey! Don't do that!"
"But it doesn't matter. See?"
"Oh. Well, okay. But don't do it anyway because..."
And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities, and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them with math that it wasn't. Something else was the problem.
"Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong?
I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments?
How would you describe this heuristic in a few sentences?
I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic.
Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".
So I'm planning a sequence on luminosity, which I defined in a Mental Crystallography footnote thus:
Since I'm very attached to the word "luminosity" to describe this phenomenon, and I also noticed that people really didn't like the "crystal" metaphor from Mental Crystallography, I would like to poll LW about how to approach the possibility of a "light" metaphor re: luminosity. Karma balancer (linked for when it goes invisible).
Vote this comment up if you think I suck at metaphors and should avoid them like the plague.
Vote this comment up if you think only crystal metaphors in particular suck, while light metaphors are nifty.
Vote this comment up if it's okay to use metaphors but I should tone it way down.
Vote this comment up if you want to revisit the issue after I've actually posted the first luminosity sequence post, to see how it's going then.
I was tempted to add this comment:
But figured it wouldn't be nice to screw with your poll. :)
The point, though, is that I really don't understand the luminosity metaphor based on how you have currently described it. I would guess the following:
Am I close?
Edit: Terminology
The adjective is "luminous", not "luminescent", but yes! Thanks - it's good to get feedback on when I'm not clear. However, the word "luminosity" itself is only sort of metaphorical - it's a technical term I stole and repurposed from a philosophy article. The question is how far I can go with doing things like calling a post "You Are Likely To Be Eaten By A Grue" when decrying the hazards of poor luminosity.
Hm. Interesting, I don't think I ever realized those two words had slightly different meanings.
*Files information under vocab quirks.*
Ok, you just won my vote! ;)
Me too; I'm always fond of references like that one. ;)
My interpretation of your description had been that luminosity is like the bandwidth parameter in kernel density estimation.
Fortunately, my first post in the sequence will be devoted to explaining what luminosity is in meticulous detail. Spoiler: it's not like anything that is described in a Wikipedia article that makes my head swim that badly.
Can you elaborate on this? I suspect it's not what Alicorn was describing, but it may be interesting in its own right.
(For what it's worth, I understood the math in the Wikipedia article.)
Note: in such cases, you need to offer some options that aren't self-deprecating, in case some of your readers liked the crystal metaphors just fine.
(Er, although I personally fall into the category of your third option.)
Some people did like the crystal metaphors just fine, but I wouldn't expect them to tell me to do anything I wouldn't have naturally chosen to do with light metaphors, so their opinions are less informative. (I don't expect them to dislike reduced-metaphor or metaphor-free posts.)
TLDR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.
Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reasons they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviously good to me, but the fact that it didn't occur to much smarter people makes me wary.
Besides that, I also don't expect the idea to be implemented anywhere in this millennium, whether it's good or not.
Anyway, the idea. You have probably heard of people who think vaccines cause autism, post on Rapture Ready forums, or believe the Easter Bunny is real, and grumbled about letting these people vote. Stupid people voting was what the Electoral College was supposed to ameliorate (AFAICT), although I would be much obliged if someone explained how it's supposed to help.
I call my idea republican meritocracy. Under this system, before an election, the government would write a book consisting of:
Then, each citizen who wants to participate in the elections would read this book and take a test based on its contents. The score determines the influence you have on the election.
Admittedly, this will not eliminate all people with stupid ideas, but it might get rid of those who simply don't care, and reduce the influence of not-book-people.
A problem, though, is that literacy is correlated with wealth. Thus, a system that rewards literacy would also favor wealth. So my idea also includes classifying people into equal-sized brackets by wealth, calculating how much influence each one has due to the number of people in it who took the test and their average score, and adjusting the weight of each vote so that each bracket would have the same influence. Thus, although the opinions of deer stuck in headlights would be discounted, the poor, as a group, will still have a voice.
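The bracket adjustment described above can be sketched in a few lines. This is my own hypothetical rendering (the function name and data shapes are assumptions, not part of the proposal): within a bracket, a voter's weight is proportional to their test score, but every bracket's total weight is rescaled to be equal, so high average scores in the wealthy bracket don't buy it extra influence.

```python
from collections import defaultdict

def bracket_weights(voters):
    """voters: list of (wealth_bracket, test_score) pairs.
    Returns one weight per voter such that (a) within a bracket, weight
    is proportional to test score, and (b) every bracket has the same
    total weight, so no bracket dominates via higher average scores."""
    totals = defaultdict(float)
    for bracket, score in voters:
        totals[bracket] += score
    n_brackets = len(totals)
    # Each bracket gets total weight 1/n_brackets, split by score within it.
    return [score / (totals[b] * n_brackets) for b, score in voters]

voters = [("poor", 40), ("poor", 60), ("rich", 90), ("rich", 10)]
w = bracket_weights(voters)
# Both brackets end up with total weight 0.5; all weights sum to 1.
print(w, sum(w))
```

Note how the rich voter who scored 90 still gets the largest individual weight, but the "rich" bracket as a whole carries no more influence than the "poor" one.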
What do you think?
EDIT: ADDRESSED BY EDIT TO ABOVE
Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons. Say I am an uneducated black person living in the segregation era in a southern American state. All I know is one candidate supports passing a civil rights bill on my behalf and the other is a bitter racist. I vote for the non-racist. Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?
On the other hand, I could be capable of answering every question on that test correctly and still believe that the book is a lie and Barack Obama is really a secret Muslim. I can't tell you the number of people I've met who have taken Poli Sci, Econ (even four semesters' worth!), and history, and can recite candidate talking points verbatim, yet are still basically clueless about everything that matters.
"Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons."
So which is it?
"Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?"
Because the civil rights guy has pardoned a convicted slave trader who contributed to his gubernatorial campaign, and the "racist" is the victim of a smear campaign. Because the civil rights guy doesn't grok supply and demand. Because the racist supports giving veterans a pension as soon as they return, and the poor black guy is a decorated war hero.
Uh... both. That is my point. Your voting conditions are neither necessary nor sufficient.
The hypothetical was set in the segregation-era South; maybe this wasn't obvious, but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians who did this). It seems highly plausible that segregationism is a deal-breaker for some voters, and even if this is their only reason for voting, they are justified in their vote. It doesn't seem the least bit implausible that this would trump knowledge of economics, veterans' pensions, or even the other candidate being racist (but not running on a racist platform). But my point is just that it is highly plausible a voter could be justified in their vote while not having anything approaching the kind of knowledge on that exam.
There are lots of single-issue voters. Why, for example, should someone whose only issue is abortion have to know the candidates' other positions AND economics AND history AND political science, etc.?
Edit: And of course your test is going to be especially difficult for certain sets of voters. You're hardly the first person to think of doing this. There used to be a literacy test for voting... surprise, it was just a way of keeping black people out of the polls.
"Your voting conditions are neither necessary nor sufficient."
That's not my goal. I merely want to have an electorate that doesn't elect young-earthers to congress.
"Well the hypothetical was set in segregation era South, but maybe this wasn't obvious, but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians that did this). It seems highly plausible that segregationism is a deal-breaker for some voters and even if this is their only reason for voting they are justified in their vote."
I'm not sure why the examples I gave elicited this response. I gave reasons why even a single-issue voter would be well-advised to know whom ve's voting for. And besides, if an opinion is held only by people who don't understand history, that's a bad sign.
"Edit: And of course your test is going to especially difficult for certain sets of voters."
That's why I made the second modifier. And there could be things other than wealth factored in, if you like - race, sex, reading-related disabilities, being a naturalized citizen...
It seems you edited your comment after I responded, which indeed makes it look like a non-sequitur.
I posted it incomplete by mistake.
What your system actually does is make it less likely that unorganized people with fringe ideas will vote. If there's an organization promoting a fringe idea, it will offer election test coaching to sympathizers.
"What your system actually does is make it less likely that unorganized people with fringe ideas will vote."
Why's that?
Also, the curriculum I gave is the least important part of my idea. I threw in whatever seemed like it would matter for the largest number of issues.
What problem is this trying to address? Caplan's Myth of the Rational Voter makes the case that democracies choose bad policies because the psychological benefit from voting in particular ways (which are systematically biased) far outweigh the expected value of the individual's vote. To the extent that your system reduces the number of people that vote, it seems to me that a carefully designed sortition system would be much less costly, and also sidesteps all sorts of nasty political issues about who designs the test, and public choice issues of special interests wanting to capture government power.
The basic idea of a literacy test isn't really new, and as a matter of fact seems to have still been floating around the U.S. as late as the 1960s.
And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?
Erm, from that link, I understood that "sortition" means "choosing your leaders randomly". Why would I want to do that? Is democracy really worse than random?
"And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?"
Probably because that word doesn't mean what I think it means. I assumed that "republican" means that people like you and me get to influence who gets elected. Which is part of my proposal.
This may be enough reason to dismiss the proposal. If something like this is to exist, the test had better be designed by someone with at least some chance of being impartial in the election.
And how exactly do you plan you keep political biases out of the test? According to your point 2, the voters would be questioned about their opinion in a debate about several policy issues. This doesn't look like a good idea.
The correlation between literacy and wealth seems a minor problem compared to the system's potential for abuse.
And why do you call it a meritocracy?
Say Omega appears to you in the middle of the street one day, and shows you a black box. Omega says there is a ball inside which is colored with a single color. You trust Omega.
He now asks you to guess the color of the ball. What should your probability distribution over colors be? He also asks for probability distributions over other things, like the weight of the ball, the size, etc. How does a Bayesian answer these questions?
Is this question easier to answer if it was your good friend X instead of Omega?
See also.
See http://wiki.lesswrong.com/wiki/I_don%27t_know
I don't know about "should", but my distribution would be something like
red=0.24 blue=0.2 green=0.09 yellow=0.08 brown=0.04 orange=0.03 violet=0.02 white=0.08 black=0.08 grey=0.02 other=0.12
Omega knows everything about human psychology and phrases its questions in a way designed to be understandable to humans, so I'm assigning pretty much the same probabilities as if a human were asking. If it were clear that white, black, and grey are considered colors, their probabilities would be higher.
I am curious as to why brazil84's comment has received so much karma. The way the questions were asked seemed to imply a preconception that there could not possibly be viable alternatives. Maybe it's just because I'm not a native English speaker and read something into it that isn't there, but that doesn't seem to me to be a rationalist mindset. It seemed more like "sarcasm as a stop word" rather than an honest inquiry, let alone an argument.
It seems entirely rational to me to ask what the envisioned alternative is when someone is criticizing something.
The following stuff isn't new, but I still find it fascinating:
Reverse-engineering the Seagull
The Mouse and the Rectangle
Neat!
"Are you a Bayesian of a Frequentist" - video lecture by Michael Jordan
http://videolectures.net/mlss09uk_jordan_bfway/
I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.
The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family), but teachers, parents, and people with an interest in self-improvement will likely benefit the most.
Also, I'd appreciate pointers on how to find out whether the book is being translated into Finnish.
Edit: Fixed markdown and grammar.
I'm no fan of joke religions - even the serious joke religions - but the Church of the SubGenius promoted the idea of the "Short Duration Personal Savior" as a mind-hack. I like that one.
(No opinion on the book - haven't read it.)
TL;DR: Help me go less crazy and I'll give you $100 after six months.
I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.
I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).
One-time tricks to do one important thing are also welcome, but I'd offer less.
What do you do when you aren't doing anything?
EDIT: More questions will follow as you answer these; too many questions at once is too much effort. I am taking you dead seriously, so please don't be offended if I severely underestimate your ability.
I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.