Open Thread June 2010, Part 4
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
This thread brought to you by quantum immortality.
A TED talk: "Laurie Santos: How monkeys mirror human irrationality"
"Why do we make irrational decisions so predictably? Laurie Santos looks for the roots of human irrationality by watching the way our primate relatives make decisions. A clever series of experiments in "monkeynomics" shows that some of the silly choices we make, monkeys make too."
Interesting speech. I wonder whether the monkeys had a safe way to save their tokens, and whether the experiment would play out the same way if it could be done with squirrels.
She implies that the amount of complexity in finance is just there. I agree with Scott Adams that a good bit of complexity is a deliberate effort to confuse people into making bad choices.
If things are complex you may need to meet with a financial advisor - and then they can try to sell you more stuff.
Yet another exchange regarding experts and the "difficulty of explaining" excuse. It's the kind of exchange normally found here, and with LW regulars, so given the subject matter, I thought folks here would be interested if they haven't seen it already.
You appear to be responding to a different point than the one Robin was making in the original post.
Robin's post centers on the "intellectually nutritious" metaphor (and has nothing to do with "difficulty of explaining"). Your reply conflates that with some argument about reverence for particular authorities, which Robin isn't making except insofar as it is implied by his use of the word "classic".
I don't think I am. He says, "You need to read the classics." I say, "No, I just need to know their key insights."
I further say that people who think you need to read a particular classic are typically wrong, as others have assimilated its insights -- and become capable of discussing the related issues -- without having to read it.
How is that not responsive? Where do I make an issue of reverence for authorities?
OK, "reverence for authorities" might a red herring here. Please disregard that and accept a fractional apology; I think my observation still stands.
Robin's saying "the expected value of your reading (something like) a classic is higher than the expected value of equivalent time spent reading (something like) my blog".
He isn't saying "you need to read the classics (and nothing else will do)", in spite of what the title says. You sound as if you're reacting to the title only - and an idiosyncratic reading of it at that.
Your point regarding a specific article - Coase's - may have merit. Some issues you need to consider are:
Another respondent on Robin's blog says "Pfui, blogs have led me to classics". Well, that point doesn't work if all you ever read are blogs, showing precisely how I suspect folks are misunderstanding Robin's point.
What Robin says is that there is a hierarchy of sources of knowledge, not all are worth the same, and it's unwise to spend all your time on secondary or tertiary (etc.) sources that (often) are lesser sources of intellectual nourishment. In short, there's a reason the classics are acknowledged as such.
No, I think I addressed the broader point he was making, not just the title: He's saying, don't just rely on blog posts and blog comment exchanges -- actually read the classic works. This would imply that these blog discussions suffer from lack of appreciation of certain classics that imparted Serious knowledge.
I disputed this diagnosis of the problem. The phenomenon Robin_Hanson describes is more due to experts not understanding their own topics, and not communicating the fruits of these classics. The proper response to this, I contend, is not to wade through classics, hoping to be able to sort the good from the bad. Rather, it's for those who are aware of the classics' insights to understand and present them where applicable.
In other words, not to do what Gene Callahan does in the (corrected) link.
This is why I challenged Robin_Hanson to say what he's doing about it: if people really are stumbling along, unaware of some classic writer's insight on the matter, a work that just completely enlightens and clarifies the debate, what is he doing to make sure these insights are applied to the relevant issue? That is how you establish the worth of classics, by repeated ability to obviate debates that people get into when they aren't familiar with them.
It's true that in reading works that draw from the classics, you have to separate the good from the bad, but you have to do that anyway -- and classics will typically have a lot of bad with the good.
If classics are higher up on the hierarchy, it is specific classics that are known for being completely good, or because their bad parts are known and articulated to the learner in advance. But that requires recommending specific classics, not telling someone to read classics in general.
Keep in mind, you were my example of someone failing to learn the best arguments against gay rights, despite a sincere effort to find them. The experts either didn't understand the arguments, or weren't able to apply them in discussions. How many (additional!) classics would you need to have read to be enlightened about this?
You'd need to spell out more precisely what he's doing that you think deserves criticism.
Interestingly I seem to have read quite a few of the "classics" that come up in that discussion on "what science does". Polanyi's Personal Knowledge, Feyerabend's Against Method, Lakatos' Proofs and Refutations, Kuhn's Structure of Scientific Revolutions. Not Popper however - I've read The Open Society but not his other works.
Given your stance on "explaining" those strike me as good examples of the kind of stuff you might want to have read because that would leave you in a better position to criticize what you're criticizing: less prone to misrepresenting it. (As for me, I'm now investing a lot of time and energy into this "Bayesian" stuff, which definitely is sort of a counterpoint to my prior leanings.)
Exactly what I referred to in the previous paragraph.
Callahan is, supposedly, aware of these classics' insights. Did he present them where applicable? Show evidence he understands them? No. Every time he drops the name of a great author or a classic, he fails to put the argument in his own words, sketch it out, or show its applicability to the arguments under discussion.
For example, he drops the remark that "Polanyi showed that crystallography is an a priori science [in the sense that Austrian economics is]" as if it were conclusively settled. Then, when I explain why this can't possibly be the case, Callahan is unable to provide any further elaboration of why that is (and I couldn't find a reference to it anywhere).
The problem, I contend, is therefore on his end. To the extent that Callahan's list of classics is relevant, and that he is a majestic bearer of this deep, hard-won knowledge, he is unable to actually show how the classics are relevant, and what amazing arguments are presented in them that obviate our discussion. The duty falls on him to make them relevant, not for everyone else to just go out and read everything he has, just because he thinks, in all his gullible wisdom, that it will totally convince us.
Note: I wasn't alone in noticing Callahan's refusal to engage. Another poster remarked:
With regard to the books you mention: from what little I have read about them, they aren't impressive or promising. For example, Feyerabend seems to think he has some great insight that good scientific theories don't have to incorporate the old theory, but rather, they normally make progress by ignoring the old. But he's attacking a strawman: new theories aren't expected to incorporate the old theory, just to be able to make the same predictions. [EDIT: Sorry, original version didn't have the complete sentence.]
Also, people like to make a big deal about how clever Quine's holism argument is, but if you're at all familiar with Bayesianism, you roll your eyes at it. Yes, theories can't be tested in isolation, but Bayesian inference can tell you which beliefs are most strongly weakened by which evidence, showing that you have a basis for saying which theory was, in effect, tested by the observations.
Things like these make me skeptical of those who claim that these philosophers have something worthwhile to say to me about science. I would rather focus on reading the epistemology of those who are actually making real, unfakeable, un-groupthinkable progress, like Sebastian Thrun and Judea Pearl.
I think Lakatos's Proofs and Refutations is a fun book, but the chief thing I learned from it is that mathematical proofs aren't absolutely true, even when there is no error in reasoning. It's about mathematics, not science. It's also quite short, particularly if you skip the second, much more mathematically-involved dialogue.
I learned the opposite: that mathematical proofs can be and should be absolutely true. When they fall short, it is a sign that some confusion still remains in the concepts.
I see no contradiction between these interpretations. :P
You're basically doing the same when you name-drop "a Bayesian revival in the sciences". I've been here for months trying to figure out what the hell people mean by "Bayesian" and frankly feel little the wiser. It's interesting to me, so I keep digging, but clearly explained? Give me a break. :)
I found Polanyi somewhat obscure (all that I could conclude from Personal Knowledge was that I was totally devoid of spiritual knowledge), so I won't defend him. But one point that keeps coming up is that if you look closely, anything that people have so far come up with that purports to be a "methodological rule of science", can be falsified by looking at one scientist or another, doing something that their peers are happy to call perfectly good science, yet violates one part or another of the supposed "methodology".
As an example, being impartial certainly isn't required to do good science; you can start out having a hunch and being damn sure your hunch is correct, and the energy to devise clever ways to turn your hunch into a workable theory lets you succeed where others don't even acknowledge there is a problem to be solved. Semmelweis seems to be a good example of an opinionated scientist. Or maybe Seth Roberts.
What's your take on string theorists? ;)
That's not remotely the same thing -- I wasn't bringing that up as some kind of substantiation for any argument, while Callahan was mentioning the thing about "a priori crystallography" (???) as an argument.
So? I was arguing about what deserves to be called science, not what happens to be called science. And yes, people practice "ideal science" imperfectly, but that's no evidence against the validity of the ideal, any more than it's a criticism of circles that no one ever uses a perfect one. Furthermore, every time someone points to one of these counterexamples, it happens to be at best a strawman view. Like what you do here:
The claim isn't that you have to be impartial, but that you must adhere to a method that will filter out your partiality. That is, there has to be something that can distinguish your method from groupthink, from decreeing something true merely because you have a gentleman's agreement not to contradict it.
Perhaps we're actually on the same page there. I don't think Robin was saying "read classics in general", so much as "go and spend some quality time with what you'd think is a truly awesome classic". If he had been saying "go and spend time reading classics just because they had the 'classic' label stamped on them" I'd also disagree with him.
One issue is that judgments of "intellectually nutritious" vary from person to person in extremely idiosyncratic ways. For instance I'm currently reading Sperber and Wilson's Relevance, which comes heartily recommended by Cosma Shalizi but is more or less boring me to death. You never know in advance which book is going to shake your world-view to its foundations.
Maybe we need to make a distinction here between one-topic classics and broader-ranging, multi-topic classics. What I would need (and love) to read is the "Gödel, Escher, Bach" of moral theories. :)
But while I derived nourishment from Rawls's Theory of Justice I wouldn't necessarily seek out "classics" of communitarianism (or other traditions making a strong case against e.g. gay rights), because I don't feel that dire a need to expose my ideas on moral theories to contradiction. I'd be keen to get that contradiction in smaller and more pre-digested doses.
Usually when I have identified a topic as really, really important I find it worthwhile to round out my understanding of it by going back to primary or early sources, if only because every later commentator is implicitly referring back to them, even if "between the lines".
I also seek out the "classic" in a field when my own ideas stand in stark opposition to those attributed to that field. For instance I read F.W. Taylor's original "Scientific Management" book because I spent quite a bit of energy criticizing "Taylorism", and to criticize something effectively it's judicious to do everything you can not to misrepresent it.
Well, I'm not sure where we agree or don't now. We certainly agree here:
Yes, yes you should learn about these contradictions of your worldview from summaries of the insights that go against it.
But you also say:
But what's the difference? If I'm already so lacking as to need to read (more) classics, how would I even know which classics are worth it? He gives no advice in this respect, and if he did, I wouldn't be so critical. But then it would be an issue about whether people should read this or that book, not about "classics" as such.
Did you regard gay rights as really, really important?
And at times we also discover that the eponymous mascot's actual ideas are quite a lot different from those that we are rejecting. Then at least we know to always direct the criticisms at "Taylorism" and never "Taylor" (depending on whether the mascot in question shares the insanity).
It would astound me if this reason was that they were the optimal source of education. That would completely shake my entire understanding of the fairness of the universe.
Better than the classics are the later sources that cover the same material once the culture has had a chance to fully process the insights and experiment with the best way to understand them. You pick the sources that become popular and respected despite not having the prestige of being the 'first one to get really popular in the area'. You want the best, not the 'first famous' and shouldn't expect that to be the same source. After all, the author of the Classic had to do all the hard work of thinking of the ideas in the first place... we can't expect him to also manage to perfect the expression of them and teach them in the most effective manner. Give the poor guy a break!
As an example,
You seem to be missing the examples at the moment, but I'll give one... it's damn hard to learn relativity by reading Einstein's original papers. Your average undergraduate textbook gives a much better explanation of special relativity.
On the other hand, when it comes to studying history, sometimes classics are still the best sources. For example, when it comes to the Peloponnesian War, everything written by anyone other than Thucydides is merely footnotes.
Reading Dawkins may be more effective than reading Darwin, to appreciate descent with modification and differential survival as an optimization algorithm.
Reading Darwin may be more effective than reading Dawkins, to appreciate what intellectual work went into following contemporary evidence to that conclusion, in the face of a world filled with bias and confusion.
Reading Dawkins OR Darwin is - and I think that is Robin's point - more valuable than the same time spent reading blogs expounding shaky speculations on evolution.
I'm underlining your point about Darwin -- just getting the insights doesn't give you information about the process of thinking them out.
Also, a "just the insights" version will probably leave out any caveats the originator of the insights included.
Spectacularly so in the case of the Waterfall software development process. It's as if the "classic" in question had said "Drowning kittens" at the end of page 1, and of course the beginning of page 2 goes right on to say "...is evil, don't do it". But everyone reads page one which has a lovely diagram and goes, "Oh yeah; drowning kittens. Wonderful idea, let's make that the official government norm for feline management."
100% agree that is Robin's point and another 100% with Robin's point. Hmm. Wrong place to throw 100% around. Let's see... 99.5% and 83% respectively. Akrasia considerations and the intrinsic benefits of the social experience of engaging with a near-in-time social network account for the other 17%.
http://arxiv.org/abs/1006.3868
I guess everyone here already understands this stuff, but I'll still try to summarize why "model checking" is an argument against "naive Bayesians" like Eliezer's OB persona. Shalizi has written about this at length on his blog and elsewhere, as has Gelman, but maybe I can make the argument a little clearer for novices.
Imagine you have a prior, then some data comes in, you update and obtain a posterior that overwhelmingly supports one hypothesis. The Bayesian is supposed to say "done" at this point. But we're actually not done. We have only "used all the information available in the sample" in the Bayesian sense, but not in the colloquial sense!
See, after locating the hypothesis, we can run some simple statistical checks on the hypothesis and the data to see if our prior was wrong. For example, plot the data as a histogram, and plot the hypothesis as another histogram, and if there's a lot of data and the two histograms are wildly different, we know almost for certain that the prior was wrong. As a responsible scientist, I'd do this kind of check. The catch is, a perfect Bayesian wouldn't. The question is, why?
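To make that concrete, here's a minimal sketch of such a check (my own illustration, not from the paper): fit a deliberately wrong model and compare a summary statistic of the real data against replicated data drawn from the fitted model.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_t(df=2, size=1000)         # "real" data: heavy-tailed

    # Pretend our posterior concentrated on a Gaussian model fit by moments
    mu, sigma = data.mean(), data.std()
    replicated = rng.normal(mu, sigma, size=1000)  # draws from the fitted model

    # If the model were right, the data's tail statistic shouldn't look extreme
    # next to the replication's; with heavy-tailed data it usually does.
    print(max(abs(data)), max(abs(replicated)))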
Model checking is completely compatible with "perfect Bayesianism." In the practice of Bayesian statistics, how often is the prior distribution you use exactly the same as your actual prior distribution? The answer is never. Really, do you think your actual prior follows a gamma distribution exactly? The prior distribution you use in the computation is a model of your actual prior distribution. It's a map of your current map. With this in mind, model checking is an extremely handy way to make sure that your model of your prior is reasonable.
However, a difference between the data and a simulation from your model doesn't necessarily mean that you have an unreasonable model of your prior. You could just have really wrong priors. So you have to think about what's going on to be sure. This does somewhat limit the role of model checking relative to what Gelman is pushing.
You shouldn't need real-world data to determine if your model of your own prior was reasonable or not. Something else is going on here. Model checking uses the data to figure out if your prior was reasonable, which is a reasonable but non-Bayesian idea.
Well, if you're just checking your prior, then I suppose you don't need real data at all. Make up some numbers and see what happens. What you're really checking (if you're being a Bayesian about it, i.e. not like Gelman and company) is not whether your data could come from a model with that prior, but rather whether the properties of the prior you chose seem to match up with the prior you're modeling. For example, maybe the prior you chose forces two parameters, a and b, to be independent no matter what the data say. In reality, though, you think it's perfectly reasonable for there to be some association between those two parameters. If you don't already know that your prior is deficient in this way, posterior predictive checking can pick it up.
In reality, you're usually checking both your prior and the other parts of your model at the same time, so you might as well use your data, but I could see using different fake data sets in order to check your prior in different ways.
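For instance, a quick simulation from a factorized prior (a made-up example of mine, not Gelman's) shows the forced independence directly:

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.gamma(2.0, 1.0, size=50_000)  # prior draws for parameter a
    b = rng.gamma(2.0, 1.0, size=50_000)  # drawn independently of a
    # Correlation is ~0 by construction: this prior cannot express any
    # a-b association, however plausible you think one is.
    print(np.corrcoef(a, b)[0, 1])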
I thought that what I'm about to say is standard, but perhaps it isn't.
Bayesian inference, depending on how detailed you do it, does include such a check. You construct a Bayes network (as a directed acyclic graph) that connects beliefs with anticipated observations (or intermediate other beliefs), establishing marginal and conditional probabilities for the nodes. Since your expectations are jointly determined by the beliefs that lead up to them, getting a wrong answer will knock down the probabilities you assign to the beliefs leading up to them.
Depending on the relative strengths of the connections, you know whether to reject your parameters, your model, or the validity of the observation. (Depending on how detailed the network is, one input belief might be "i'm hallucinating or insane", which may survive with the highest probability.) This determination is based on which of them, after taking this hit, has the lowest probability.
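A toy version of that determination (illustrative numbers only, not from the thread): give each explanation a prior and a likelihood for a surprising observation, and see which one absorbs the hit.

    # Hypothetical priors and likelihoods for one surprising data point
    priors = {"model_ok": 0.90, "model_wrong": 0.09, "obs_invalid": 0.01}
    likes  = {"model_ok": 0.001, "model_wrong": 0.30, "obs_invalid": 0.50}

    unnorm = {h: priors[h] * likes[h] for h in priors}
    z = sum(unnorm.values())
    posterior = {h: p / z for h, p in unnorm.items()}
    # "model_ok" takes the hit: "model_wrong" now dominates (~0.82),
    # with "obs_invalid" a distant second (~0.15).
    print(posterior)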
Pearl also has written Bayesian algorithms for inferring conditional (in)dependencies from data, and therefore what kinds of models are capable of capturing a phenomenon. He furthermore has proposed causal networks, which have explicit causal and (oppositely) inferential directions. In that case, you don't turn a prior into a posterior: rather, the odds you assign to an event at a node are determined by the "incoming" causal "message", and, from the other direction, the incoming inferential message.
But neither "model checking" nor Bayesian methods will come up with hypotheses for you. Model checking can attenuate the odds you assign to wrong priors, but so can Bayesian updating. The catch is that, for reasons of computation, a Bayesian might not be able to list all the possible hypotheses and arbitrarily restrict the hypothesis space, and potentially be left with only bad ones. But Bayesians aren't alone in that either.
(Please tell me if this sounds too True Believerish.)
This sounds like a confusion between a theoretical perfect Bayesian and practical approximations. The perfect Bayesian wouldn't have any use for model checking because from the start it always considers every hypothesis it is capable of formulating, whereas the prior used by a human scientist won't ever even come close to encoding all of their knowledge.
(A more "Bayesian" alternative to model checking is to have an explicit "none of the above" hypothesis as part of your prior.)
NOTA is addressed in the paper as inadequate. What does it predict?
See here.
I don't see how that's possible. How do you compute the likelihood of the NOTA hypothesis given the data?
NOTA is not well-specified in the general case, but in at least one specific case it's been done. Jaynes's student Larry Bretthorst made a useable NOTA hypothesis in a simplified version of a radar target identification problem (link to a pdf of the doc).
(Somewhat bizarrely, the same sort of approach could probably be made to work in certain problems in proteomics in which the data-generating process shares the key features of the data-generating process in Bretthorst's simplified problem.)
If I'm not mistaken, such problems would contain some enumerated hypotheses - point peaks in a well-defined parameter space - and the NOTA hypothesis would be a uniformly thin layer over the rest of that space. Can't tell what key features the data-generating process must have, though. Or am I failing reading comprehension again?
Yep.
I think the key features that make the NOTA hypothesis feasible are (i) all possible hypotheses generate signals of a known form (but with free parameters), and (ii) although the space of all possible hypotheses is too large to enumerate, we have a partial library of "interesting" hypotheses of particularly high prior probability for which the generated signals are known even more specifically than in the general case.
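If I've understood the setup, a one-dimensional caricature looks like this (all numbers invented; scipy's norm stands in for the known signal forms):

    import numpy as np
    from scipy.stats import norm

    x = 2.7                                       # observed signal parameter
    library = {"target_A": 0.0, "target_B": 5.0}  # high-prior "interesting" hypotheses
    prior = {"target_A": 0.45, "target_B": 0.45, "NOTA": 0.10}

    likes = {name: norm.pdf(x, loc=mu, scale=0.5) for name, mu in library.items()}
    likes["NOTA"] = 1.0 / 20.0  # uniform layer over the rest of a [-10, 10] range

    unnorm = {h: prior[h] * likes[h] for h in prior}
    z = sum(unnorm.values())
    print({h: round(p / z, 3) for h, p in unnorm.items()})  # NOTA wins: x fits neither peak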
But my sense is that the "substantial school in the philosophy of science [that] identifies Bayesian inference with inductive inference and even rationality as such", as well as Eliezer's OB persona, is talking more about a prior implicit in informal human reasoning than about anything that's written down on paper. You can then see model checking as roughly comparing the parts of your prior that you wrote down to all the parts that you didn't write down. Is that wrong?
I don't think informal human reasoning corresponds to Bayesian inference with any prior. Maybe you mean "what informal human reasoning should be". In that case I'd like a formal description of what it should be (ahem).
Solomonoff induction, mebbe?
Wei Dai thought up a counterexample to that :-)
It seems to me that Wei Dai's argument is flawed (and I may be overly arrogant in saying this; I haven't even had breakfast this morning.)
He says that the probability of someone knowing the answer to an uncomputable problem would be evaluated at 0 originally; I don't fundamentally see why a "measure zero hypothesis" is equivalent to "impossible". For example, the hypothesis "they're making it up as they go along" has probability 2^(-S), based on the size of the set, and shrinks at a certain rate as evidence arrives. That means that, given any finite amount of inference, the AI should be able to distinguish between two possibilities (they are very good at computing or guessing vs. all humans have been wrong about mathematics forever). Unless new evidence comes in to support one over the other, "humans have been wrong forever" should retain a consistent probability mass, which will grow in comparison to the other hypothesis, "they are making it up."
Nobody seems to propose this (although I may have missed it skimming some of the replies) and it seems like a relatively simple thing (to me) to adjust the AI's prior distribution to give "impossible" things low but nonzero probability.
Wei Dai's argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such "impossible" things positive probability, but still sum to 1.0 over all hypotheses, then by all means let's hear it.
Yeah, well, it is certainly a good argument against that. The title of the thread is "is induction unformalizable?", a point I'm unconvinced of.
If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability); including an epsilon for "things I haven't thought up yet." On the other hand I'm not really an expert on any of these things so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.
There's no general way to have a "none of the above" hypothesis as part of your prior, because it doesn't make any specific prediction and thus you can't update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.
Gelman/Shalizi don't seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.
That check should be part of updating your prior. If you updated and got a hypothesis that didn't fit the data, you didn't update very well. You need to take this into account when you're updating (and you also need to take into account the possibility of experimental error: there's a small chance the data are wrong).
Hopefully the Book Club will get around to covering that as part of Chapter 4.
I can't recall that it has anything to do with "updating your prior"; Jaynes just says that if you get nonsense posterior probabilities, you need to go back and include additional hypotheses in the set you're considering, and this changes the analysis.
See also the quote (I can't be bothered to find it now but I posted it a while ago to a quotes thread) where Jaynes says probability theory doesn't do the job of thinking up hypotheses for you.
Apologies if this has already been covered elsewhere, but isn't a prior just a belief? The prior is by definition whatever it was rational to believe before the acquisition of new evidence (assuming a perfect Bayesian, anyway). I'm not quite sure what you mean when you propose that a prior could be wrong; either all priors are statements of belief and therefore true, or all priors are statements of probability that must be less accurate than a posterior that incorporates more evidence.
I suspect that there are additional steps I'm not considering.
Nope, this isn't part of the definition of the prior, and I don't see how it could be. The prior is whatever you actually believe before any evidence comes in.
If you have a procedure to determine which priors are "rational" before looking at the evidence, please share it with us. Some people here believe religiously in maxent, others swear by the universal prior, I personally rather like reference priors, but the Bayesian apparatus doesn't really give us a means of determining the "best" among those. I wrote about these topics here before. If you want the one-word summary, the area is a mess.
Thanks for the links (and your post!), I now have a much clearer idea of the depths of my ignorance on this topic.
I want to believe that there is some optimal general prior, but it seems much more likely that we do not live in so convenient a world.
But if you can evaluate how good a prior is, then there has to be an optimal one (or several). You have to have something as your prior, and so whichever one is the best out of those you can choose is the one you should have. As for how certain you are that it's the best, it's (to some extent) turtles all the way down.
Instead of using "optimal general prior", I should have said that I was pessimistic about the existence of a standard for evaluating priors (or, more properly, prior probability distributions) that is optimal in all circumstances, if that's any clearer.
Having thought about the problem some more, though, I think my pessimism may have been premature.
A prior probability distribution is nothing more than a weighted set of hypotheses. A perfect Bayesian would consider every possible hypothesis, which is impossible unless hypotheses are countable, and they aren't; the ideal for Bayesian reasoning as I understand it is thus unattainable, but this doesn't mean that there are no benefits to be found in moving toward that ideal.
So, perfect Bayesian or not, we have some set of hypotheses which need to be located before we can consider them and assign them a probabilistic weight. Before we acquire any rational evidence at all, there is necessarily only one factor that we can use to distinguish between hypotheses: how hard they are to locate. If it is also true that hypotheses which are easier to locate make more predictions and that hypotheses which make more predictions are more useful (and while I have not seen proofs of these propositions I'm inclined to suspect that they exist), then we are perfectly justified in assigning a probability to a hypothesis based on its locate-ability.
This reduces the problem of prior probability evaluation to the problem of locate-ability evaluation, to which it seems maxent and its fellows are proposed answers. It's again possible there is no objectively best way to evaluate locate-ability, but I don't yet see a reason for this to be so.
Again, if I've mis-thought or failed to justify a step in my reasoning, please call me on it.
This doesn't sound right to me. Imagine you're tossing a coin repeatedly. Hypothesis 1 says the coin is fair. Hypothesis 2 says the coin repeats the sequence HTTTHHTHTHTTTT over and over in a loop. The second hypothesis is harder to locate, but makes a stronger prediction.
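To see the "stronger prediction" point numerically, here's a minimal sketch (my own toy code, not anyone's method from the thread) comparing the two hypotheses' likelihoods on matching data:

    # Fair coin vs. a fixed repeating sequence (toy illustration)
    LOOP = "HTTTHHTHTHTTTT"

    def like_fair(flips):
        return 0.5 ** len(flips)  # every sequence equally (un)likely

    def like_loop(flips):
        # the loop hypothesis predicts each flip exactly
        return 1.0 if all(f == LOOP[i % len(LOOP)] for i, f in enumerate(flips)) else 0.0

    def posterior_fair(flips, prior_fair=0.5):
        pf = prior_fair * like_fair(flips)
        pl = (1 - prior_fair) * like_loop(flips)
        return pf / (pf + pl)

    # After 8 matching flips the loop hypothesis already dominates:
    print(posterior_fair("HTTTHHTH"))  # ~0.004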
The proper formalization for your concept of locate-ability is the Solomonoff prior. Unfortunately we can't do inference based on it because it's uncomputable.
Maxent and friends aren't motivated by a desire to formalize locate-ability. Maxent is the "most uniform" distribution on a space of hypotheses; the "Jeffreys rule" is a means of constructing priors that are invariant under reparameterizations of the space of hypotheses; "matching priors" give you frequentist coverage guarantees, and so on.
Please don't take my words for gospel just because I sound knowledgeable! At this point I recommend you to actually study the math and come to your own conclusions. Maybe contact user Cyan, he's a professional statistician who inspired me to learn this stuff. IMO, discussing Bayesianism as some kind of philosophical system without digging into the math is counterproductive, though people around here do that a lot.
I'm in the process of digging into the math, so hopefully some point soon I'll be able to back up my suspicions in a more rigorous way.
I was talking about the number of predictions, not their strength. So Hypothesis 1 predicts any sequence of coin-flips that converges on 50%, and Hypothesis 2 predicts only sequences that repeat HTTTHHTHTHTTTT. Hypothesis 1 explains many more possible worlds than Hypothesis 2, and so without evidence as to which world we inhabit, Hypothesis 1 is much more likely.
Since I've already conceded that being a Perfect Bayesian is impossible, I'm not surprised to hear that measuring locate-ability is likewise impossible (especially because the one reduces to the other). It just means that we should determine prior probabilities by approximating Solomonoff complexity as best we can.
Thanks for taking the time to comment, by the way.
Then let's try this. Hypothesis 1 says the sequence will consist of only H repeated forever. Hypothesis 2 says the sequence will be either HTTTHHTHTHTTTT repeated forever, or TTHTHTTTHTHHHHH repeated forever. The second one is harder to locate, but describes two possible worlds rather than one.
Maybe your idea can be fixed somehow, but I see no way yet. Keep digging.
I'm not sure I'm willing to grant that's impossible in principle. Presumably, you need to find some way of choosing your priors, and some time later you can check your calibration, and you can then evaluate the effectiveness of one method versus another.
If there's any way to determine whether you've won bets in a series, then it's possible to rank methods for choosing the correct bet. And that general principle can continue all the way down. And if there isn't any way of determining whether you've won, then I'd wonder if you're talking about anything at all (weird thought experiments aside).
The next advances in genomics may happen in China
http://www.economist.com/node/16349434?story_id=16349434
A question for LW regulars: is there a rule of thumb for how often it is acceptable to make top-level posts?
Not sure what the others will say, but for me it depends on the quality. I'd be overjoyed to see a new post by Yvain, Nesov or Wei Dai every morning. (Yep, I consider these three posts to be the gold standard for LW. Not to say that there weren't others like them, of course.) Your own first post was exceptionally good for a first post, but the topic is kinda controversial, so I'd be extra cautious and wait another day or two to avoid being seen as "spamming" or "hijacking the agenda".
Why thank you (I'm blushing like a schoolgirl). I don't imagine I'll have anything ready for at least another day or two, but it seemed like a good question to ask just in case.
My next post will hopefully be a little less controversial and a little more practical. Managing jealousy isn't simple by any means, but it's a little less tied up with people's value systems.
There was some discussion of that here. Suggestions include once or twice per week, let karma be your guide, and don't worry about posting too much.
Fabulous, thanks.
I don't know if I'm exactly a regular, but I'd naively think that if one makes posts that are well-written, relevant, and not redundant, the total number won't be an issue.
The sting of poverty
What bees and dented cars can teach about what it means to be poor - and the flaws of economics
http://www.boston.com/bostonglobe/ideas/articles/2008/03/30/the_sting_of_poverty/?page=full
and lots of Hacker News comments: http://news.ycombinator.com/item?id=1467832
Another economics WTF:
A lot of you may remember my criticism of mainstream economics: its practitioners become so detached from what is meant by a "good economy" that they advocate things that are positively destructive in this original, down-to-earth sense.
Scott Sumner, I find, is particularly guilty of this. His sound economic reasoning has led him to believe that what the economy vitally needs right now is for banks to make bad (or at least wasteful) loans, just to get money circulating and prop up nominal GDP -- a measure known to be meaningless because it's an artifact of the money supply and has to be adjusted for interpretation.
Fed up with him saying this kind of thing, I sarcastically posted this remark:
And in his immediately following comment, he said,
Huh?
You did actually paraphrase his position, so his agreement is a sign of self-consistency even when things are not presented with his preferred framing. This much at least is a positive in my book.
As for the position itself... it is idiotic. What is the phrase? "Lost Purpose"?
Yeah, I'm thinking of writing an article on this issue with the title "Lost Economy", both a play on that Yudkowsky article, and having the meaning "lost ability to economize".
A blogger I read made a point that I will incorporate: that people of a certain ideology were screaming bloody murder at how destructive it is to nationalize this or that part of the economy, but also believe "the economy" will "recover" in just a few years. This blogger remarked that, "um, guys, if you can nationalize sectors of the economy and only cause a few years of pain, then what the hell were we fighting for this whole time? The worst that can come from doing the opposite of what we want is four years of sub-par growth? I thought the consequences would be worse than that ..."
As for Sumner's position: I just don't see by what standard "lots of shoddy loans to prop up fake numbers" constitutes a "good economy".
Indeed. While I find the general arguments about market efficiency persuasive, there's a big blind spot in the view that "The economy will always operate efficiently despite interference, unless that interference is by something we call a 'government'".
Sure, you'd need to be able to replace the symbol (government) with the substance of what causal mechanisms you believe are responsible for damage to the economy, and why they're associated with the government.
Just to clarify, though, I wasn't criticizing a particular anti-government view, just a particular combination of views. I can understand if someone says, "Nationalization isn't that bad, the economy won't be hurt much by it."
Or if someone said, "Nationalization is devastating, and it will take ages for the economy to recover from one, if it ever does!"
But I see a big problem with someone who wants to believe both that nationalization is devastating, and that "the economy" will recover after one in just a few years. No, if it really is devastating, your definition of "the economy" and its "goodness" need to reflect that somehow.
About the Rumsfeld quote mentioned in the most recent top-level post:
Why is it that people mock Rumsfeld so incessantly for this? Whatever reason you might have not to like him, this is probably the most insightful thing any government official has said at a press conference. And yet he's ridiculed for it by the very same people that are emphasizing, or at least should be emphasizing, the importance of the insight.
Heck, some people even thought it was clever to format it into a poem.
What gives? Is this just a case of "no good deed goes unpunished"?
ETA: In your answer, be sure to say, not just what's wrong with the quote or its context, but why people don't make that as their criticism instead of just saying, ha ha, the quote sure is funny.
I agree that it's a brilliant idea, and that's why I cited him. He does the best job of describing that particular idea that I know of, and I'm amazed, as you are, that he said it at a press conference. I vehemently disagree with his politics, but that doesn't make him stupid or incapable of brilliance.
If the tone of my post came across as mocking, that was not at all my intention.
I didn't mean to imply you were mocking him; I just mentioned your post because that's what reminded me to ask what I've been wondering about -- and you saved me some effort in finding something to cut-and-paste ;-)
I agree that the quote is insightful and brilliant.
I think it was seen by certain (tribally liberal) people as somehow euphemistic or sophistic, as though he were trying to invent a whole new epistemology to justify war.
Politics is the mind-killer.
Wow, really? I honestly didn't know that quote ever provoked ridicule! Of course I also don't know who Rumsfeld is and didn't know he was a politician.
I am surely not the first to recognise the similarity to this poem.
ETA: no, I'm not.
Hm, those are superficially similar, maybe, but I'm glad that someone, at least, was asking "er, what's the deal with the Rumsfeld quote?" back in '03.
Some ideas.
People didn't/don't like Rumsfeld.
In the quote's original context, Rumsfeld used it as the basis of a non-answer to a question:
People think Rumsfeld's particular phrasing is funny, and people don't judge it as insightful enough to overcome the initial 'hee hee that sounds funny' reaction.
However insightful the quote is, Rumsfeld arguably failed to translate it into appropriate action (or appropriate non-action), which might have made it seem simply ironic or contrary rather than insightful.
(Edit to fix formatting.)
So what would be the non-funny way to say it? IMHO, Rumsfeld's phrasing is what you get if you just say it the most direct way possible.
This is what always bothers me: people who say, "hey, what you said was valid and all, but the way you said it was strange/stupid". Er, so what would be the non-strange/stupid way to say it? "Uh, implementation issue."
In the exchange, it looks like the reporter's followup question is nonsense. It only makes sense to ask if it's a known unknown, since you, er, never know the unknown unknowns. (Hee hee! I said something that sounds funny! Now you can mock me while also promoting what I said as insightful!)
See also the edit to my original comment.
I'm not sure I'm capable of a good answer for the edited version of the question. I would guess (even more so than I'm guessing in my grandparent comment!) that once someone's 'ha ha' reaction kicks in (whether it's a 'ha ha his syntax is funny,' 'ha ha how ironic those words are in that context,' or a 'ha ha look at him scramble to avoid that question' kind of 'ha ha'), it obscures the perfectly rational denotation of what Rumsfeld said.
I don't know of a way to make it less funny without losing directness. I think the verbal (as opposed to situational) humor comes from a combination of saying the word 'known' and its derivatives lots of times in the same paragraph, using the same kind of structure for consecutive clauses/sentences, and the fact that what Rumsfeld is saying appears obvious once he's said it. And I can't immediately think of a direct way of expressing precisely what Rumsfeld's saying without using the same kind of repetition, and what he's saying will always sound obvious once it's said.
Things that are obvious once thought of, but not before, are often funny when pointed out, especially when pointed out in a direct and pithy way. That's basically how observational comedians operate. (See also Yogi Berra.) It's one of those quirks of human behavior a public speaker just has to contend with.
Strictly speaking that's true, although for Rumsfeld to avoid the question on that basis is IMO at best pedantic; it's not hard to get an idea of what the reporter is trying to get at, even though their question's ill-phrased.
(Belated edit - I should say that it would be pedantic, not that it is pedantic. Rumsfeld didn't actually avoid the question based on the reporter's phrasing, he just refused to answer.)
Right, that would make sense, except that the very same people, upon shifting gears and nominally changing topics, suddenly find this remark insightful -- "but ignore this when we go back to mocking Rumsfeld!"
Wow, you have got to see Under Siege 2. It has this exchange (from memory):
Bad guy #2: What's that? [...]
Bad guy #1: It's a chemical weapons plant. And we know about it. And they know that we know. But we make-believe that we don't know, and they make-believe that they believe that we don't know, but know that we know. Everybody knows.
Yes, "damned if you do, damned if you don't" is fun, but ultimately to be avoided by respectable people.
Right, but aren't they typically followed by the appreciation of the insight rather than derision of whoever points it out?
True, but it's not really Rumsfeld's job to improve reporters' questions. I mean, he might be a Bayesian master if he did, but it's not really to be expected.
I imagine the people who used the quote to mock Rumsfeld were already inclined to treat the quote uncharitably, and used its funniness/odd-soundingness as a pretext to mock him.
Yeah, that got a giggle from me. Makes me wonder why some kinds of repetition are funny and some aren't!
Agreed - I didn't mean to condone simultaneously mocking Rumsfeld's quote while acknowledging its saneness, just to explain why one might find it funny.
It is (well, was) his job to make a good faith effort to try and answer their questions. (At least on paper, anyway. If we're being cynical, we might argue that his actual job was to avoid tough questions.) If I justified evading otherwise good questions in a Q&A because of minor lexical flubs, that would make the Q&A something of a charade.
It's possibly a matter of people being already disposed to dislike Rumsfeld, combined with a feeling that if he had so much understanding of ignorance, he shouldn't have been so pro-war.
News and mental focus
I think Derren Brown uses this as a mind hack a lot: http://www.youtube.com/watch?v=3Vz_YTNLn6w (notice the specific diversion into spatial memory; it's probably been tried and tested as the best distraction from the color of the money in hand)
I feel that mental focus is VERY weak and very exploitable.
As a side note, I think there is another, less obvious, mental hack going on, on the audience. Derren claims (in the intro to this TV series) that there is no acting here, but a lot of misdirection. I believe it. I think when he shows this trick working 2 out of 3 times, it's probably more like 2 out of 30. My guess is that he biases the sample quite cleverly; showing 3 cases is exactly the minimum you can show while giving the impression that a) the reporting is honest (see - I showed a failure!) and b) the 'magic' works in most cases. Also I think getting caught/embarrassed by a hot dog vendor evokes certain associations that yeah, he can be beat, which prevents you from thinking about how much he can be beat.
Here is to you Derren, Master of Dark Arts.
Note however that Derren Brown's tricks have turned out to be staged in at least one instance. This makes me extremely skeptical towards the rest of them too.
Oh. I thought the point of the subway anecdote jnf gb unir na rkphfr gb fyvc va n cerfhccbfvgvba, va gur sbez "Gnxr vg [gur zbarl], vg'f svar".
Yes, I missed it, largely due to lack of knowledge of NLP. I wouldn't be surprised if the spatial thing is true also (and possibly intended); making people picture something is supposed to make them look up, IIRC.
I started writing something but it came up short for an article, so I'm posting it here:
Title: On the unprovability of the omni*
Our hero is walking down the street, thinking about proofs and disproofs of the existence of a god. This is no big coincidence, as our hero does this often. Suddenly, between one step and the next, the world around her fades out, and she finds herself standing on thin air, surrounded by empty space. Then she hears a voice. "I am Omega. The all-powerful, all-knowing, all-good, ever-present being. I see you have been debating my existence with true purity of heart, so I have decided to provide you with any evidence you request." Once the shock wears off, our hero runs through the list of possible requests she could make. Healing the sick? Perhaps the reanimation of a dead person? Some time-travel? Maybe this could still be doubted. How about creation of a solar system? Or a universe? Maybe a proof of P vs. NP? Alas, our hero realises that any evidence she could request would only be proof of the power of Omega to produce just that thing, not a proof of omnipotence in general.
What's more, our hero knows that her thinking is subject to the operation of her mind and the readings of her senses, something she cannot trust in the presence of a vastly overpowering entity. The lower bound of power required of Omega to produce any experience for our hero is much lower than the power to create universes. It is the ability to control only the senses of our hero, become a kind of hypervisor, and simulate all requests. While this is great power indeed, the distance from there to omnipotence is vast. Similarly for omniscience, omnipresence, and omnibenevolence.
Our hero does not ask anything of Omega, and their meeting ends uneventfully, at least in terms of new universes being created, or problems thought unsolvable being solved. She does realise, though, that omnipotence, omniscience, omnipresence, and omnibenevolence are not properties that can be verified by a human. If this is the definition of a god that theists are working with, then it is not only undisprovable, it is also unprovable. Taking knowledge to be 'justified true belief', a belief in an omni* god can never be justified, putting it firmly in the territory of the unknowable. The strongest claims that can be reasonably made are that of a being that is very powerful, very knowledgeable, etc. But that is not nearly as interesting.
Now, I have posted a question along those lines in this thread before, with little response. What I would like your feedback on is whether this is a reasonable argument, whether I've gotten something completely wrong in my epistemology, and whether there have been similar arguments made by others. All help appreciated, cheers.
Can we not get around this by using randomly chosen questions? And then we have IP=PSPACE, so anything that's in PSPACE, he can relatively quickly convince us he can solve. Obligatory Scott Aaronson link.
Thanks for that link, it was quite good. Any chance you could elaborate a bit on the IP=PSPACE identity?
No, I don't really know complexity theory at all, so I couldn't really tell you any more than Wikipedia could.
Nothing is provable to the level you demand (well, pretty much nothing, cogito ergo sum and all that). Given that none of the omni* are well defined, the question doesn't mean much either.
Are you saying that it's an inference problem and after enough pieces of evidence we should just accept omnipotence (for instance) as the best hypothesis with a high degree of confidence, as we trust gravity now? How about the mind control problem?
Also, what you say about the omni* being not well defined sounds interesting. Can you elaborate?
That's exactly what I'm saying, and you're right to point out that mind control will always be a more probable explanation than omnipotence (as will mental illness). If I knew that something would continue to appear omnipotent, I would just treat it as omnipotent (which equates to "accepting the simulation" if the actual explanation is mind control).
Omnipotence is badly defined because it leads to questions like "Can Omega create a rock so heavy that Omega cannot lift it?", can omnipotent beings create logical contradictions? Can they make 2+2=3? Omniscience leads to similar problems, can Omega answer the halting problem for programs that can call Omega as an oracle? Omnibenevolence is the least paradox ridden, but the hardest to define. Whose version of good is Omega working toward?
Wait... a being which, while possibly not omni-anything, is likely very powerful, offers to provide her any evidence she likes, and she considers and rejects the "healing the sick" and "resurrecting the dead" plans?
A super-powerful agent who is desperate to prove itself to her! That's the perfect opportunity! Unless she messes up the requested 'proof', she can become a demi-god, just below Omega (until Omega cracks it with her).
That should result in an exponentially growing multiverse of universes, with each universe self-replicating on a sub-nanosecond time frame while simultaneously expanding in size and neg-entropy, all arranged for maximum Fun. Still not proof of Omnipotence, but hey, it'll do.
That's a good point. Any ideas on how to mend the hole?
Have Omega offer to provide the proof, but then ask for an answer to the question of whether he is actually omni*. If the answer is incorrect, he will destroy the world; if correct, he will let the world continue with whatever changes were made by the "wish". There is also the choice not to play.
You would have to make him non-omnibenevolent, though.
Not to mention a solution to the P=NP problem (or the Riemann Hypothesis)?
What if your hero asks to be made omniscient, including the capacity to still be able to think well in the face of all that knowledge?
Throw in omnibenevolence if you like, but I think you get some contradictions if you ask omnipotence. Either that, or you and Omega coalesce.
How could you test your omniscience to be sure it's the real thing?
Asking to modify yourself may be a useful strategy (or maybe not, as you note), but it's not something that's available to philosophers trying to prove the existence of a god. As far as we know, that is :)
It's possible that looking at how you'd test something which claims to be omniscience would give some pointers to finding unknown unknowns and unknown knowns.
Or also show you if there are unknowable unknowns?
I'm pretty sure unknowability would have to be proven rather than shown.
An unknowable unknown: I shot a rocket across the cosmic horizon. On the rocket was a qGrenade set to detonate on a timer. Did my Schrödinger's rocket explode when the timer went off in my Everett branch?
I don't see that decoherence would occur in that case.
This once again explains why "reality" is a largely meaningless concept.
Whether or not it's meaningful, it's certainly useful, especially by Philip K. Dick's definition: "Reality is that which, when you stop believing in it, doesn't go away."
Wow. I maybe understand what you are alluding to, but I'm not sure I'm reverse-engineering the thoughts right. Explain for me?
Your logic is OK. By the way, Thomas Aquinas thought along these lines, but in a different direction. However, discussing scholasticism here doesn't make much sense (if it can make sense at all).
Stating P=NP Without Turing Machines
http://rjlipton.wordpress.com/2010/06/26/stating-pnp-without-turing-machines/
Statisticians Andrew Gelman and Cosma Shalizi have a new preprint out, 'Philosophy and the practice of Bayesian statistics.' The abstract:
Mindfulness meditation improves cognition: Evidence of brief mental training
Chimps copy high status individuals in their groups
Has anybody looked into OpenCog? And why is it that the wiki doesn't include much in the way of references to previous AI projects?
If making a Friendly AI is compared to landing on the moon, I'd say OpenCog is something like the scaffolding for a backyard rocket. It still needs something extra - the rocket - and even then it won't achieve escape velocity. But a radically scaled-up version of OpenCog - with a lot more theory behind it, and tailored to run at the level of a whole data center rather than on a single PC - is the sort of toolset that could make a singularity.
If a being presented itself to you and claimed to be omni(potent/scient/present/benevolent), what evidence would you require to accept its claim?
(EDIT: On a second reading, this sounds like a typical theist opening a conversation. I assure you, this is not the case. I am genuinely interested in the range of possible answers to this question.)
Schroedinger Cat is dead. Maybe it's time to update the plausibility of the classic many-worlds interpretation in spite of "Einselection, Envariance, Quantum Darwinism".
I am not sufficiently competent to analyze work of W.H. Zurek, but I think that work can be a great source of insights.
Edit: Abstract. Zurek derived Born's rule.
The "derivation" is on page 12.
The recurring problem for many worlds is that if the quantum state is 1/2 |dead cat> + sqrt(3)/2 |live cat>, then (squaring the coefficients) the probability of a dead cat is 1/4 and the probability of a live cat is 3/4, so there should be three times as many live cats as dead cats (for such a wavefunction); but the decomposition into wavefunction components just produces one dead-cat world and one live-cat world, which naively suggests equal probabilities. The problem is: how do you interpret a superposition like that, in terms of coexisting, equally real worlds, so as to give the right probabilities?
It looks like part of what Zurek does is to pick a basis (Schmidt decomposition) where the components all have the same amplitude - which means they all have the same probability, so the naive branch-counting method works! A potential problem with this way of proceeding is that, expressed in the position basis, the branches end up being complicated superpositions of spatial configurations. (The space of quantum states, the Hilbert space, is a large abstract vector space with a coordinate basis formally labeled by spatial configurations, so the basis vectors of a different basis will be sums of those position-basis vectors.) Explaining complicated superpositions which don't look like reality by positing the existence of many worlds, each of which is itself a complicated superposition that doesn't look like reality, is not very promising. It's sort of okay to do this for microscopic entities, because we don't have a priori knowledge about what their reality is like, and we might suppose that the abstract Hilbert-space vector is the actual reality; but somewhere between microscopic and macroscopic, you have to produce an actual live cat, and not just a live cat summed with an epsilon-amplitude dead cat. I have no idea how Zurek deals with this.
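To make the equal-amplitude trick concrete (a toy fine-graining of my own, not Zurek's actual construction; the sub-branches |l_1>, |l_2>, |l_3> are hypothetical): suppose the live-cat branch can be split into three orthonormal, equal-amplitude pieces,

    |live cat> = (1/sqrt(3)) (|l_1> + |l_2> + |l_3>),

so that the full state becomes four equal-amplitude branches:

    1/2 |dead cat> + sqrt(3)/2 |live cat> = 1/2 (|dead cat> + |l_1> + |l_2> + |l_3>).

Now naive branch-counting - one dead branch out of four - reproduces the Born probabilities of 1/4 and 3/4.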
Actually, Zurek has a lot of background assumptions which make his reasoning obscure to me and I really don't expect it to make sense in the end, though it's impossible to be sure until you have decoded his outlook. His philosophy is a weird mixture of Bohr's antirealism and Everett's multirealism, and in other papers he says things like
(thanks to DZS for the quote). And of course it's nonsense to say that something doesn't exist until there are multiple copies of it (how many is the magic number? how can you make an existing copy of a nonexistent original?). Zurek is using the words "objective existence" in some twisted way. I'm sure the reason is that he doesn't have the answer to QM, but he wants to believe he does; that is how smart people end up writing nonsense. But I would have to understand his system to offer a more precise diagnosis.
There's a skepticism Stack Overflow site proposed. If enough people follow it, it will go into beta. So if that's your thing, go here.
Max More writes about biases that treat natural chemicals as safer than man-made chemicals, natural hazards as safer than man-made hazards, and the status quo as preferable to possible futures, in The proactionary principle.
There's an interesting article in the New York Times on warfare among chimpanzees. One problem, though, is that they attempt to explain the level of coordination necessary in warfare with group selection. This, of course, will not do. I'm under-read in evolutionary biology, but it seems like kin selection accounts for this phenomenon just fine. You are more likely to be related to members of your group than an opposing group, so taking territory from a rival group doesn't just increase your fitness directly, but indirectly through your shared genes among group members.
What do you think, LessWrong?
edit: Some commentary on the article.
Group selection has been vilified, but irrationally so. Group selection has been observed many times in human groups, so dismissing it is silly.
From the LWwiki:
So, can you point to one of these observations? (and if so, update the wiki!)
I can point to observations of groups being eliminated, and in some of these cases, it seems obvious that elimination was attributable to a behavior, a biological phenotype, or a social phenotype. For instance, there was a group of related tribes in South America, described IIRC in "Life among the Yanomamo", who were very aggressive and kept killing and raping members of neighboring tribes. Eventually, the neighboring tribes got together and killed every last man of the aggressive tribe that they could find. The book "Black Robe" fictionalizes a real-life account of another group selection incident, in which one North American tribe adopted Christianity, and (the book implies) as a result became less violent and were wiped out by neighboring non-Christian tribes. The villages of the Christianized natives of Papua New Guinea are at this moment being razed by the (Muslim) Indonesian army (not that you'll hear anything about it in the news), which you could relate to either the religious or the technological difference between the groups.
I don't know what counts as an "adaptation". When Spanish genes spread rapidly among the natives of central America due to the superior technology of Spain, was that an adaptation?
What I do know is that social norms lead to differential reproductive success. There is obvious group selection going on in the world right now that favors cultures that place a high value on a high birth rate, or that prohibit birth control.
But group selection is a more specific idea: the idea that a trait can become widespread due to its positive effects on group success, regardless of the effects on individual fitness. An example of group selection would be a trait such that: (1) groups in which it is widespread win, (2) lacking the trait doesn't lower the reproductive success of an individual member of such a group. While your examples show (1), it is not clear that they satisfy (2).
Then I must admit confusion here: when human groups have norms that punish "defectors", genes that predispose someone to play a "tit for tat" strategy (or, to some extent, altruism) rather than defection are rewarded and spread through the gene pool faster. Is that not a case where group-favoring genes become widespread? To the extent it diverges from the definition you gave, that's because of pretty arbitrary caveats.
I thought that counted as group selection, but was regarded as a "special case" because it requires enforcement of norms to an extent that has only been observed in humans.
Edit: And what other species has anything like China's one-child policy?
The definition of group selection, from Wikipedia:
The key is that the benefit to the group is at least part of what is driving the adaptation. Now, an adaptation (like tit-for-tat) can certainly benefit the group, but that doesn't mean there is group selection going on - the benefit to the group has to be part of the cause of the trait's spread, apart from the benefit to the individual.
Tit-for-tat is individually fitness-maximizing in many situations. In fact, it's an evolutionarily stable strategy: in a population of tit-for-tat players, it's fitness-maximizing to play tit-for-tat. So tit-for-tat is not an example of group selection, or at least its existence doesn't imply group selection has occurred.
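To see the "stable against a lone defector" point concretely, here's a toy iterated prisoner's dilemma in Python (a minimal sketch; the payoff values and function names are made up for illustration):

    # Toy iterated prisoner's dilemma with standard payoffs:
    # temptation 5, mutual cooperation 3, mutual defection 1, sucker 0.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def play(strat_a, strat_b, rounds=100):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a += pa
            score_b += pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's last move.
        return 'C' if not opponent_history else opponent_history[-1]

    def always_defect(opponent_history):
        return 'D'

    print(play(tit_for_tat, tit_for_tat))    # (300, 300)
    print(play(always_defect, tit_for_tat))  # (104, 99)

A lone defector in a tit-for-tat population earns 104 per hundred rounds, while the tit-for-tat players earn 300 off each other - which is the sense in which lacking the trait lowers individual fitness, no group-level story required.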
That's a decision of a small group of people imposed on a much larger group of people. If each person were individually choosing to have only one child, then it might be group selection. That said, the changing birth patterns of developed countries are an interesting phenomenon to consider. It's probably just a case of external conditions changing faster than evolution changes us, though.
This is something interesting: Perceptions of distance seem to change depending on whether an object is desirable or undesirable.
A recent comment about Descartes inspired this thought: the simplest possible utility function for an agent is one that only values survival of mind, as in "I think therefore I am". This function also seems to be immune to the wireheading problem because it's optimizing something directly perceivable by the agent, rather than some proxy indicator.
But when I started thinking about an AI with this utility function, I became very confused. How exactly do you express this concept of "me" in the code of a utility-maximizing agent? The problem sounds easy enough: it doesn't refer to any mystical human qualities like "consciousness", it's purely a question about programming tricks, but still it looks quite impossible to solve. Any thoughts?
You want the program to keep running in the context of the world. To specify what that means, you need to build on top of an ontology that refers to the world. But figuring out such an ontology is a very difficult problem, and you can't even in principle refer to the whole world as it really is: you'll always have uncertainty left, even in a general ontological model.
The program will have to know what tradeoffs to make - for example, whether it's important to survive in most possible worlds with fair probability, or in at least one possible world with high probability. These would lead to very different behavior, and the possibility of such tradeoffs exemplifies how much data such a preference would require. If additionally you want to keep most of the world as it would be if the AI were never created, that's another complex counterfactual for you to bake into its preference.
It's a very difficult problem, probably more difficult than FAI, since for FAI we at least have some hope of cheating and copying formal preference from an existing blueprint, while here you have to build that from scratch, translating your requirements from human-speak into a formal specification.
An agent's "me" is its model of itself. This is already a fairly complicated thing for an agent to have, and it need not have one.
Why do you say that an agent can "directly perceive" its own mind? Or anything else? A perception is just a signal somewhere inside the agent: a voltage, a train of neural firings, or whatever. It can never be identical to the thing that caused it, the thing that it is a perception of. People can very easily have mistaken ideas of who they are.
The program must have something to preserve. My first thought is preservation of declarative memory: ensure that the future contains a chain of systems, implementing the same goal, with overlapping declarative memory.
I haven't done a real analysis; this is just a first thought.
It refers to mystical human qualities like "me" and "think". Basically I put it in the exact same category as 'consciousness'.
No it doesn't. I'm not interested in replicating the inner experience of humans. I'm interested in something that can be easily noticed and tested from the outside: a program that chooses the actions that allow the program to keep running. It just looks like a trickier version of the quine problem, do you think that one's impossible as well?
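For reference, the textbook quine has a two-line solution in Python (a standard example, included just to anchor the analogy):

    # A classic quine: a program whose output is exactly its own source.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

The self-preservation version would be the behavioral analogue: instead of reproducing its own text, the program has to identify and protect its own continued execution.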
If you want this to work in the real world, not just a much simpler computational environment, then for starters: what counts as a "program" "running"? And what distinguishes "the" program from other possible programs? These seem likely to be in the same category as (not to mention subproblems of) consciousness, whatever that category is.
Right now I'd be content with an answer in some simple computational environment. Let's solve the easy problem before attempting the hard one.
My observation is just that the process you're going through here - taking "I think therefore I am" and making it into a descriptive and testable system - is similar to the process others may go through to find the simplest way to have a 'conscious' system. In fact, many people would resolve 'conscious' into a very similar kind of system!
I do not think either are impossible to do once you make, shall we say, appropriate executive decisions regarding resolving the ambiguity in "me" or "conscious" into something useful. In fact, I think both are useful problems to look at.
It's not hard to design a program with a model of the world that includes itself (though actually coding it requires more effort). The first step is to forget about self-modeling, and just ask, how can I model a world with programs? Then later on you put that model in a program, and then you add a few variables or data structures which represent properties of that program itself.
None of this solves problems about consciousness, objective referential meaning of data structures, and so on. But it's not hard to design a program which will make choices according to a utility function which refers in turn to the program itself.
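A minimal sketch of that recipe in Python (all the names here are hypothetical): first a data structure for "a world containing programs", then one extra field marking which record is the modeler itself, and a utility function defined over that field.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class ProgramRecord:
        name: str
        running: bool = True

    @dataclass
    class WorldModel:
        programs: Dict[str, ProgramRecord] = field(default_factory=dict)
        self_id: Optional[str] = None  # the variable that makes this a *self*-model

    world = WorldModel()
    world.programs['agent-1'] = ProgramRecord('agent-1')
    world.programs['other'] = ProgramRecord('other')
    world.self_id = 'agent-1'

    def utility(model: WorldModel) -> float:
        # Refers to "the program itself" only via its entry in the model.
        me = model.programs[model.self_id]
        return 1.0 if me.running else 0.0

Nothing here is conscious, of course - the "self" is just a distinguished key in a dictionary.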
Well, I don't want to solve the problem of consciousness right now. You seem to be thinking along correct lines, but I'd appreciate it if you gave a more fleshed out example - not necessarily working code, but an unambiguous spec would be nice.
Getting a program to represent aspects of itself is a well-studied topic. As for representing its relationship to a larger environment, two simple examples:
1) It would be easy to write a program whose "goal" is to always be the biggest memory hog. All it has to do is constantly run a background calculation of adjustable computational intensity, periodically consult its place in the rankings, and if it's not number one, increase its demand on CPU resources. (A code sketch follows after example 2.)
2) Any nonplayer character in a game which fights to preserve itself is also engaged in a limited form of self-preservation. And the computational mechanisms for this example should be directly transposable to a physical situation, like robots in a gladiator arena.
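Here is a rough sketch of example 1 in Python, reading "biggest hog" as biggest CPU consumer. It assumes the third-party psutil library is available, and the constants are arbitrary:

    import time
    import psutil  # third-party library; an assumption, not stdlib

    def busywork(n):
        # The background calculation of adjustable intensity.
        x = 0
        for i in range(n):
            x += i * i
        return x

    load = 1000000
    me = psutil.Process()
    while True:
        busywork(load)
        procs = list(psutil.process_iter())
        for p in procs:  # prime the per-process CPU counters
            try:
                p.cpu_percent(interval=None)
            except psutil.Error:
                pass
        time.sleep(1.0)
        usage = {}
        for p in procs:  # read the counters: our place in the rankings
            try:
                usage[p.pid] = p.cpu_percent(interval=None)
            except psutil.Error:
                pass
        if usage and max(usage, key=usage.get) != me.pid:
            load *= 2  # not number one: demand more resources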
All these examples work through indirect self-reference. The program or robot doesn't know that it is representing itself. This is why I said that self-modeling is not the challenge. If you want your program to engage in sophisticated feats of self-analysis and self-preservation - e.g. figuring out ways to prevent its mainframe from being switched off, asking itself whether a particular port to another platform would still preserve its identity, and so on - the hard part is not the self part. The hard part is to create a program that can reason about such topics at all, whether or not they apply to itself. If you can create an AI which could solve such problems (keeping the power on, protecting core identity) for another AI, you are more than 99% of the way to having an AI that can solve those problems for itself.
This concept is extremely complex (for example, which "outside" are you talking about?).
You seem to be reading more than I intended into my original question. If the program is running in a simulated world, we're on the outside.
Yes, using a formal world simplifies this a lot.
For those of you who don't want to register at fanfiction.net to receive notifications of new chapters of Harry Potter and the Methods of Rationality, I have set up a mailing list. You can add yourself here: http://felix-benner.com/cgi-bin/mailman/listinfo/fanfic It is still untested, so I don't know whether it will work, but I assume so.
A recent study found that one effective way to resist procrastination on future tasks is to forgive previous procrastination, because the negative emotions that would otherwise remain create an ugh field around the task.
I only found the study recently, but I had personally found this to be effective before. Forcing your way through an ugh field isn't sustainable, due to our limited supply of willpower (this is hardly a new idea, but I haven't seen it referenced in my limited reading on LW).
Some people have tried to emphasize that point, but it isn't universally understood.
Part one of a five part series on the Dunning-Kruger effect, by Errol Morris.
http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1/
Also note that Oscar-winning director Morris's next project is a dark comedy that is a fictionalized version of the founding of Alcor!
Ooh, it's nice to see more details on the lemon juice bank robber. When I first heard about him I thought he was probably schizophrenic. Maybe he was, but the details make it sound like he may indeed have been just really stupid.
Isn't that a bad thing? I suspect a major source will be that recent book...
I thought that Morris's 30-minute interview with Saul Kent showed a favorable perspective on cryonics, or at least genuine neutrality.
Watch and decide for yourself:
http://www.youtube.com/watch?v=HaHavhQllDI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=36
http://www.youtube.com/watch?v=Psm96dR1d1A&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=37
http://www.youtube.com/watch?v=gBYIzWblGTI&feature=PlayList&p=A6E863FB777124DD&playnext_from=PL&index=38
Statistical Analysis Overflow is trying to start up. If you'd be a regular contributor, go over and commit; if enough people commit, it'll go into beta.
It's a "Proposed Q&A site for statistics, data analysis, data mining and data visualization", like Stack Overflow or Math Overflow.
On not being able to cut reality at the joints because you don't even know what a joint is: diagnosing schizophrenia
Gravitomagnetism -- what's up with that?
It's a phrasing of how gravity works using equations that have the same form as Maxwell's equations. And frankly, it's pretty neat: writing the laws of gravity this way gets you mechanics while approximately accounting for general relativity (how approximate, and what it leaves out, I'm not sure).
When I first found out about this, it blew my mind to know that gravity acts just like electromagnetism, but for different properties. We all know about the parallel between Coulomb's law and Newton's law of gravitation, but the gravitoelectromagnetism (GEM) equations show that it goes a lot deeper.
Besides being a good way to ease into an intuitive understanding of the Einstein field equations, to me it's basically saying that gravity and EM both obey some more general law. Anyone know if work has been done on unifying gravity and EM this way? All I hear is that it's easy to unify the strong, weak, and EM forces, but gravity is the stumbling block, so this seems like something they'd want to explore more.
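For reference, the GEM field equations in one common convention (signs and factors of c vary between sources, so treat the exact coefficients as a sketch):

    div E_g = -4 pi G rho_g
    div B_g = 0
    curl E_g = -dB_g/dt
    curl B_g = -(4 pi G / c^2) J_g + (1/c^2) dE_g/dt

where rho_g is the mass density and J_g is the mass current. The minus signs relative to Maxwell's equations encode the fact that like gravitational "charges" attract rather than repel.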
Yet when you go investigate "gravitational induction" to find out how the gravitic parallel to magnetic fields works, you find that this gravitomagnetic field is called the torsion field, and its existence is (at least approximately) implied by general relativity, but then the Wikipedia page says that the torsion field is a pseudoscientific concept. Hm...
So, anyone have an understanding of the GEM analogy who can make sense of this? Does it suggest a way to unify gravity and EM? Or how to create a coil of mass flow that can "gravitize" a region (as a coil of current magnetizes a metal bar)?
I'd mostly like to echo what mindviews said - similar math is not unification - and point out that there was an actual attempt at unification in Kaluza-Klein theory. But I don't actually know anything about that, I should note...
Your link to "torsion field" talks about a completely different concept than the one in GEM. That concept is indeed a notorious example of pseudoscience here in Russia.
No, what's happening is that under certain approximations the two are described by similar math. The trick is to know when the approximations break down and what the math actually translates to physically.
No.
Keep in mind that for EM there are two charges, while gravity has only one. Also, like electric charges repel, while like gravitic charges attract. This messes with your expectations about the sign of an interaction when you go from one to the other. That means your intuitive understanding of EM doesn't map well onto understanding gravity.
True, but what got me the most interested is the gravitic analog of magnetic fields. It shows that masses can produce something analogous to magnetism by their rotation. Rotate one way, you drag the object closer; rotate the other way, you push it away. This allows both attraction and repulsion in the equations for gravity, and suggests something similar is going on that generates magnetism.
Be careful - you are near fringe-science territory.
I'm intrigued by the notion and would like to hear more from someone who can tell me whether I can take this seriously. That 'approximately accounting for' part scares me. Is that just word choice that makes it sound scary? Or perhaps an approximation in the way that Newtonian physics is an approximation? Or maybe it is only an approximation inasmuch as it suffers the same problem all our theories do of being unable to unify all of our physics at once... I'd need someone several levels ahead of me to figure that out.
It's definitely a better approximation than Newtonian physics. This paper might help, as it derives the GEM equations from GR and specifically states what simplifying assumptions it uses, which look to be basically "for greater-than-subatomic distances". And that's exactly where you care about gravity anyway. (At subatomic distances, the other three forces dominate.)
genes, memes and parasites?
tl;dr: "People who suffer from schizophrenia are, in fact, three times more likely to carry T. gondii than those who do not."
"Over the last five years or so, evidence has been building that some human cultural shifts might be influenced, or even caused, by the spread of Toxoplasma gondii."
"In the United States, 12.3 percent of women tested carried the parasite, and in the United Kingdom only 6.6 percent were infected. But in some countries, statistics were much higher. 45 percent of those tested in France were infected, and in Yugoslavia 66.8 percent were infected!"
Wow. How is this parasite spread? Could those 'girly germs' that I avoided in primary school actually reduce my chances of getting schizophrenia?
Wait, what's a girly germ? I googled it and it gave me a link about a Micronesian island :/
Do young kids where you are tease each other about the other sex? 'Cooties', or whatever they call it.
My question is how the parasite is spread. What does that 12.3% mean for the rest of the population? Why did they only test women?
It's a major pregnancy risk.
Ick. My double posting browser bug again.
Have you tried using another browser? That might help you figure out if the problem is actually on the browser end and not something weird with the LW software.
I'm using a different browser (different computer same browser by name) now and it is working fine. My other browser seems to work fine for a while after I restart it until some event causes it to thereafter double post every time. My hunch is that I could identify the triggering of one of the plugins as the cause. Even then the symptom is outright bizarre. What kind of bug would make the browser double send all post requests?
Perhaps a failed attempt at spyware!
No matter. I don't like my other computer anyway.
A comic about the trolley problem.
Upvoted both - I remember this one from a couple of years ago.
See also
See also
That's one of the funniest things I've seen in a while. I wish I could upvote that more.
Some random thoughts about thinking, based mostly on my own experience.
I've been playing minesweeper lately (and I've never played before). For the uninitiated, minesweeper is a game that involves using deductive reasoning (and rarely, guessing) to locate the "mines" in a grid of identical boxes. For such an abstract puzzle, it really does a good job of working the nerves, since one bad click can spoil several minutes' effort.
I was surprised to find that even when I could be logically certain about the state of a box, I felt afraid that I was incorrect (before I clicked), and (mildly) amazed when I turned out to be correct. It felt like some kind of low level psychic power or something. So it seems that our brains don't exactly "trust" deductive reasoning. Maybe because problems in the ancestral environment didn't have clean, logical solutions?
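(For the curious, the certainty in question comes from two mechanical rules: if a revealed number already has that many flagged neighbors, its remaining covered neighbors are safe; if the number equals flagged plus covered neighbors, every covered neighbor is a mine. A minimal sketch in Python, with invented data structures:)

    def neighbors(cell):
        r, c = cell
        return {(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)}

    def deduce(revealed, covered, flagged):
        # revealed: dict mapping (row, col) -> adjacent-mine count
        # covered, flagged: sets of (row, col) cells
        safe, mines = set(), set()
        for cell, count in revealed.items():
            cov = neighbors(cell) & covered
            flags = len(neighbors(cell) & flagged)
            if count == flags:              # all mines already flagged:
                safe |= cov                 # remaining neighbors are safe
            elif count == flags + len(cov):
                mines |= cov                # every covered neighbor is a mine
        return safe, mines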
I also find that when I'm stymied by a puzzle, if I turn my attention to something else for a while, when I come back I can easily find some way forward. The effect is stunning: an unsolvable problem becomes trivial five minutes later. I'm pretty sure there is a name for this phenomenon, but I don't know what it is. In any case, it's jarring.
Another random thought. When I'm sad about something in my life, I usually can make myself feel much better by simply saying, in a sentence, why I'm sad. I don't know why this works, but it seems to make the emotion abstract, as though it happened to somebody else.
Arguably, problems in the modern environment don't have clean, logical solutions either! Note also that people get good at games like minesweeper and chess through learning. If the brain were primarily a big deductive-logic machine, it would become good at these games immediately upon understanding the rules; no learning would be necessary.
I don't think that works for me. I often can't identify a specific cause of my sad feeling, and when I can, thinking about it often makes me feel worse rather than better.
Well I don't mean ruminating about the cause of the sad feeling. That is probably one of the worst things you can do. Rather I meant just identifying it.
For example, when a girlfriend and I broke up (this was a couple years ago) I spent maybe two days feeling really depressed. Eventually, I thought to myself, "You're sad because you broke up with your girlfriend."
That really put it in perspective for me. It made me think of all the cheesy teen movies where kids break up with their sweethearts and act like it's the end of the world, when the viewer sees it as a normal, even banal rite of passage to adulthood. I had always thought people who reacted like that were ridiculous. In other words, it feels like that thought put the issue in "far mode" for me.
That works if there is a specific cause, but like some other people have said, my sad feelings aren't caused by external events.
Same here. I also found that often there's not any cause in the sense of something specific upsetting me; it's just an automatic reaction to not getting enough social interaction.
I'm nitpicking, but maybe it was simple pleasure at getting the game?
Explicitly acknowledging emotions as things with causes is a huge chunk of managing them deliberately. (I have a post in the works on this, but I'm not sure when I'll pull it together.)
Lots of references to the CBT literature would be nice... no need to reinvent the wheel; CBT has a lot of useful things to say about NATs, and strategies to take care of them. (Then again this applies mostly to negative emotions, and deliberately managing positive emotions seems like a cool thing to do too.) That said, more instrumental rationality posts would be great.
What does NAT stand for?
A visual study guide to 105 types of cognitive biases
"The Royal Society of Account Planning created this visual study guide to cognitive biases (defined as "psychological tendencies that cause the human brain to draw incorrect conclusions). It includes descriptions of 19 social biases, 8 memory biases, 42 decision-making biases, and 36 probability / belief biases."
Deus Ex: Human Revolution
IGN Preview
It has been a while since I needed to buy a new computer to play a game.
In addition to being a sequel to Deus Ex and looking generally bad-ass, the game explicitly mentions transhumanism. From the FAQ:
I remember a post by Eliezer in which he was talking about how a lot of people who believe in evolution actually exhibit the same thinking styles that creationists use when they justify their belief in evolution (using buzzwords like "evidence" and "natural selection" without having a deep understanding of what they're talking about, having Guessed the Teacher's Password). I can't remember what this post was called - does anybody remember? I remember it being good and wanted to refer people to it.
I remember reading a post titled "Science as Attire," which struck me as making a very good point along these lines. It could be what you're looking for.
As a related point, it seems to me that people who do understand evolution (and generally have a strong background in math and natural sciences) are on average heavily biased in their treatment of creationism, in at least two important ways. First, as per the point made in the above linked post, they don't stop to think that the great majority of folks who do believe in evolution don't actually have any better understanding of it than creationists. (In fact, I would say that the best informed creationists I've read, despite the biases that lead them towards their ultimate conclusions, have a much better understanding of evolution than, say, a typical journalist who will attack them as ignorant.) Second, they tend to way overestimate the significance of the phenomenon. Honestly, if I were to write down a list of widespread delusions sorted by the practical dangers they pose, creationism probably wouldn't make the top fifty.
I'm extremely curious to hear both your list and JoshuaZ's list of the top 20 or so most harmful delusions. Feel free to sort by category (1-4, 5-10, 11-20, etc.) rather than rank in individual order.
I'll give you a big one: Dying a martyr's death gives you a one-way ticket to Paradise.
Mass_Driver:
I'm not sure if that would be a smart move, since it would mean an extremely high concentration of unsupported controversial claims in a single post. Many of my opinions on these matters would require non-obvious lengthy justifications, and just dumping them into a list would likely leave most readers scratching their heads. If you're really curious, you can read the comment threads I've participated in for a sample, in particular those in which I argue against beliefs that aren't specific to my interlocutors.
Also, it should be noted that the exact composition of the list would depend on the granularity of individual entries. If each entry covered a relatively wide class of beliefs, creationism might find itself among the top fifty (though probably nowhere near the top ten).
In this format that sounds like a good thing! At worst it would spark curiosity and provoke discussion. At best, people would encounter a startling opinion that they had never seriously considered, think about it for 60 seconds, and then form an understanding that either agrees with yours or disagrees, for a considered reason.
seconded, but a list of 20 seems too long/too much work, no?
I'd be thinking 5. :)
I've separated some forms of alternative medicine out when one might arguably put them closer together. Also, I'm including Young Earth Creationism, but not creationism as a whole; where that goes might be a bit more complicated. There's some overlap between some of these (such as Young Earth Creationism and religion). The list also does not include any beliefs that have a fundamentally moral component. I've tried not to include beliefs which are stupid but hard to deal with empirically (say, that there's something morally inferior about specific racial groups). Finally, when compiling this list I've tried to avoid thinking too much about the overall balance of costs and benefits a delusion provides. So for example, religion is listed where it is based on the harm it does, without taking into account the societal benefits that it also produces.
1-4: Religion, Ayurveda, Homeopathy, Traditional Chinese medicine (as standardized post 1950s)
5-10: The belief that intelligence differences have no strong genetic component. The belief that intelligence differences have no strong environmental component. The belief that there are no serious existential threats to humans. The belief that external cosmetic features or national allegiances are strong indicators of mental superiority or inferiority. The belief that human females have fundamentally less mental capacity and that this difference is enough to be a useful data point when evaluating humans. The belief that the Chinese government can be trusted to benefit its people or decide what information they should or should not have access to. (The primary reason this gets on the list is the sheer size of China. There are other governments which are much, much worse, and whose people hold similar delusions, but the damage done is frequently much smaller.)
11-20: Vaccines cause autism. Young Earth Creationism. The Invisible Hand of the Market solves everything. Government solves everything. Providence. That there are no fundamental limits on certain natural resources. That nuclear power is intrinsically worse than other forms of energy. The belief that large segments of the population are fundamentally not good at math or science. Astrology. The belief that antibiotics can deal with viral infections.
There were a few that I wanted to stick on for essentially emotional reasons. So for example Holocaust Denial almost got on the list and when I tried to justify it I saw myself engaging in what was clearly motivated cognition.
This list is very preliminary. The grouping is also very tentative and could likely be easily subject to change.
This one caught my eye; I don't think I've seen it listed as an obvious delusion before. Can you expand on this? I guess the idea is that a much larger number of people could make use of math or science if they weren't predisposed to think that they belong to an incapable segment?
I'm thinking of something like picking the quarter of the population that scores at the bottom of a standard IQ test or the local SAT-equivalent as the "large segment of the population", though. A test for basic science and mathematics skills could be being able to successfully figure out solutions to some introductory exercises from a freshman university course in mathematics or science, given the exercise, relevant textbooks and prerequisite materials, and, say, up to a week to work things out from the textbook.
It doesn't seem obvious to me that such a test would end up with results that would make the original assertion go straight into 'delusion' status. My suspicions are somewhat based on an article from a couple of years back, which claimed that many freshman computer science students seem to simply lack the basic mental-model-building ability needed to start comprehending programming.
Yes. And more people would go into math and science.
That's a very interesting article. I think that the level of, and type of abstraction necessary to program is already orders of magnitude beyond where most people stop being willing to do math. My own experience in regards to tutoring students who aren't doing well in math is that one of the primary issues is one of confidence: students of all types think they aren't good at math and thus freeze up when they see something that is slightly different from what they've done before. If they understand that they aren't bad at math or that they don't need to be bad at math, they are much more likely to be willing to try to play around with a problem a bit rather than just panic.
I was an undergraduate at Yale, which is generally considered to be a decent school that admits people who are by and large not dumb. And one thing that struck me was that even in that sort of setting, many people minimized the amount of math and science they took. When asked about it, the most common claim was that they weren't good at it. Some of those people are going to end up as future senators and congressmen with close to zero idea of how science works or how statistics work, other than at the level they got from high school. If we're lucky, they know the difference between a median and a mean.
Is it trust or fear that is the real problem in that case? What would you do as an average Chinese citizen who wanted to change the policy? (Then, the same question assuming you were an actual Chinese citizen who didn't have your philosophical mind, intelligence, idealism and resourcefulness.)