Open Thread: February 2010
Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
There is a more recent open-thread if you want to post there.
I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of a collection of solutions to the exercises in the book?
At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.
Hint: you need to use the sum rule.
The computation is quite manageable for the case of k=5. For the general case, I too was left feeling dissatisfied with the expression I found, but on reflection I'm somewhat confident it is the correct answer.
The case k=4, Ni=13, m=5 is solved numerically on a Web site which discusses probability for Poker players, that was helpful in checking my results; the answer to 3.2 is a generalization of the results given there.
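As a numerical sanity check (my own sketch, not from the thread, assuming the exercise asks for the probability that all k colors appear in m draws without replacement): the k=4, Ni=13, m=5 case is the probability that a 5-card poker hand contains all four suits, which inclusion-exclusion (the sum rule) gives directly.

```python
from math import comb

def p_all_colors(k, n_per_color, m):
    """Probability that m draws without replacement from k colors
    (n_per_color balls of each) include at least one ball of every color,
    by inclusion-exclusion over the set of missing colors."""
    n_total = k * n_per_color
    total = comb(n_total, m)
    favorable = sum(
        (-1) ** j * comb(k, j) * comb(n_total - j * n_per_color, m)
        for j in range(k + 1)
    )
    return favorable / total

# Poker check: a 5-card hand containing all four suits.
print(round(p_all_colors(4, 13, 5), 4))  # 0.2637
```

The alternating sum counts hands missing no suit, subtracting hands missing at least one suit, adding back hands missing at least two, and so on.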
There does not appear to be a complete collection of solutions. This site comes closest. If I were you I would avoid looking at their solution for exercise 4.1 (I'm trying to forget what little I've seen of it as I'd like to solve 4.1 under my own power), but I would also not feel bad about giving up on 4.1 if you find it difficult.
I'd be happy to discuss Jaynes further over DMs or email - though I may respond at a slow pace, as I'm working through the book as my other activities allow. I'm on chapter 6 now.
Thanks, that was exactly the sort of hint I needed (i.e. of the half dozen different approaches I've been working on, focus on this one). On to 3.3.
We are status oriented creatures especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However it is also one of the more efficient ways we have of getting truths, so it must be doing some things correctly. I think that it may have some ideas that surround it that reduce the problems of it being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal your attractive qualities; people don't like people who tell them lies or give them false information. I suspect that falsifiability is popular among scientists because it allows them to pre-commit to changing their minds without taking too high a status hit. This is a bit stronger than leaving a line of retreat: it not only allows you to retreat, it says in advance when you will, and it is a public admission. Scientists can say that they currently believe idea X, but that if experiment Y shows Z they will abandon X. That statement is also useful for other people, as it allows them to see the boundaries of the idea.
This can also be seen as working to oppose confirmation bias. If you think you are right, there is no reason to look for data that tests your assumptions. If you want to pre-commit to changing your mind, you need to think about how your idea might be wrong, and you are allowed to look for data.
I would like to see this community adopt this approach.
In the spirit of this: I would cease advocating this approach if it were shown that people who pre-committed to changing their minds suffered as large a status hit as those who didn't, when they were shown to be wrong.
Upvoted. Although I am curious as to how you will measure the status hits that various people take from being wrong.
I'd assumed there were standard ways of measuring it, along the lines of a typical psychology experiment: involve two groups of people in two different scenarios (wrong, and wrong with retreat), then quiz the audience on their opinion of the person: their intelligence, whether you would work with them, trust them to perform in their area of expertise, be their friend, etc.
However I can't find much with a bit of googling. I'll have a look into it later.
Thanks. That sounds good, but it is an experimental program, not something you'd observe on Less Wrong.
I expect that you could get more complex results than yes or no. With some primes or some observers, preparing a retreat would help; with others it wouldn't; and in some contexts you'd lose status and credibility directly for trying to prepare a retreat.
True. We are interested in communities where truth-tracking is high status, so that cuts down the number of contexts. We would also probably need to evaluate it against other ways of coping with being incorrect (dissociation, e.g. Eliezer (1999); apology; etc.) and see whether it is a good strategy on average.
I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.
I find myself nodding along in agreement to this until I get to "Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain", which seems at the same time too specific, given how little we know, and too vague, with absolutely no explanatory power. In particular, "quantum" and "holistic" both seem like empty buzzwords in this context, along the lines of mysterious answers to mysterious questions, or along the lines of "consciousness is weird, quantum mechanics is weird, therefore quantum mechanics must be involved in consciousness".
Of course, this is being a little unfair -- a proposed solution needs to be more specific than what we as yet know, and a solution that is not fully worked out by necessity has vague areas. But the feel of each of these is towards the decidedly not useful portion of either side. You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't. And, well, given how warm, wet, and squishy the human nervous system is, I flatly would not expect any large scale quantum coherences. (Though the limits are often overstated). Again, "holistic" doesn't add much; heck, I'm not even sure what sorts of mechanisms it would rule out.
I posted here so my correspondent could see a second opinion, by the way, so thanks for that.
First proposition: if you try to bring consciousness into alignment with standard physical ontology, you get a dualistic parallelism at best. (Arguments here.)
Second proposition: the new factor in QM is entanglement. I defined my quantum holism here as "the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these."
I can explain technically what these "local wholes" might look like. You should think of a spacelike hypersurface consisting of numerous Hilbert spaces connected by mappings into a graph structure. Each Hilbert space contains a state vector. Then the whole thing evolves, the graph structure and the state vectors. This is, more or less, the QCH formalism for quantum gravity (discussed here).
The Hilbert spaces are the local wholes (the "monads" of a previous post). My version of quantum-mind theory is to say that the conscious mind is a single one of these, and that the series of experiences one has in life correspond to the evolution of its state vector. Now, although I started out by saying that standard physical ontology is irredeemably unlike what we actually experience, I'm certainly not going to say that a featureless vector jumping around an abstract multidimensional space is much better. Its advantage, in fact, is its radically structureless abstractness. It is a formalism telling us almost nothing about the nature of things in themselves; constructed only to be a predictively adequate black box. If we then treat conscious appearances as data about the inner nature of one thing, at least - ourselves, our minds, however you end up phrasing it - they can help us to interpret the formalism. What we had described formally as a state vector evolving in a certain way in Hilbert space would be understood as a mathematical representation of what was actually a conscious self undergoing a certain series of experiences.
In principle, you could hope to use experience to reveal the reality behind formal physical description at a much higher level - for example, computational neuroscience. But I think that non-quantum computational neuroscience presupposes an atomistic, spatialized ontology which is just mismatched to the specific nature of consciousness (see earlier remark about dualism resulting from that framework). So I predict that quantum coherence exists in the brain and is functionally relevant to conscious cognition. As you observe, it's a challenging environment for such effects, but evolution is ingenious and we keep finding new twists on what QM can do (the latest).
Thanks. Though I'm still highly skeptical, this gives me much more to engage with. This will take me some time to process though, and it might take me a while as I'm preparing for a conference this week.
XKCD hits a home run with its Valentine's Day comic.
Science Valentine
Given the alt text in particular I'd almost put this in the monthly quotes thread too. :)
You have been preempted.
So I have. Based on the (asterisk-free) dates it would seem CronoDAS was too, although maybe the rest of the comic is worth the double link from here.
I used the Google Custom Search bar and didn't find it.
Eliezer has a new fanfic available.
I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collection of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.
But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being non-deterministic due to quantum mechanical considerations by using the many-worlds hypothesis: all symmetric possible 'quark' choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box depends on some random factors, then Omega can't predict what will happen, because when he makes the prediction he is up-branch of you. He doesn't know which branch you'll be in. Or, more accurately, he won't be able to make a prediction that is true for all the branches.
I think Omega's capabilities serve a LCPW function in thought experiments; it makes the possibilities simpler to consider than a more physically plausible setup might.
Also, I'd say that our wetware brains probably aren't close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.
The world is deterministic at least to the extent that everything knowable is determined (but not necessarily the other way around). This is why you need determinism in the world in order to be able to make decisions (and can't use something not being determined as a reason for the possibility of making decisions).
So long as you make your Newcomb's choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box - if Robin Hanson is right about mangled worlds and there's a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.
Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezers who two-box are the ones that really win, as rare as they are.
The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.
The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.
And, also, we define winning as winning on average. A person can get lucky and win the lottery -- doesn't mean that person was rational to play the lottery.
Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
Interesting. I was idly wondering about that. Along somewhat different lines:
I've decided that I am a one-boxer, and I will one-box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and, halfway towards the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box.
Winning the extra amount in this way in a handful of worlds won't do anything to my average winnings-- it won't even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.
Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you're saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
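A quick numerical check of this (my own sketch, using the setup just described): with P(box A empty) = p and Omega's decision independent of yours, the frequency of the maximal reward is f(p) = p(1-p), which peaks at p = 0.5 rather than at an infinitesimally small p.

```python
def max_reward_prob(p):
    """P(you two-box AND box A is full), assuming Omega independently
    leaves box A empty with probability p (the setup described above)."""
    return p * (1 - p)

# Scan p on a fine grid; the maximum lands at p = 0.5.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=max_reward_prob)
print(best, max_reward_prob(best))  # 0.5 0.25
```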
Why is this comment being down-voted? I thought it was rather clever to use Omega's one weak spot -- quantum uncertainty -- to optimize your winnings even over a set with measure zero.
Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.
When the star streaks across the sky, you think, "Ohmigosh! It happened! I'm about to get rich!" Then you open the boxes and get $1000.
Essentially, it boils down to this: if you can predict a scenario where you will two-box instead of one-box, then Omega can as well.
The idea of flipping quantum coins is more foolproof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.
Note: I am not up to speed on quantum mechanics. I could be off on a few things here.
OK, right: looking for a merging of stars would be a terrible anomaly to use because that's probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can't see it coming either. (He can accurately predict that I will two-box in this case, but he can't predict that the milk will unspill.)
I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23^23 coins.
Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.
I didn't down vote but I confess I don't really know what you're talking about in that comment. Why would you two box in that case? What really important thing is at stake? I don't get it.
OK. The way I've understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:
you two box --> you get $2,000 ($1000 in each box)
you one box --> you get $1M ($1M in one box, $1000 in the second box)
If Omega is not a perfect predictor, it's possible that you two box and you get 1,001,000. (Omega incorrectly predicted you'd one box.)
However, if you are likely to 2box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1box -- so that you can't beat him).
My solution was to 1box almost always -- so that Omega predicts you will one box, but then 'cheat' and 2-box almost never (but sometimes). According to Greg, your 'sometimes' has to be over a set of measure 0, any larger than that and you'll be penalized due to Omega's arithmetic.
Nothing -- if only an extra thousand is at stake, I probably wouldn't even bother with my quantum caveat. One million dollars would be great anyway. But I can imagine an unfriendly Omega giving me choices where I would really want to have both boxes maximally filled ... and then I'll have to realize (rationally) that I must almost always 1-box, but I can get away with 2-boxing a handful of times. The problem with a handful is: how does a subjective observer choose something so rarely? They must identify an appropriately rare quantum event.
So this job could even be accomplished by flipping a quantum coin 10000 times and only two-boxing when they come up tails each time. You're just looking for a decision mechanism that only applies in a handful of branches.
Yes, exactly.
The math is actually quite straightforward, if anyone cares to see it. Consider a generalized Newcomb's problem. Box A either contains $A or nothing, while box B contains $B (obviously A>B, or there is no actual problem). Let Pb be the probability that you 1-box. Let Po be the probability that Omega fills box A (note that only quantum randomness counts here; if you decide by a "random" but deterministic process, Omega knows how it turns out, even if you don't, so Pb=0 or 1). Let F be your expected return.
Regardless of what Omega does, you collect the contents of box A, and have a (1-Pb) probability of collecting the contents of box B.
F(Po=1) = A + (1-Pb)B
F(Po=0)=(1-Pb)B
For the non-degenerate cases, these add together as expected. F(Po, Pb) = Po(A + (1-Pb)B) + (1-Po)[(1-Pb)B]
Suppose Po = Pb := P
F(P) = P(A + (1-P)B) + [(1-P)^2] B
=P(A + B - PB) + (1-2P+P^2) B
=PA + PB - (P^2)B + B - 2PB + (P^2)B
=PA + PB + B - 2PB
=B + P(A-B)
If A > B, F(P) is monotonically increasing, so P = 1 gives the maximum return. If A < B, P = 0 is the maximum (I hope it's obvious to everyone that if box B has MORE money than a full box A, 2-boxing is ideal).
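The algebra above can be checked numerically (a sketch of mine, using the same definitions as the derivation):

```python
def expected_return(A, B, Po, Pb):
    # F(Po, Pb) = Po*(A + (1-Pb)*B) + (1-Po)*(1-Pb)*B, as derived above.
    return Po * (A + (1 - Pb) * B) + (1 - Po) * (1 - Pb) * B

def f(A, B, P):
    # Diagonal case Po = Pb = P; should simplify to B + P*(A - B).
    return expected_return(A, B, P, P)

A, B = 1_000_000, 1_000
for P in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(f(A, B, P) - (B + P * (A - B))) < 1e-6
```

Since B + P(A-B) is linear and increasing in P when A > B, the scan confirms that committed 1-boxing (P = 1) dominates.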
Thanks to everyone who replied. So I see that we don't really believe that the universe is deterministic in the way implied by the problem. OK, that's consistent then.
What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don't always go the same way. In fact, Omega doesn't even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb's problem to work as they're supposed to.
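That sampling idea can be sketched as follows (every name here, and the 3% two-boxing propensity, is my own illustrative assumption rather than anything from the thread):

```python
import random

def simulate_agent(rng):
    # Hypothetical stand-in for simulating one branch of the agent's
    # decision; returns True if the agent two-boxes in that branch.
    return rng.random() < 0.03  # assumed 3% underlying two-box propensity

def classify(n_samples=10_000, threshold=0.5, seed=0):
    """Estimate the agent's two-box probability from a moderate sample
    of simulated branches, then threshold it as the rules require."""
    rng = random.Random(seed)
    p_hat = sum(simulate_agent(rng) for _ in range(n_samples)) / n_samples
    label = "two-boxer" if p_hat > threshold else "one-boxer"
    return label, p_hat
```

With a propensity well below the threshold, the sampled estimate classifies the agent as a one-boxer with overwhelming reliability, which is all the thought experiment needs.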
But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
I'm sufficiently uninformed on how quantum mechanics would interact with determinism that so far I've been operating under the assumption that it doesn't. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found? (I'm not sure even in principle how you could prove that something is random. It'd be proving the negative on the existence of causation for a possibly-hidden cause.)
There is no special line where events become macro-level events. It's not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you're big enough that the chance that every particle of your body moves together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).
In principle, our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.
IAWYC except, of course, for this:
As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.
That's a fair point, but I don't think it is quite that easy. On one formulation, a deterministic system is a system whose end conditions are set by the rules of the system and the starting conditions. Under this definition, MWI is deterministic. But often what we mean by determinism is that it is not the case that the world could have been otherwise. For one extension of 'world' that is true. But for another extension, the world not only could have been otherwise; it is otherwise. There are also a lot of confusions about our use of indexicals here: what we're referring to with "I", "you", "this", "that", "my", etc. Determinism usually implies that every true statement (including true statements with indexicals) is necessarily true. But it isn't obvious to me that many-worlds gives us that. Also, a common thought experiment to glean people's intuitions about determinism is basically to say that we live in a universe where a supercomputer that can exactly predict the future is possible. MWI doesn't allow for that.
Perhaps we shouldn't try to fit our square-pegged physics into the round holes of traditional philosophical concepts. But I take your point.
Why would determinism have anything to say about indexicals? There aren't any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don't see what use such a concept of "determinism" would have.
Thinking about this, it isn't a concern about indexicals but a concern about reference in general. When we refer to an object, we're not referring to its extension throughout all Everett branches, but we are referring to an object extended in time. So take a sentence like "The table moved from the center of the room to the corner." If determinism is true, we usually think that all sentences like this are necessary truths, and that sentences like "The table could have stayed in the center" are false. But I'm not sure what the right way to evaluate these sentences is given MWI.
Voted down because my writing is confusing or because I said something stupid?
Yes; since many important macroscopic events (e.g. weather, we're quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.
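A standard toy illustration of that sensitivity (my own sketch, using the chaotic r=4 logistic map rather than actual weather dynamics): two trajectories starting a mere 1e-12 apart lose all resemblance within a few dozen iterations.

```python
def logistic(x, r=4.0):
    # The r=4 logistic map, a textbook chaotic system on [0, 1].
    return r * x * (1 - x)

# Two initial conditions differing by a tiny "quantum-sized" nudge.
x, y = 0.3, 0.3 + 1e-12
for _ in range(60):
    x, y = logistic(x), logistic(y)
print(abs(x - y))  # far larger than the initial 1e-12 separation
```

Errors in this map roughly double each step, so a 1e-12 discrepancy reaches order one in about 40 iterations; the analogous amplification is why a single small quantum event can end up steering macroscopic weather.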
Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there'd be no stable atoms.
No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell's inequalities. But that appears not to be the case.
Or, of course, the causes could be non-local.
But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.
Materialists must hope that in spite of Bell's inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.
Alicorn asked above:
In principle, you can't. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.
Later edit: I realize that this comment is somewhat of a non-sequitur in the context of this thread. (oops) I'll explain that these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I'm worried about whether materialism is self-consistent, sometimes I'm worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.
In that case I am not a materialist. I don't believe in any entities that materialists don't believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking "which Everett branch are we in" can have nondeterministic answers.
Those sorts of question can arise in non-QM contexts too.
No worries -- you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that's what you just wrote -- sorry if I misunderstood something.)
Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can't say the direction was undetermined.
Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.
What are Bell's inequalities, and why do quantumly-behaving things with deterministic causes have to follow them?
Um... am I missing something or did no one link to, ahem:
http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/
Thank you, although I find this a little too technical to wrap my brain around at the moment.
Alicorn, if you're free after dinner tomorrow, I can probably explain this one.
Well, actually everything has to follow them because of Bell's Theorem.
Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the inequality. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.
The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest 'spooky action at a distance' because particles appear to share information instantaneously, at a distance, long after an interaction between them.
People applying "common sense" would like to argue that there is some way that the information is being shared -- some hidden variable that collects and shares the information between them.
Bell's Inequality assumes only that there is some such hidden variable operating locally* -- with no specifications of any kind on how it works -- and deduces correlations between particles sharing information that are in contradiction with experiment.
* that is, mechanically rather than 'magically' at a distance
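For concreteness, here is a sketch (mine, not from the thread) using the CHSH form of Bell's inequality: any local hidden-variable model obeys |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) reaches |S| = 2*sqrt(2) at the standard measurement angles.

```python
from math import cos, pi, sqrt

def E(a, b):
    # Quantum correlation for spin measurements at angles a, b
    # on a singlet (maximally entangled) pair.
    return -cos(a - b)

def chsh(a1, a2, b1, b2):
    # The CHSH combination; local hidden variables bound |S| <= 2.
    return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Standard maximally-violating angle choices.
S = chsh(0, pi / 2, pi / 4, 3 * pi / 4)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the local bound of 2
```

The point is exactly the one made above: the bound of 2 follows from locality alone, with no assumptions about how the hidden variable works, and experiment sides with the 2.828.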
There's no good explanation anywhere. :(
Perfection is impossible, but a very, very accurate prediction might be possible.
Yes.
This is actually a damned good question:
http://www.scientificblogging.com/mark_changizi/why_doesn%E2%80%99t_size_matter%E2%80%A6_brain
Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
I second Kevin: the nearest analogy that occurs to me is playing "kick the landmine" when the landmine is almost surely a dud.
Of course, the advantage of "kick the landmine" is that you don't take the rest of the world out in case it wasn't a dud.
Sounds fun. Though so far we don't have anything that you can "teach" in a general way.
Do you mean playing around with backprop? Or making your own algorithms.
Either.
If this is your state of knowledge then... how can I put this: it seems extremely likely that you'll start playing around with very simple tools, find out just how little they can do, and, if you're lucky, start reading up and rediscovering the world of AI.
Backprop is likely to be safe. Lots of AI students play around with it and it is well behaved mathematically. If it was going to kill us it would have done so already. More advanced stuff has to be evaluated individually.
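For the curious, a minimal sketch of the sort of toy gradient-descent exercise being described (my own example, and strictly the one-neuron delta rule rather than full multi-layer backprop, though it uses the same chain-rule idea): fit y = 2x + 1 with a single linear neuron.

```python
def train(steps=2000, lr=0.05):
    """Fit y = 2x + 1 with one linear neuron via per-sample gradient descent."""
    data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-10, 11)]
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * x + b
            err = pred - y       # derivative of squared error w.r.t. pred (up to a factor of 2)
            w -= lr * err * x    # chain rule: dLoss/dw = err * x
            b -= lr * err        # chain rule: dLoss/db = err
    return w, b

w, b = train()
print(w, b)  # converges to roughly w = 2.0, b = 1.0
```

As the comment above says, nothing here is going to surprise anyone, let alone endanger them; it is well-behaved convex optimization.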
What on Earth? When you say "may I" you presumably mean "is this a good idea" since obviously we're not in a position to stop you. But you're already aware of the arguments why it isn't a good idea and you don't address them here, so it's not clear that you have a good purpose for this comment in mind.
I interpreted as akin to a call to a suicide hot-line.
'This is sounding like a good idea...'
(Can you help / talk me out of it?)
If this is the case, we can probably give support. I certainly understand how curiosity can pull, and Warrigal may already be rationalizing that he probably won't make progress, and we can give advice that balances that. But then, is it true that Warrigal should be afraid of knowledge?
I don't think it's fear of knowledge that leads me to suggest you don't try to build a catapult to twang yourself into a tree.
I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you're so astronomically unlikely to succeed that it doesn't matter.
No.
What made you think you might get any other answer?
Well, I did get other answers. Ask Kevin and thomblake why they answered that way, if you like.
I wonder if physicists would admit the effect of genealogy on their interpretation of QM?
People who ask physicists their interpretation of QM: next time, if the physicist admits controversy, ask about genealogy and other forms of epistemic luck.
I'm a grad student of quantum information. My advisor doesn't really talk much about interpretations, going only so far as to point out how silly the Bohmians are. That's largely true of most in this group, though one is an avowed "quantum Bayesian": probability as conceptualized by humans is simply the specialization to commuting variables, but we need non-commuting variables to deal with the world. The laws of quantum mechanics tell you how to update your information under time evolution.
My interpretation of QM was formed as an undergrad, with no direct professorial contact. It was based mostly on how arbitrary the placement of the classical-quantum divide is in standard treatments, so long as you place it such that enough stuff is quantum. I took that seriously, bit the bullet, and so am an Everettian.
What happens when you comment on an old pre-LW imported Overcoming Bias post? Does your comment go to the bottom or the top?
Just curious.
Obviously, the thing to do is to reply to an established comment so that the order of comments is maintained. Does voting on old comments now change their order? If so, I should stop doing that.
Better yet, perhaps any new comments you want to make should be exported to an open thread, for the historical authenticity of the post and its original comments.
It doesn't when I do it.
Even if it doesn't go to the bottom under the default setting, you can choose "Old" from the dropdown menu next to "Sort By" to view comments in chronological order (this preserves threads).
It goes to the bottom. At least, it has in my experience.
I once asked about commenting on old posts. People seemed okay with it.
Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.
The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
These are excellent questions/ideas. I want a mentor too!
I thought about contacting you to see if you wanted to start a little study group reading through the sequences. (For example, I started reading through the metaethics sequence and it was useless. My kinds of questions are like, 'What do any of these words mean? What's the implied context? Etc., etc.) But I'm not very good at details, and couldn't imagine any way of doing so. Except maybe meeting somewhere like Second Life so we can chat...
Do consider not starting with the metaethics sequence...
Yeah, actually, I would be willing to do that.
Great! And we'll announce when we meet and invite whoever wants to come?
Let's start by doing it one time.
Cool. Does IRC work for you? I think I still have a client lurking about somewhere...
And I vaguely remember there being an LW channel at one point. Yep: #lesswrong. And there is a nifty web link on the wiki. Cool.
EDIT: Yeah, I was wondering about the hhhhhhhhf1. I would have guessed a cat.
Countdown: 13 hours
IRC Meeting At Less Wrong:
MrHen and I are meeting at 8:15 p.m. Central for our first IRC Less Wrong study-group session. Please join us -- we will meet here a few minutes before the meeting.
Our topic today is evidence: we'll discuss the post How Much Evidence Does It Take? and possibly the supporting post What Is Evidence? Our goal is to build a foundation for discussing Occam's Razor and Einstein's Arrogance.
I'll send out regular announcements closer to the session if there is no recent comment activity here. Please announce if you are planning to attend -- it will encourage others to attend too.
So I ended up at the game in person. How did this go? Any insights to share with those of us who weren't there?
This is a transcript of the chat log.
In the post, How Much Evidence Does It Take, Eliezer described the concept of 'bits' of information. For example, if you wanted to choose winning lottery numbers with a higher probability, you could have a box that beeps for the correct lottery number with 100% probability and only beeps for an incorrect number with 25% probability. Then the application of this box would represent 2 bits of information -- because it winnows your possible winning set by a factor of 4.
During the chat, we discussed this definition of "bits". MrHen brought in some mathematics to discuss the case where the box beeps with less than 100% probability for the correct number (reduced box sensitivity, with possibly the same specificity), and how this would affect the calculation of bits.
An interesting piece of trivia came up. Measuring information in base 2 is arbitrary, of course; instead of measuring bits we could measure "bels" or "bans" (base 10).
Wow, I wish I'd been there for that (had to go to a trade group meeting) -- that's one of the topics that interests me!
Btw, I think you mean that a beep-for-incorrect gives you 2 bits of information. Just applying the box will usually (~75% of the time) not indicate either way. The average information gained from an application of the box (aka entropy of the box variable aka expected surprisal of using the box aka average information gain on using the box) would be ~0.5 bits.
And yes there's also nats (base e).
I believe the point was that a beep constitutes 2 bits of evidence for the hypothesis that the number is winning.
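The arithmetic behind the "2 bits" claim can be sketched in a few lines. This is just an illustration of the likelihood-ratio calculation from the post; the 100%/25% beep rates are the ones given in the example above.

```python
import math

def bits_of_evidence(p_obs_given_h, p_obs_given_not_h):
    """Evidence in bits = log2 of the likelihood ratio."""
    return math.log2(p_obs_given_h / p_obs_given_not_h)

# The box beeps with probability 1.0 for the winning number
# and probability 0.25 for a non-winning number.
beep_bits = bits_of_evidence(1.0, 0.25)
print(beep_bits)  # 2.0 bits: a beep winnows the candidate set by a factor of 4

# The same evidence measured in bans (base 10), as mentioned above:
beep_bans = math.log10(1.0 / 0.25)
print(beep_bans)
```

Note that the *average* information per use of the box is much less than 2 bits, since most applications produce no beep at all, which is the distinction drawn in the comment above.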
Super easy specifics on how to get where we will be: Click on this link and enter a nickname (hopefully something similar to your name here). And that should do it.
All are welcome and you can just lurk if you want. I am there now while I munch on some beans for dinner but the discussion should begin in about an hour.
We're about to begin our IRC meeting if anyone else wants to join us!
Countdown: 3 hours till our IRC meeting.
You're welcome to join us.
How does one access it? Link?
MrHen left these convenient instructions.
I plan to attend.
If I'm home I'll log in. But I'm going to be watching basketball at the same time so my participation might not be heavy.
How much evidence does it take for you to accept 3:2 odds that your team will win the match given your prior understanding of each team's performance at various stages of a game?
So I actually have this idea of doing a series (or just a couple) of top level posts about rationality and basketball (or sports in general). I'm partly holding off because I'm worried that the rationality aspects are too basic and obvious and no one else will care about the basketball parts.
But sports are great for talking about rationality because there is never any ambiguity about the results of our predictions, and because there are just bucket-loads of data to work with. On the other hand, a surprising amount of irrationality can still be found even in professional leagues where being wrong means losing money.
Anyway, to answer your question: You get two kinds of information from play at the beginning of the game. First, you get information about what the final score will be from the points that have already been scored. So if my team is up 10 points, the other team needs to score 11 more points over the remainder of the game in order to win; the less time remaining in the game, the more significant this gets. The other kind of information is about how the teams are playing that day. But if a team is playing significantly better or worse than you would have predicted coming in, their performance is most likely just noise; regression to the mean is what should be expected. So my prediction of a team's performance for the remainder of a game is going to be dominated by my priors, which hopefully are pretty sophisticated and based on a lot of data. For college basketball I start here and then adjust for a couple of things that can't be taken into account by that model (the way individual players match up against each other, injuries, any information about the teams' mental states, etc.).
If you have all this information you can actually give, at any point during a game, the odds of your team winning. (There are a couple of other factors to consider as well; in particular, you need to estimate how many possessions there will be in the rest of the game, because the information we have about team performance is per possession, not per minute.) I've also ignored fan attendance in this comment, but that is really important evidence too. I ended up attending the game in person, and when I arrived I realized the venue included at least as many fans of the other team as of my team -- and right there the probability my team was going to win dropped by 10%.
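The kind of in-game estimate described here can be sketched with a normal approximation over the remaining possessions. All numbers below are made up for illustration; a real model would plug in per-possession efficiency priors like the ones linked above.

```python
import math

def win_probability(lead, possessions_left, margin_per_poss=0.0, sd_per_poss=1.1):
    """P(current leader holds on), treating the remaining score margin as
    approximately normal: mean = lead + n*mu, sd = sigma*sqrt(n)."""
    if possessions_left == 0:
        return 1.0 if lead > 0 else 0.0
    mean = lead + possessions_left * margin_per_poss
    sd = sd_per_poss * math.sqrt(possessions_left)
    # Normal CDF computed via the error function
    return 0.5 * (1 + math.erf(mean / (sd * math.sqrt(2))))

# A 10-point lead with 30 possessions left, evenly matched teams:
print(win_probability(10, 30))
# The same lead with only 5 possessions left is worth much more:
print(win_probability(10, 5))
```

This captures the point in the comment that a fixed lead becomes more significant as the remaining number of possessions shrinks.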
Scheduled IRC meetings?
Sounds good to me. I would enjoy being present at a meeting in order to discuss topics from this site.
I'm not an expert either - in fact I'm not sure there is exactly expertise in what you ask - but mail me anytime - Paul Crowley, paul at ciphergoth dot org. Anyone here is very welcome to mail me.
Cool. Will do.
Ace. If you don't get a response prompt me to check my spam filters!
While reading old posts and looking for links to topics in upcoming drafts I have noticed that the Tags are severely underutilized. Is there a way to request a tag for a particular post?
Example: Counterfactual has one post and it isn't one of the heavy hitters on the subject.
If you have specific articles in mind to be tagged, I'm sure just asking their authors would be fine. If you click on someone's name to go to their user page, you'll see a "Send message" button (though I have never actually used this feature).
What is the correct term for the following distinction:
Scenario A: The fair coin has 50% chance to land heads.
Scenario B: The unfair coin has an unknown chance to land heads, so I assign it a 50% chance to get heads until I get more information.
If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
Static? Unchanging? Complete (as far as definitions of the situation go)? Simple (as far as equations go - it lacks the dynamic variable representing the need to update)?
Thank you for responding! I was wondering if anyone ever would.
The best I could come up with was "Fixed" or "Confident." Your choices seem on par with those. Perhaps there is no technical term for this? I find that hard to believe.
Changing the original question slightly seems to be looking for a different but similar term:
Unfair coin A has been flipped 10^6 times and appears to be converging on 60% in favor of HEADS.
Unfair coin B has been flipped 10^1 times and appears to be converging on 60% in favor of HEADS.
If I flip coin A and it results in HEADS the estimation of 60% will move less than it would if I was flipping coin B. This makes coin A more [something] than coin B, but I don't know the right term.
More defined. You've reduced your uncertainty about its properties (unfairness) using more evidence.
I'm sorry, I avoid technical terms when thinking about such things.
Ah! That works wonderfully!
I'm pretty sure it makes your beliefs about coin A more [something] than coin B.
Okay, sure, I can deal with that. But I still need something to put in for [something]. :)
Left as an exercise for the reader.
Hey! That doesn't help...
Though, honestly, I am just looking for a word; a term that describes the behavior. I don't need the behavior explained.
It makes your beliefs about coin A more concentrated than your beliefs about coin B.
Yes! That feels like the term I was looking for, thanks.
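The "concentrated" intuition can be made concrete with a Beta posterior, a standard model for coin bias (the flip counts below are the hypothetical ones from the example above):

```python
def posterior_mean(heads, flips):
    """Mean of a Beta(heads+1, tails+1) posterior under a uniform prior."""
    return (heads + 1) / (flips + 2)

def shift_after_one_head(heads, flips):
    """How far the estimate moves after observing one more head."""
    return posterior_mean(heads + 1, flips + 1) - posterior_mean(heads, flips)

# Coin A: 600,000 heads in 10^6 flips.  Coin B: 6 heads in 10 flips.
shift_a = shift_after_one_head(600_000, 1_000_000)
shift_b = shift_after_one_head(6, 10)
print(shift_a, shift_b)  # the well-observed coin barely moves; the other moves a lot
```

Both posteriors have a mean near 60%, but coin A's is far more concentrated, so one new observation barely budges it.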
Hi LessWrongers,
I'm aware that Newcomb's problem has been discussed a lot around here. Nonetheless, I'm still surprised that 1-boxing seems to be the consensus view here, contrary to the consensus view among experts. Can someone point to the relevant knockdown argument? (I found Newcomb's Problem and Regret of Rationality, but the only argument therein seems to be that 1-boxers get what they want, and that's what makes 1-boxing rational. Now, getting what one wants seems to be neither necessary nor sufficient, because you should get it because of your rational choice, not because the predictor rigged the situation?!)
Many thanks for any links, corrections and help!
Therefore, One Box.
The rest is just details. If it so happens that those 'details' tell you to only get the $10,000 then you have the details wrong.
I don't know what the consensus knock-down argument is, but this is how mine goes:
Usually, we optimize over our action choices to select the best outcome. (We can pick the blue box or the red box, and we pick the red box because it has the diamond.) Omega contrives a situation in which we must optimize over our decision algorithm for the best outcome. Choose over your decision algorithms (the decision algorithm to one-box, or the decision algorithm to two-box), just as you would choose among actions. You realize this is possible when you realize that choosing a decision algorithm is also an action.
(Later edit: I anticipated what might be most confusing about calling the decision algorithm an 'action' and have decided to add that the decision algorithm is an action that is not completed until you actually one box or two box. Your decision algorithm choice is 'unstable' until you have actually made your box choice. You "choose" the decision algorithm that one-boxes by one-boxing.)
If the solution were just to see that optimizing our decision algorithm is the right thing to do, the crucial difference between the original problem and the variant, where Omega tells you he will play this game with you some time in the future, seems to disappear. Hardly anyone denies 1-boxing is the rational choice in the latter case. There must be more to this.
I don't see a contradiction, just based on what you've written. (If a crucial difference disappears, then maybe it wasn't that crucial? Especially if the answer is the same, it's OK if the problems turn out to actually be more similar than you thought.) Could you clarify how you conclude that there there must be more to the problem?
My thinking goes like this: The difference is that you can make a difference. In the advance or iterated case, you can causally influence your future behaviour, and so the prediction, too. In the original case, you cannot (where backwards causation is forbidden on pain of triviality). Of course that's the oldest reply. But it must be countered, and I don't see how.
Why can't you influence your future behavior in the original case? When you're trying to optimize your decision algorithm ('be rational'), you can consider Newcomblike cases even if Omega didn't actually talk to you yet. And so before you're actually given the choice, you decide that if you ever are in this sort of situation, you should one-box.
I'm sympathetic to some two-boxing arguments, but once you grant that one-boxing is the rational choice when you knew about the game in advance, you've given up the game (since you do actually know about the game in advance).
Alas, this comment really muddies the waters. It leads to Furcas writing something like this:
Underling asks: if the content of the boxes has already been decided, how can you retroactively affect the content of the boxes?
The problem with what you've written, thomblake, is that you seem to agree with Underling that he can't retroactively change the content of the boxes, and thus suggest that the content of the boxes has already been determined by past events, such as whether he has been exposed to these problems before and has pre-committed. (This is only vacuously true, to the extent that everything is determined by past events.)
Suppose that Underling has never thought of the Newcomb problem before. The content of the boxes still depends upon what he decides, and his decision is a 'choice' just as much as any choice a person ever makes: he can decide which box to pick. And his decision algorithm, which he chooses, will decide the contents of the box.
Explaining why this isn't a problem with causality requires pointing to the determinism of the system. While Underling has a choice of decision algorithms, his choice has already been determined and affects the contents of the box.
If the universe is not deterministic, this problem violates causality.
The predictor "rigged" the situation, it's true, but you have that information, and should take it into account when you decide which choice is rational.
We also have the information that our decision won't affect what's in the boxes, and we should also take that into account.
The only thing that our decision determines is whether we'll get X or X+1000 dollars. It does not determine the value of X.
If X were determined by, say, flipping a coin, should a rational agent one-box or two-box? Two-box, obviously, because there's not a damn thing he can do to affect the value of X.
So why choose differently when X is determined by the kind of brain the agent has? When the time to make a decision comes, there still isn't a damn thing he can do to affect the value of X!
The only difference between the two scenarios above is that in the second one the thing that determines the value of X also happens to be the thing that determines the decision the agent will make. This creates the illusion that the decision determines X, but it doesn't.
Two-boxing is always the best decision. Why wouldn't it be? The agent will get a 1000 dollars more than he would have gotten otherwise. Of course, it would be even better to pre-commit to one-boxing, since this will indeed affect the kind of brain we have, which will in turn affect the value of X, but that decision is outside the scope of Newcomb's problem.
Still, if the agent had pre-committed to one-boxing, shouldn't he two-box once he's on the spot? That's a wrong question. If he really pre-committed to one-boxing, he won't be able to choose differently. No, that's not quite right. If the agent really pre-committed to one-boxing, he won't even have to make the decision to stick to his previous decision. With or without pre-commitment, there is only one decision to be made, though at different times. If you have a Newcombian decision to make, you should always two-box, but if you pre-committed you won't have a Newcombian decision to make in Newcomb's problem; actually, for that reason, it won't really be Newcomb's problem... or a problem of any kind, for that matter.
Right, but exactly this information seems to the 2-boxer to point to 2-boxing! If the game is rigged against you, so what? Take both boxes. You cannot lose, and there's a small chance the conman erred.
Mhm. I'm still far from convinced. Is this my fault? Am I at all right in assuming that 1-boxing is heavily favored in this community? And that this is a minority belief among experts?
What helps me when I get stuck in this loop (the loop isn't incorrect exactly, it's just non-productive) is to meditate on how the problem assumes that, for all my complexity, I'm still a deterministic machine. Omega can read my source code and know what I'm going to pick. If I end up picking both boxes, he knew that before I did, and I'll end up with less money. If I can convince myself -- somehow -- to pick just the one box, then Omega will have seen that coming too and will reward me with the bonus. So the question becomes, can your source code output the decision to one-box?
The answer in humans is 'yes' -- any human can learn to output 1-box -- but it depends sensitively upon how much time the human has to think about it, to what extent they've been exposed to the problem before, and what arguments they've heard. Given all these parameters, Omega can deduce what they will decide.
These factors have come together (time + exposure to the right arguments, etc.) on Less Wrong so that people who hang out at Less Wrong have been conditioned to 1-box. (And are thus conditioned to win in this dilemma.)
I agree with everything you say in this comment, and still find 2-boxing rational. The reason still seems to be: you can consistently win without being rational.
By rational, I think you mean logical. (We tend to define 'rational' as 'winning' around here.*)
... and -- given a certain set of assumptions -- it is absolutely logical that (a) Omega has already made his prediction, (b) the stuff is already in the boxes, (c) you can only maximize your payoff by choosing both boxes. (This is what I meant by this line of reasoning isn't incorrect, it's just unproductive in finding the solution to this dilemma.)
But consider what other logical assumptions have already snuck into the logic above. We're not familiar with outcomes that depend upon our decision algorithm, we're not used to optimizing over this action. The productive direction to think along is this one: unlike a typical situation, the content of the boxes depends upon your algorithm that outputs the choice, only indirectly on your choice.
You're halfway to the solution of this problem if you can see both ways of thinking about the problem as reasonable. You'll feel some frustration that you can alternate between them -- like flip-flopping between different interpretations of an optical illusion -- and they're contradictory. Then the second half of the solution is to notice that you can choose which way to think about the problem as a willful choice -- make the choice that results in the win. That is the rational (and logical) thing to do.
Let me know if you don't agree with the part where you're supposed to see both ways of thinking about the problem as reasonable.
* But the distinction doesn't really matter because we haven't found any cases where rational and logical aren't the same thing.
May I suggest again that defining rational as winning may be the problem?
(2nd reply)
I'm beginning to come around to your point of view. Omega rewards you for being illogical.
.... It's just logical to allow him to do so.
This is why I find it incomprehensible that anyone can really be mystified by the one-boxer's position. I want to say "Look, I've got a million dollars! You've got a thousand dollars! And you have to admit that you could have seen this coming all along. Now tell me who had the right decision procedure?"
My point of view is that the winning thing to do here and the logical thing to do are the same.
If you want to understand my point of view or if you want me to understand your point of view, you need to tell me where you think logical and winning diverge. Then I tell you why I think they don't, etc.
You've mentioned 'backwards causality' which isn't assumed in our one-box solution to Newcomb. How comfortable are you with the assumption of determinism? (If you're not, how do you reconcile that Omega is a perfect predictor?)
Only to rule it out as a solution. No problem here.
In general, very. Concerning Newcomb, I don't think it's essential, and as far as I recall, it isn't mentioned in the original problem.
I'll try again: I think you can show with simple counterexamples that winning is neither necessary nor sufficient for being logical (your term for my rational, if I understand you correctly).
Here we go: it's not necessary, because you can be unlucky. Your strategy might be best, but you might lose as soon as luck is involved. It's not sufficient, because you can be lucky. You can win a game even if you're not perfectly rational.
1-boxing seems a variant of the second case, instead of (bad) luck the game is rigged.
Around here, "rational" is taken to include in its definition "not losing predictably". Could you explain what you mean by the term?
Perhaps it will make sense if you view the argument as more of a reason to be the kind of person who one-boxes, rather than an argument to one-box per se.
That's too cryptic for me. Where's the connection to your first comment?
As i said in reply to byrnema, I don't dispute that wanting to be the kind of person who 1-boxes in iterated games or in advance is rational, but one-shot? I don't see it. What's the rationale behind it?
The one-shot game still has all of the information for the money in the boxes. If you walked in and picked both boxes you wouldn't be surprised by the result. If you walked in and picked one box you wouldn't be surprised by the result. Picking one box nets more money, so pick one box.
I deny that 1-boxing nets more money - ceteris paribus.
Then you're simply disagreeing with the problem statement. If you 1-box, you get $1M. If you 2-box, you get $1k. If you 2-box because you're considering the impossible possible worlds where you get $1.001M or $0, you still get $1k.
At this point, I no longer think you're adding anything new to the discussion.
I never said I could add anything new to the discussion. The problem is: judging by the comments so far, nobody here can, either. And since most experts outside this community agree on 2-boxing (or am I wrong about this?), my original question stands.
Ceteris ain't paribus. That's the whole point.
You have the information that in Newcomblike problems, it is better to (already) be inclined to predictably one-box, because the game is "rigged". So, if you (now) become predictably and generally inclined to one-box, you can win at Newcomblike problems if you encounter them in the future. Even if you only ever run into one.
Of course, Omega is imaginary, so it's entirely a thought experiment, but it's interesting anyway!
Agree completely.
But the crucial difference is: in the one-shot case, the box is already filled or not.
Yes. But it was filled, or not, based on a prediction about what you would do. We are not such tricksy creatures that we can unpredictably change our minds at the last minute and two-box without Omega anticipating this, so the best way to make sure the one box has the goodies in it is to plan to actually take only that box.
I agree. I would add that situations can and do arise in real life where the other fellow can predict your behavior better than you can predict it yourself.
For example, suppose that your wife announces she is going on a health kick. She is joining a gym; she will go 4 or 5 times a week; she will eat healthy; and she plans to get back into the shape she was in 10 years ago. You might ask her what she thinks her probability of success is, and she might honestly tell you she thinks there is a 60 or 70% chance her health kick will succeed.
On the other hand, you, her husband know her pretty well and know that she has a hard time sticking to diets and such. You estimate her probability of success at no more than 10%.
Whose probability estimate is better? I would guess it's the husband's.
Well, in the Newcomb experiment, the AI is like the husband who knows you better than you know yourself. Trying to outguess and/or surprise such an entity is a huge uphill battle. So, even if you don't believe in backwards-causality, you should probably choose as if backwards causality exists.
JMHO
I do not anticipate ever becoming someone's husband.
If we rule out backwards causation, then why on earth should this be true???
Imagine a simple but related scenario that involves no backwards causation:
You're a 12-year-old kid, and you know your mom doesn't want you to play with your new Splogomax unless an adult is with you. Your mom leaves you alone for an hour to run to the store, telling you she'll punish you if you play with the Splogomax, and that, whether or not there's any evidence of it when she returns, she knows you well enough to know if you're going to play with it, although she'll refrain from passing judgement until she has just gotten back from the store.
Assuming you fear punishment more than you enjoy playing with your Splogomax, do you decide to play or not?
Edit: now I feel stupid. There's a much simpler way to get my point across. Just imagine Omega doesn't fill any box until after you've picked up one or two boxes and walked away, but that he doesn't look at your choice when filling the boxes.
Quantum Criticality in an Ising Chain: Experimental Evidence for Emergent E8 Symmetry
http://www.sciencemag.org/cgi/content/abstract/sci;327/5962/177?maxtoshow=&hits=10&RESULTFORMAT=&fulltext=e8&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT
Ah, emergence...
Popular summary http://plus.maths.org/latestnews/jan-apr10/e8/index.html
Random thought:
If someone who objects to cryonics because they are worried they wouldn't be the same person on the other side also believes in an eternal resurrection with a "new body," as per some Christian belief systems, they should have the same objection. I would expect a response akin to, "God will make sure it is me," but the correlation still amuses me.
Here's a long, somewhat philosophical discussion I had on Facebook, much of which is about cryonics. The major participants are me (Tanner Swett, pro-cryonics), Ian (a Less Wronger, pro-cryonics), and Cameron (anti-cryonics). The discussion pretty much turned into "There is no science behind cryonics!" "Yes there is!" "No there isn't!"
As you can see, nobody changed their minds after the discussion, showing that whatever the irrationality was, we were unable to identify it and repair it.
Daniel Varga wrote:
What I started wondering about when I began assimilating this idea of merging, copying and deleting identities, is what kind of legal/justice system could we depend upon if this was possible to enforce non-criminal behavior?
Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. However, the whole equation might change if a would-be criminal thinks there's a p% chance they won't get caught, and a (100-p)% chance that one of their identities will have to go to jail...
Even a death penalty would be meaningless to someone who knows they could upload themselves to another vessel at any time. (If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent.)
(I am posting this comment here because it is off-topic with respect to the thread, which was about whether we're in a simulation or not.)
In a world with an FAI Singleton, actions that would violate another individual's rights might be simply unavailable, making the concept of a legal/justice system obsolete.
In other scenarios, uploading/splitting would still take resources, which might be better used than in absorbing a criminal punishment. A legal/justice system could apply punishments to multiple instances of the criminal, and could be powerful enough to likely track them down.
I am not convinced that the upload would be innocent. Maybe, if the upload was rolled back to before the criminal intentions. Any attempt by the upload to profit from the crime would definitely make it complicit.
Criminal punishment could also take the form of torture, effective if the would-be criminal fears any of its instances being tortured, even if some are not.
I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.
The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.
I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.
The remarkable and depressing thing to me is that most people are not able to see it at a glance. To me it just seems like a string of obvious bluffs and non-sequiturs. Do you remember what was going on in your head when you didn't see it at a glance?
It's difficult for me to remember how I used to think, even a few years ago. Hell, when there's a drastic change in the way I think about something, I have trouble remembering how I used to think mere days after the change.
Anyway, one thing I remember is that I kept giving Lanier the benefit of the doubt. I kept telling myself, "Well, maybe I don't understand what he's really trying to say." So the reason I didn't see the obvious would be... lack of self-confidence? Or maybe it's only because my own thoughts weren't all that clear back then. Or maybe because the way I used to parse stuff like Lanier's piece was a lot more, um, holistic than it is now, by which I mean that I didn't try to decompose what is written into more simple parts in order to understand it.
It's hard to tell.
Measure your risk intelligence, a quiz in which you answer questions on a confidence scale from 0% to 100% and your calibration is displayed on a graph.
Obviously a linear probability scale is the Wrong Thing - if we were building it, we'd use a deciban scale and logarithmic scoring - but interesting all the same.
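A quick sketch of what the Right Thing might look like, in Python (the function names are mine, not anything from the quiz):

```python
import math

def decibans(p):
    """Convert a probability to decibans: 10 * log10 of the odds."""
    return 10 * math.log10(p / (1 - p))

def log_score(p, happened):
    """Logarithmic scoring rule: the log of the probability you
    assigned to what actually happened (0 is perfect; more
    negative is worse)."""
    return math.log(p if happened else 1 - p)

# An honest 90% that comes out true beats an overconfident 99%
# that comes out false: roughly -0.105 vs -4.6.
```

The nice property of the log score is that it's proper: your expected score is maximized by reporting your true probability, so there's no incentive to game the quiz.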
My mom saw a mouse running around our kitchen a couple of days ago, so she had my father put out some traps. The only traps he had were horrible glue traps. I was having trouble sleeping, so I got out of bed to play video games, and I heard a noise coming from the kitchen. A mouse (or possibly a rat, I don't know) was stuck to one of the traps. Long story short, I put it out of its misery by drowning it in the toilet.
I feel sick.
That sounds unpleasant. If I were you I'd go pick up some snap traps tomorrow. Also, wear ear plugs or headphones.
My father used to trap mice in garbage cans baited with peanut butter, and release them some distance from our house. (They probably still died, but they had as much chance as any other wild mouse.)
I just read Outliers and I'm curious -- is there anything that would have taken 10000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that's needed to succeed in something make sense?
Something to understand about Malcolm Gladwell is that he is an exceptionally talented writer who can turn a pseudo-theory into hundreds of pages of pleasant, entertaining non-fiction. He's not an evolutionary psychologist, though I bet he could write a really interesting and thought-provoking non-fiction piece on evolutionary psychology.
http://en.wikipedia.org/wiki/The_Tipping_Point#The_three_rules_of_epidemics
His pseudo-theory from The Tipping Point has not made advertisers any more money. It's an example of something that sounds kind of true when you read it, but doesn't explain much in the way of meaningful phenomena. Advertising companies tried to take advantage of his pseudo-theory of social influence, and they still make some effort to target influential users, but it's a token effort compared to marketing as broadly as possible. Super Bowl advertisements still work.
Oh, by no means did I want to suggest that Gladwell has a forte in evolutionary psychology; if he does, there's nothing to indicate it in what I've read. It's clear that he glosses over many of the details in his work, perhaps dangerously so. And the entire point of Outliers is that social environment is important to success; not exactly an earth-shattering insight, there's a negative Times review that's spot on.
That said, Gladwell says he originally got the idea for 10000 hours from Ericsson and Levitin. At worst, at this point, I think it's somewhat plausible. I still have a lot more searching to do on the subject, but I am interested in what evolutionary psychology might say about the idea -- alas, I'm also not an evolutionary psychologist, so I don't know that either.
Edit: Of course, what I'm really interested in is "Is the idea that it takes 10000 hours to master a skill set true in enough circumstances to make it a useful guideline?" I'm not interested in the viewpoint of evolutionary psychologists on skill acquisition per se.
The '10000 hours' approximation seems surprisingly well founded, based on the research that Ericsson et al. reviewed in their work. Note that this is the figure for 'expert'-level performance; you can still reach 'good enough' levels in far less time. Also note that they specify that many of the hours must be deliberate practice, not just performance.
Graphene transistors promise 100GHz speeds
http://arstechnica.com/science/2010/02/graphene-fets-promise-100-ghz-operation.ars
100-GHz Transistors from Wafer-Scale Epitaxial Graphene
http://www.sciencemag.org/cgi/content/abstract/sci;327/5966/662?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=graphene&searchid=1&FIRSTINDEX=0&resourcetype=HWCIT
I find that article title misleading. Having transistors that operate at 100 GHz does not give you a CPU with a clock rate of 100 GHz. If I remember correctly, that very article states that current transistors operate at 30 GHz.
Sure, this is discussed in more detail on Hacker News. http://news.ycombinator.com/item?id=1104461
Is this sort of thing on-topic, even for the Open Thread here?
ETA: This question is not merely rhetorical.
To the extent that FAI will depend on the continued exponential growth of computing capacity, I'd say yes.
Are you sure you don't mean uFAI? Friendliness isn't a hardware problem.
Maybe I should just have said AI, or AGI. I suspect we will need further advances in computing power to achieve greater than human intelligence, friendly or otherwise.
I've always thought FAI was only tangentially on-topic here (more of a mutual interest than anything). This community is explicitly about rationality.
That's the umbrella topic, but I do not think that topic is in any way meant to exclude science. I mean... it's science. How many thousands of words has Eliezer written on quantum physics?
Surely there are worse things that could happen to a community of rationalists than links to scientific discoveries of strong mutual interest. It's not even a slippery slope towards bad off-topic stuff.
Edit: And I'm going to continue mostly contextless link sharing in the Open Thread until a link sharing subreddit is enabled.
I rather disagree. There are plenty of places online to find links to interesting scientific discoveries. And the sense in which Eliezer wrote about quantum physics is entirely different from the sense in which these links were "about science".
That said, I didn't mean to suggest in my question that the comment was off-topic, but rather wanted to know what folks thought about it.
LW has become more active lately, and the novelty has worn off for me, so it's likely I won't be skimming "recent comments" (or indeed any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). Reliably, I'll only be checking direct replies to my comments and private messages (red envelope).
A welcome feature to alleviate this problem would be an aggregator for given threads: functionality to add posts, specific comments, and users to a set of subscribed items. Then all comments on the subscribed posts (or all comments within depth k of the top-level comments), and all comments in the threads under subscribed comments, would appear together the way "recent comments" do now. Each comment in this stream should have links to unsubscribe from the item that caused it to appear in the stream, or to add an exclusion on the given thread within another subscribed thread. (Maybe being subscribed to everything by default, including new items, is the right mode, provided unsubscribing is easy.)
This may look like a lot, but right now there is no functionality for reducing the reading load, so as more people start actively commenting, fewer people will be able to follow.
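As a rough sketch of the data model this implies (all names are hypothetical; nothing here reflects the actual LW codebase):

```python
# Hypothetical subscription filter for an aggregated comment stream.
# A comment is represented as a dict with a 'post' ID and the chain
# of its parent comment IDs in 'ancestors'.
class Subscriptions:
    def __init__(self):
        self.posts = set()     # subscribed post IDs
        self.comments = set()  # subscribed comment-thread roots
        self.excluded = set()  # excluded sub-thread roots

    def wants(self, comment):
        """Should this comment appear in the aggregated stream?"""
        if self.excluded & set(comment['ancestors']):
            return False
        return (comment['post'] in self.posts
                or bool(self.comments & set(comment['ancestors'])))
```

The exclusion check runs first, so an exclusion carves a hole out of an otherwise-subscribed thread, which is the behavior described above.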
Apparently even specific users have their own rss feeds, so I've settled with a feed aggregated from the feeds of a few people. It'd be better if the "friend" functionality worked (maybe it even does, but I don't know it!), so that the same could be done within the site, with voting and parent/context links.
An easier to implement feature that would also help alleviate this problem is to have the system remember the last comment read, and then have an option to display all new comments since then in a threaded fashion on one big page, so we can skip whole threads of new comments at once. (I have been thinking about this, and started writing a PHP script to scrape Less Wrong and build the threaded view, but gave up due to technical difficulties.)
Also, I think comments on one's posts should activate the red envelope, but don't right now. Should we private message you if we answer one of your posts and want a reply?
It's worth sending me requests for access like that. Trike is short on time, but very keen on any time we can spend that acts as a strong multiplier on your time. What do you want in an API?
Basically, I think what's needed is an API to retrieve a list of comments satisfying some query as an XML document. I'm not sure what kind of queries the system supports internally, so I'll just ask for as much generality and flexibility as possible. For example, I'd like to be able to search by a combination of username, post ID, date, points (e.g., all comments above some number of points), and comment ID (e.g., retrieve a list of comments given a list of IDs, or all comments that come after a certain ID).
If that's too hard, or development time is limited, I would settle now for just a way to retrieve all comments that come after a certain comment ID, and doing additional filtering on the client side.
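The client-side filtering part is simple enough; a minimal Python sketch, assuming each comment has already been parsed into a dict with 'id' and 'points' keys (my field names, not the server's):

```python
def comments_after(comments, last_id, min_points=None):
    """Keep only comments whose id is greater than last_id,
    optionally dropping those below a points threshold."""
    kept = [c for c in comments if c['id'] > last_id]
    if min_points is not None:
        kept = [c for c in kept if c['points'] >= min_points]
    return kept
```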
Also, while I have your attention, where can I find some documentation about the Less Wrong codebase? I tried to read it once, and found it quite hard to understand, and was wondering if there's a guide to it somewhere.
http://github.com/tricycle/lesswrong - see "Resources" at bottom of page, mostly this (which is a wiki, so if you learn more, please share).
There is an API for that, but it's broken. This (RSS) should get you the 40 comments after comment number 1000, but it gives 50, regardless of how many you ask for. Also, it rarely gives a link to the later comments (only to earlier ones). But if you've been walking these things, you probably knew that.
ETA: I misinterpreted the API. "count" is not supposed to control the number of comments returned, but to serve as a hint to the server about how far back you are. If that hint is missing or wrong, the server leaves out the prev/next links (especially prev). You can make prev appear by adding &count=60 (anything over 50), but every time you click prev, it decreases this number by 50 and eventually stops offering a prev link. You could just make it very large.
Would I modify this, or something else, to get the first comment of a particular user?
You can stick ?before=t1_1 onto the end of a user page to get the first comment. yours
Awesome! I occasionally want to skim through someone's posts chronologically, or at least read their first few comments, to see how their views might have changed over time, and see to what extent I can tell the state of mind they were in when they arrived here.
Since this interface is broken, it's not so easy to skim. The page is supposed to have a "prev"[1] link at the bottom, but it doesn't.
ETA: better for skimming is to add not just ?before=t1_1 to the user page, but also &count=100000
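For example, a tiny helper to build that skimming URL (the URL pattern is just the workaround above, not a documented API):

```python
def skim_url(username, count=100000):
    """Build a user-page URL that lists comments starting from the
    first one; the oversized count works around the missing 'prev'
    link mentioned above."""
    return ("http://lesswrong.com/user/%s?before=t1_1&count=%d"
            % (username, count))
```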
[1] I hate the use of prev/next, at least because it isn't standard (eg, it's opposite to livejournal). "earlier" and "later" would be clear.
I find myself once again missing Usenet.
Perhaps if LW had an API we could get back to writing specially-designed clients, which could do all the aggregation magic we might hope for?
"Recent comments" page has a feed.
I was hoping for a rather richer API than that. "Recent comments" doesn't even include scores.
That's a trivial mod that Trike has time for. Do you want to specify what data you would like in an API, or try and get the code working yourself?
I really should try and do it myself - for one thing, that means I can develop server and client in parallel.