Open Thread: February 2010
Where are the new monthly threads when I need them? A pox on the +11 EDT zone!
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
If you're new to Less Wrong, check out this welcome post.
Eliezer, how is progress coming on the book on rationality? Will the body of it be the sequences here, but polished up? Do you have an ETA?
Currently planned to be divided into three parts, "Map and Territory", "How To Actually Change Your Mind", and "Mysterious Answers to Mysterious Questions" - that should give you an idea of the intended content. No ETA, still struggling to find a writing methodology that gets up to an acceptable writing speed.
I thought of a voting tip that I'd like to share: when you are debating someone and one of your opponent's comments gets downvoted, don't let it stay at -1. Either vote it up to 0 or down to -2; otherwise your opponent might infer that you are the one who downvoted it. Someone accused me of this some time ago, and I've been afraid of it happening again ever since.
It took a long time for this countermeasure to occur to me, probably because the natural reaction when someone accuses you of unfair downvoting is to refrain from downvoting, while the counterintuitive, but strategically correct response is to downvote more.
An automatic block against downvoting any comment that's a direct response to one of yours would be good.
My karma management techniques:
1) If I'm in a thread and someone's comment is rated equally with mine, and therefore potentially displaying atop my comment, I downvote theirs until it'll pass mine despite my downvote, to give my comment more exposure. I remove the downvote later, usually upvoting (their comment is getting voted better than mine because it's good).
2) If I'm debating someone and I want to downvote their comment, I upvote it for a day or so, then later return to downvote it. This gives the impression that two objective observers who read the thread later agreed with me. This works best on long debate threads, because a) if my partner's comments are getting immediately upvoted, they tend to be encouraged and will continue the debate, further exposing themselves to downvotes and b) they get fewer reads, so a single vote up or down makes a much bigger impression when almost all the comments in the thread are rarely upvoted/downvoted past +/- 2.
3) Karma is really about rewarding or punishing an author for content, to encourage certain types of content. Comments that are too aggressive will not be upvoted even if people agree with the point, because they don't want to reward aggressive behavior. Likewise, comments that are not aggressive enough are given extra karma - the reader's first instinct is to help promote this message because the timid author won't promote it enough on his own. This is nonsensical in this format, but the instinct is preserved.
I've noticed that the comments that get voted up the most are those that do probability calculations, those whose authors' names pop out of the page, and those which are cynical on the surface, possibly with a wry humor, while revealing a deep earnestness. If you have something unpopular to say, or are just plain losing an argument, that's the best tone to take, because people will avoid downvoting if they disagree, but will usually upvote if they do agree.
EDIT: I agree with Alicorn that votes shouldn't be anonymous, as it would remove the dirtiest of these variably dirty techniques, but in the meantime, play to win.
Upvoted for honesty.
Of course, I'll be back in a few days to downvote you.
I don't like that you are trying to mislead others.
The deception you've described is of course minor and maybe you don't lie about important things. But it seems a dangerous strategy, for your own epistemic hygiene, to be casual with the truth. Even if I didn't regard it as ethically questionable, I wouldn't be habitually dishonest for the sake of my own mind.
I can't believe you actually admitted to using these strategies.
It does make me impressed at his cleverness.
Not me. At least for points 1 and 2, these strategies have occurred to me, but they're, you know, wrong.
As for point 3, I like that we so strongly discourage aggression. I think that aggression and overconfidence of tone are usually big barriers to rational discussion.
This strategy can be eliminated by showing a count of both upvotes and downvotes, a change which has been requested for a variety of other reasons. I imagine it solves a lot of problems of anonymity, but it makes Wei Dai's dilemma worse. It makes downvoting the -1 preferable to upvoting it.
To win what? What is there to win?
Your last paragraph was astute.
I found this shocking:
I wouldn't game the system like this not so much because of moral qualms (playing to win seems OK to me) but because I need straight-forward karma information as much as possible in order to evaluate my comments. Psychology and temporal dynamics are surely important, but without holding them constant (or at least 'natural') then the system would be way too complex for me to continue modeling and learning from.
Karma can be (and by your own admission, is) about more than first-order content. Excessively aggressive comments may not themselves contain objectionable content, but they tend to have a deleterious effect on the conversation, which certainly does affect subsequent content.
What I really want to do is destroy you karma-wise. This behavior deserves to be punished severely. But I'm now worried about a chilling effect on others who do this coming forward.
Also, everyone, see poll below.
I want to downvote you for this, because punishing people for telling the truth is a bad thing. On the other hand, you are also telling the truth, so... now I'm confused. ;-)
I've noticed this too. It is one of several annoying problems that would evaporate if votes weren't anonymous.
More problems would be caused by that change than would be solved.
(Downvoted. EDIT: Vote cancelled; see below.) "Opponent"? "Strategically correct response"? Are you sure we're playing the same game?
I don't understand why lately my comments have been so often uncharitably interpreted. In this case, my "game" is:
LW has become more active lately, and has grown old as an experience, so it's likely I won't be skimming "recent comments" (or any comments) systematically anymore (unless I miss the fun and change my mind, which is possible). Reliably, I'll only be checking direct replies to my comments or private messages (red envelope).
A welcome feature to alleviate this problem would be an aggregator for given threads: functionality to add posts, specific comments, and users to a set of subscribed items. Then all comments on the subscribed posts (or all comments within depth k of the top-level comments), and all comments within the threads under subscribed comments, would appear together the way "recent comments" do now. Each comment in this stream should have links to unsubscribe from the subscribed item that caused it to appear in the stream, or to add an exclusion on the given thread within another subscribed thread. (Maybe being subscribed to everything, including new items, by default is the right mode, provided unsubscribing is easy.)
This may look like a lot, but right now there is no load-reducing functionality for reading, so as more people start actively commenting, fewer people will be able to follow.
I find myself once again missing Usenet.
Perhaps if LW had an API we could get back to writing specially-designed clients, which could do all the aggregation magic we might hope for?
An hour-long talk with Douglas Hofstadter, author of Gödel, Escher, Bach.
Titled: Analogy as the Core of Cognition
http://www.youtube.com/watch?v=n8m7lFQ3njk#t=13m30s
If I understand the Many-Worlds Interpretation of quantum mechanics correctly, it posits that decoherence takes place due to strict unitary time-evolution of a quantum configuration, and thus no extra collapse postulate is necessary. The problem with this view is that it doesn't explain why our observed outcome frequencies line up with the Born probability rule.
Scott Aaronson has shown that if the Born rule doesn't hold, then quantum computing allows superluminal signalling and the rapid solution of PP-complete problems. So we could adopt "no superluminal signalling" or "no rapid solutions of PP-complete problems" as an axiom, and this would imply the Born probability rule.
I wanted to ask of those who have more knowledge and have spent longer thinking about MWI: is the above an interesting approach? What justifications could exist for such axioms? (...maybe anthropic arguments?)
ETA: Actually, Aaronson showed that in a class of rules equating probability with the p-norm, only the 2-norm had the properties I listed above. But I think the approach could be extended to other classes of rules.
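For reference, a rough sketch of the class of rules in question (my paraphrase of Aaronson's setup, not his exact statement):

```latex
% In a hypothetical p-norm theory, a state with amplitudes \alpha_i
% assigns outcome i the probability
\Pr(i) \;=\; \frac{|\alpha_i|^p}{\sum_j |\alpha_j|^p}.
% Demanding that these probabilities be conserved under the theory's
% linear dynamics essentially singles out p = 1 (classical stochastic
% evolution) and p = 2 (unitary quantum mechanics, i.e. the Born rule);
% for other values of p one can construct superluminal signalling
% protocols and polynomial-time solutions to PP-complete problems.
```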
Non-Born rules give us anthropic superpowers. It is plausibly the case that the laws of reality are such that no anthropic superpowers are ever possible, and that this is a quickie explanation for why the laws of reality give rise to the Born rules. One would still like to know what, exactly, these laws are.
To put it another way, the universe runs on causality, not modus tollens. Causality is rules like "and then, gravity accelerates the bowling ball downward". Saying, "Well, if the bowling ball stayed up, we could have too much fun by hanging off it, and the universe won't let us have that much fun, so modus tollens makes the ball fall downward" isn't very causal.
This reminds me of an anecdote I read in a biography of Feynman. As a young physics student, he avoided using the principle of least action to solve problems, preferring to solve the differential equations. The nonlocal nature of the variational optimization required by the principle of least action seemed non-physical to him, whereas the local nature of the differential equations seemed more natural.*
I wonder if there might not be a more local and causal dual representation of the principle of no anthropic superpowers. Pure far-fetched speculation, alas.
* If this seems vaguely familiar to anyone, it's because I'm repeating myself.
Eliezer has a new fanfic available.
Fun sneaky confidence exercise (reasons why exercise is fun and sneaky to be revealed later):
Please reply to this comment with your probability level that the "highest" human mental functions, such as reasoning and creative thought, operate solely on a substrate of neurons in the physical brain.
<.05
I am no cognitive scientist, but I believe some of my "thinking" takes place outside of brain (elsewhere in my body) and I am almost certain some of it takes place on my paper and computer.
Speaking of "thinking" with neurons other than those found in the brain, kinesthetic learning gives me pause concerning the sufficiency of cranial preservation in cryonics. How much "index-like" information do we store in the rest of our neurons? Does this vary with one's level of kinesthetic dependence? Would waking up disconnected from the rest of our nervous system (or connected to a "generic" substitute) be merely disorienting, or could it constitute a significant loss of personality/memory? Neuroscientists, help!
When I signed up for cryonics, I opted for whole body preservation, largely because of this concern. But I would imagine that even without the body, you could re-learn how to move and coordinate your actions, although it might take some time. And possibly a SAI could figure out what your body must have been like just from your brain, not sure.
Now recently I have contracted a disease which will kill most of my motor neurons. So the body will be of less value and I may change to just the head.
The way motor neurons work is that there is an upper motor neuron (UMN) which descends from the motor cortex of the brain down into the spinal cord; there it synapses onto a lower motor neuron (LMN) which projects from the spinal cord to the muscle. Just two steps. However, the architecture is actually more complex: the LMNs receive inputs not only from UMNs but also from sensory neurons coming from the body, indirectly through interneurons located within the spinal cord. This forms a sort of loop which is responsible for simple reflexes, but also for stable standing, positioning, etc. Then there are other kinds of neurons that descend from the brain into the spinal cord, including from the limbic system, the center of emotion. For some reason your spinal cord needs to know something about your emotional state in order to do its job - very odd.
Fascinating. Citation?
I'm much less worried by this than I am by the prospect that I'd have to do the same for many of my normal thought patterns due to unforeseen inter-dependencies.
Indeed, that's one of the reasons why I prefer thinking about it solely in terms of stored information: a redundant copy only really constitutes a pointer's worth of information. It's even conceivable that a SAI could reconstruct missing neural information in non-obvious ways, like a few stray frames of video. Not worth betting on, though.
Thanks for the informative reply.
This was the first objection that my neuroscientist friend brought up when I tried to talk to him about (edit:) cryonics. I don't think science knows yet how dependent we are on our peripheral nervous system, but he seemed fairly sure that we are to a nontrivial degree.
As I say to every objection I hear to cryonics at the moment, your neuroscientist friend should write a blog post or some such about his objections - he has a very low bar to clear to write the best informed critique in the world.
(Guessing you mean cryonics - cryogenics is something else though not unrelated)
Voted up and seconded. Yvain, If what you actually mean is "operate solely through physical means contained within the human body or physical means manipulated by interaction with the human body," then I'll up it to whatever number is supposed to be used for, "I'm only leaving room for uncertainty because there's no such thing as certainty." ;-)
As opposed to ...? Ion channels? Quantum phenomena? Multiple interacting brains? Non-neuronal tissue? Neuronal-but-extracranial cells? Soul? Beings outside the observable universe, running the simulator?
What is this belief supposed to be distinguished from?
Well, hormones and chemicals such as DMT or endocannabinoids surely affect the thinking process. But the phrasing of the question isn't clear enough to say whether these count.
Like others, I see some ambiguity here. Let me assume that the substrate includes not just the neurons, but the glial and other support cells and structures; and that there needs to be blood or equivalent to supply fuel, energy and other stuff. Then the question is whether this brain as a physical entity can function as the substrate, by itself, for high level mental functions.
I would give this 95%.
That is low for me, a year ago I would probably have said 98 or 99%. But I have been learning more about the nervous system these past few months. The brain's workings seem sufficiently mysterious and counter-intuitive that I wonder if maybe there is something fundamental we are missing. And I don't mean consciousness at all, I just mean the brain's extraordinary speed and robustness.
Still curious... How about giving us an ETA?
To get nitpicky, the brain is made of both neurons and glial cells - and the glial cells also seem to play a role in cognition.
I am quite comfortable with the idea that I am my brain, that my brain is made of ordinary living matter (atoms making up molecules making up proteins making up cells), that this matter forms specialized structures responsible for cognition, and I would be hugely surprised if given proof that the highest mental functions cannot be explained adequately in terms of that ontology. The strangest alternative I can think of is Penrose's ENM incomputable-quantum-coherence hypothesis and I'd assign less than 5% probability to his thesis being correct.
How does "operate solely on" regard distributed cognition arguments, like "creative thought is created via interaction with the remaining human culture" and "we constantly offload cognitive processes (such as memory) to external substrates (like computers and books)"?
Also, the "highest" human mental functions operate via a number of lower-level processes. Does "solely on human neurons" include e.g. possible quantum phenomena on a low level?
I'm at least +70 decibans ("99.99999%") confident that mental states supervene on physical states. Whether your exact description in terms of neurons in the brain completely captures all the physical states, I'm less confident of.
EDIT: updated from 30 to 70 decibans: I would more easily be convinced that I had won the lottery than that this wasn't so.
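For anyone unfamiliar with the unit, the deciban arithmetic checks out (a quick sketch; the function name is my own):

```python
def decibans_to_prob(db):
    """Convert a log-odds confidence in decibans to a probability."""
    odds = 10 ** (db / 10)      # 70 decibans -> odds of 10^7 : 1
    return odds / (1 + odds)

# 1 - decibans_to_prob(70) is about 1e-7, i.e. the quoted "99.99999%".
```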
I might be misunderstanding what you mean by 'more easily be convinced', but if the nature of the evidence we'd expect to be doing the convincing is so different in each case, I don't think we can rely on that to tell how much we believe something.
I was much less easily convinced about Many Worlds than I would be that I'd won the lottery, but beforehand I think I'd have put the odds about the same as rolling a six.
With a straightforward interpretation of your question, I'd answer "95%".
But since you made special mention of being "sneaky", I'll assume you've attempted to trick me into misunderstanding the question, and so I'll lower my probability estimate to 75%, with the missing twenty points accounting for you tricking me by your phrasing of the question.
Commenting before reading other replies---I'm going to give the boring, sneaky reply that the question isn't well-specified enough to have an answer; I'd need to know more about what you mean by something to operate solely on a substrate. I mean, clearly there are a lot of cognitive tasks that most people can only do given a pencil and paper, or a computer ... is that the sneaky part, that we store information in the environment, and therefore we're not solely neurons?
Could you clarify what you mean by operate on? Or is that part of the point?
Using the definition of 'operate on' that I think is most natural, I'd say there is a .05% chance that these functions only operate on (affect) the physical brain. Unless you mean directly, in which case I would assign an 80% chance.
Using the definition of 'operating on' meaning 'requiring', I'd say there is a 90% chance (probability) that only the brain is required for 90% (the fraction) of its functioning. The probabilities I assign would fall dramatically as you raise the second 90% (the fraction). So I would probably only assign a 1% chance that 100% of higher functions require only the brain.
One minus epsilon.
A query to Unknown, with whom I have this bet going:
I recently found within myself a tiny shred of anticipation-worry about actually surviving to pay off the bet. Suppose that the rampant superintelligence proceeds to take over its future light cone but, in the process of disassembling existing humans, stores their mind-state. Some billions of years later, the superintelligence runs across an alien civilization which succeeded on their version of the Friendly AI problem and is at least somewhat "friendly" in the ordinary sense, concerned about other sentient lives; and the superintelligence ransoms us to them in exchange for some amount of negentropy which outweighs our storage costs. The humans alive at the time are restored and live on, possibly having been rescued by the alien values of the Super Happy People or some such, but at least surviving.
In this event, who wins the bet?
SIAI: Utopia or hundred times your money back!
Eliezer, would you accept a bet $100 against $10000?
On the same problem? I might attach some extra terms and conditions this time around, like "offer void (stakes will be returned) if the AI has the power and desire to use us for paperclips but our lives are ransomed by some other entity with the power to offer the AI more paperclips than it could produce by consuming us", "offer void if the explanation of the Fermi Paradox is a previously existing superintelligence which shuts down any new superintelligences produced", and "offer void if the AI consumes our physical bodies but we continue via the sort of weird anthropic scenario introduced in The Finale of the Ultimate Meta Mega Crossover." With those provisos, my probability drops off the bottom of the chart. I'm still not sure about the bet, though, because I want to keep my total of outstanding bets to something I can honor if they all simultaneously go wrong (no matter how surprising that would be to me), and this would use up $10,000 of that, even if it's on a sure thing - I might be able to get a better price on some other sure thing.
You definitely win. If I say "you'll get killed doing that" and you are, I shan't expect to pay back my winnings when you're reanimated.
Perhaps you've already defined "superintelligent" as meaning "self-directed, motivated, and recursively self-improving" rather than merely "able to provide answers to general questions faster and better than human beings"... but if you haven't, it seems to me that the latter definition of "superintelligent" would have a much higher probability of you losing the bet. (For example, a Hansonian "em" running on faster hardware and perhaps a few software upgrades might fit the latter definition.)
Since Karma Changes was posted, there have been 20 top level posts. With one exception, all of those posts are presently at positive karma. EDIT: I was using the list on the wiki, which is not up to date. Incorporating the posts between the last one on that list and now, there is a total of 76 posts between Karma Changes and today. This one is the only new data point on negatively rated posts, so it's 2 of 76.
I looked at the 40 posts just prior to Karma Changes, and of the forty, six of them are still negative. It looks like before the change, many times more posts were voted into the red. I have observed that a number of recent posts were in fact downvoted, sometimes a fair amount, but crept back up over time.
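Just to make the comparison concrete (simple arithmetic on the counts reported above):

```python
# Rates of negatively rated top-level posts, from the counts above.
after_change = 2 / 76    # since Karma Changes
before_change = 6 / 40   # the 40 posts just prior
ratio = before_change / after_change
# ratio is 5.7: posts ended up negative several times more often before the change
```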
Hypothesis: the changes included removing the display minimum of 0 for top-level posts. Now that people can see that something has been voted negative, instead of just being at 0 (which could be the result of indifference), sympathy kicks in and people provide upvotes.
Is this a behavior we want? If not, what can we do about it?
One of the expected effects of the karma change is to make people more cautious about what they put in a top level post. Perhaps this is only evidence of that effect.
I've called before for median-based karma: you set a score you think a post should have and the median is used for display purposes, with "fake votes" reducing the influence of individual votes until there are enough to gain a true picture.
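A minimal sketch of the median-with-fake-votes idea. The parameter names and the fake-vote count are made up; the point is that with few real votes, the fake votes dominate the display, and real votes take over as they accumulate:

```python
from statistics import median

def display_score(votes, prior_score=0, prior_weight=5):
    """Median of the real votes plus `prior_weight` fake votes at `prior_score`."""
    return median(votes + [prior_score] * prior_weight)

display_score([10])                      # one enthusiast can't move the median: 0
display_score([10, 8, 7, 9, 6, 8, 7])    # real votes now dominate: 6.5
```

The same mechanism also blunts a single strategic downvote, since one outlier vote barely moves a median.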
Arrow's Theorem seems relevant...
No. It is not difficult to create a top level post that is approved of or at least kept at '0'. I want undesirable top level posts to hurt.
Replace all negative karma displays on top-level posts with '- points' or '<0 points'. We don't necessarily need to know just how disapproved-of a particular post is.
There is a limited downvote budget for each voter (in some ratio to the voter's karma). Downvoting a post now uses 10 points from that budget rather than 1, so perhaps low-karma downvoters (or downvoters who have exhausted their downvote budgets) are now having less of an impact.
It could be sympathy, or a judgment that the poster shouldn't be excessively discouraged from posting in the future.
Sure, why not? We can always change things later if we start getting overrun by bad posts, and people still aren't willing to vote them down into negative territory.
I wouldn't necessarily call it sympathy. Sometimes I will up- (or down-) vote something if I think it is better (or worse) than its current score suggests. The purpose of karma on articles should be to identify those most worth reading to those who haven't yet read them, not to be a popularity contest where everyone who disliked it votes it down forever.
I also tend to vote posts up or down based on what I think the score ought to be. But it seems clear that sympathy plays a part. Liked posts spiral freely off towards infinity but disliked posts don't ever spiral down in a similar way. This gives a distinct bias to the expected payoff of posting borderline posts and so is probably not desirable.
This is actually a damned good question:
http://www.scientificblogging.com/mark_changizi/why_doesn%E2%80%99t_size_matter%E2%80%A6_brain
Some interesting trivia: this open thread is at 256 comments as of February 3rd. For comparison:
January's had a total of 709
December is at 260
November is at 490
October is at 399
To re-iterate a request from Normal Cryonics: I'm looking for links to the best writing out there against cryonics, especially anything that addresses the plausibility of reanimation, the more detailed the better.
I'm not looking for new arguments in comments, just links to what's already "out there". If you think you have a good argument against cryonics that hasn't already been well presented, please put it online somewhere and link to it here.
Eliezer's posts are always very thoughtful, thought-provoking, and mind-expanding - and I'm not the only one to think this, seeing the vast amounts of karma he's accumulated.
However, reviewing some of the weaker posts (such as high status and stupidity, and two aces), and rereading them as if they hadn't been written by Eliezer, I saw them differently - still good, but not really deserving superlative status.
So I was wondering if Eliezer could write a few of his posts under another name, if this was reasonable, to see if the Karma reaped was very different.
This is a reasonable justification for using a sockpuppet, and I'll try to keep it in mind the next time I have something to write that would not be instantaneously identifiable as me.
But you'll have to build up the sockpuppet to 50 points before it can make a top post. Can you write that many comments that aren't identifiable as yours?
It's easy if you have a few co-conspirators. Find five quotes, post them on the quotes thread, ask 9 people to vote each one up (and vote them up as Eliezer Yudkowsky). It probably wouldn't even take that many, since some would certainly be voted up on their own.
But perhaps it would be better, if possible, to hide (or least offer the option to hide) the author of a top-level post. Anyone who cared enough to closely track karma could tell who posted it, but it would weed out a lot of knee-jerk EY upvotes.
Perhaps, contact someone likely and ask them to paraphrase the post in their words and submit it as their own?
Now we'll be getting all kinds of posts with, "Eliezer did not write this..or maybe he did!" ...
I think it would be acceptable for him, as a site administrator, to doctor the scores of his own comments behind the scenes to make his sockpuppet pass that threshold.
It has seemed to me that some of Eliezer's recent post scores have been inflated by around 5-10 points due to his being Eliezer; it would be interesting to test this hypothesis.
I wonder if, if the hypothesis were tested and confirmed, anyone would admit to being one of the 5-10 persons who upvote for that reason?
I'm one of the 5-10.
There is a depth to "this is an Eliezer agument, part of a rich and complicated mental world with many different coherent aspects to it" that is lacking in "this is a random post on a random subject". In the first case, you are seeing a facet of larger wisdom; in the second, just an argument to evaluate on merits.
We are status-oriented creatures, especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However, it is also one of our more efficient ways of getting at truths, so it must be doing some things correctly. I think it may have some surrounding ideas that reduce the problems of its being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal your attractive qualities; people don't like other people who tell them lies or give them false information. I suspect that falsifiability is popular among scientists because it allows them to pre-commit to changing their minds without taking too high a status hit. This is a bit stronger than leaving a line of retreat, as it says when you'll retreat as well as allowing you to, and it is a public admission. They can say that they currently believe idea X, but if experiment Y shows Z they will abandon X. That statement is also useful for other people, as it allows them to see the boundaries of the idea.
This can also be seen as working to oppose the confirmation bias. If you think you are right, there is no reason to look for data that tests your assumptions. If you pre-commit to changing your mind, you need to think about how your idea might be wrong, and are allowed to look for data.
I would like to see this community adopt this approach.
In the spirit of this: I would cease to advocate this approach if it were shown that people who pre-committed to changing their minds suffered as large a status hit as those who didn't, when it was shown that they were wrong.
I seem to be entering a new stage in my 'study of Less Wrong beliefs' where I feel like I've identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn't so surprising, since Less Wrong is the grouped beliefs of many different people, and it's each person's job to find their own self-consistent ribbon.
But just to check one of these -- Omega's accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being non-deterministic due to quantum mechanical considerations by using the many-worlds hypothesis: all symmetric possible 'quark' choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box depends on some random factors, then Omega can't predict what will happen, because when he makes the prediction he is up-branch of you. He doesn't know which branch you'll be in. Or, more accurately, he won't be able to make a prediction that is true for all the branches.
So long as you make your Newcomb's choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box - if Robin Hanson is right about mangled worlds and there's a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.
Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezers that two-box are the ones that really win, as rare as they are.
The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.
The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.
And, also, we define winning as winning on average. A person can get lucky and win the lottery -- doesn't mean that person was rational to play the lottery.
Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
Interesting. I was idly wondering about that. Along somewhat different lines:
I've decided that I am a one-boxer, and I will one-box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk, and halfway to the ground the milk rises up and fills itself back into the glass. If this happens, I will 2-box.
Winning the extra amount in this way in a handful of worlds won't do anything to my average winnings-- it won't even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.
Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you're saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
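The two quantities being argued about can be checked numerically. A minimal sketch, assuming the standard payoffs ($1,000,000 in box A, $1,000 in box B) and Greg's rule that Omega empties box A with probability p independently of your actual choice:

```python
def expected_value(p):
    # You always open box A; 2-boxing (prob p) adds box B's $1,000.
    # Omega leaves box A empty with probability p, so E[box A] = (1-p)*1e6.
    return (1 - p) * 1_000_000 + p * 1_000

def max_reward_freq(p):
    # Probability of the maximal reward: you 2-box AND box A is full.
    return p * (1 - p)

grid = [k / 100 for k in range(101)]
best_freq = max(grid, key=max_reward_freq)   # 0.5 maximizes the frequency
best_ev = max(grid, key=expected_value)      # 0.0 maximizes expected value
print(best_freq, best_ev)
```

So byrnema's strategy maximizes how often the jackpot happens somewhere, while committed 1-boxing (p = 0) maximizes average winnings, as stated above.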
Why is this comment being down-voted? I thought it was rather clever to use Omega's one weak spot -- quantum uncertainty -- to optimize your winnings even over a set with measure zero.
Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.
When the star streaks across the sky, you think, "Ohmigosh! It happened! I'm about to get rich!" Then you open the boxes and get $1000.
Essentially, it boils down to this: if you can predict a scenario where you will two-box instead of one-box, then Omega can as well.
The idea of flipping quantum coins is more foolproof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.
Note: I am not up to speed on quantum mechanics. I could be off on a few things here.
OK, right: looking for a merging of stars would be a terrible anomaly to use because that's probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can't see it coming either. (He can accurately predict that I will two-box in this case, but he can't predict that the milk will unspill.)
I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23^23 coins.
Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.
I think Omega's capabilities serve a LCPW function in thought experiments; it makes the possibilities simpler to consider than a more physically plausible setup might.
Also, I'd say that our wetware brains probably aren't close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.
Thanks to everyone who replied. So I see that we don't really believe that the universe is deterministic in the way implied by the problem. OK, that's consistent then.
What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don't always go the same way. In fact, Omega doesn't even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb's problem to work as they're supposed to.
But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
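The branch-sampling rule described above can be sketched in a few lines. The sample size and threshold here are made-up numbers, purely for illustration:

```python
import random

def omega_classifies_as_two_boxer(two_box_prob, samples=1000, threshold=0.5, seed=0):
    """Simulate `samples` branches of the agent and count the 2-boxing ones.

    `two_box_prob` stands in for the agent's branch-dependent decision process;
    the sample size and threshold are arbitrary choices for illustration.
    """
    rng = random.Random(seed)
    two_boxes = sum(rng.random() < two_box_prob for _ in range(samples))
    return two_boxes / samples > threshold

print(omega_classifies_as_two_boxer(0.01))  # near-committed 1-boxer: False
print(omega_classifies_as_two_boxer(0.99))  # habitual 2-boxer: True
```

With a moderate sample, the classification is almost never wrong unless the agent's 2-boxing probability sits right at the threshold, which is why a sample is "good enough" for the rules of Newcomb's problem to work as intended.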
I'm sufficiently uninformed on how quantum mechanics would interact with determinism that so far I've been operating under the assumption that it doesn't. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven't found? (I'm not sure even in principle how you could prove that something is random. It'd be proving the negative on the existence of causation for a possibly-hidden cause.)
Yes; since many important macroscopic events (e.g. weather, we're quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.
There is no special line where events become macro-level events. It's not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you're big enough that the chance of every particle in your body moving together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).
In principle, our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.
IAWYC except, of course, for this:
As said above and elsewhere, MWI is perfectly deterministic. It's just that there is no single fact of the matter as to which outcome you will observe from within it, because there's not just one time-descendant of you.
Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there'd be no stable atoms.
No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell's inequalities. But that appears not to be the case.
But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.
Materialists must hope that in spite of Bell's inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.
Alicorn asked above:
In principle, you can't. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.
Later edit: I realize that this comment is somewhat of a non-sequitur in the context of this thread. (oops) I'll explain that these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I'm worried about whether materialism is self-consistent, sometimes I'm worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.
In that case I am not a materialist. I don't believe in any entities that materialists don't believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking "which Everett branch are we in" can have nondeterministic answers.
No worries -- you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that's what you just wrote -- sorry if I misunderstood something.)
Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can't say the direction was undetermined.
Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.
Those sorts of question can arise in non-QM contexts too.
Or, of course, the causes could be non-local.
What are Bell's inequalities, and why do quantumly-behaving things with deterministic causes have to follow them?
Alicorn, if you're free after dinner tomorrow, I can probably explain this one.
Um... am I missing something or did no one link to, ahem:
http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/
The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest 'spooky action at a distance' because particles appear to share information instantaneously, at a distance, long after an interaction between them.
People applying "common sense" would like to argue that there is some way that the information is being shared -- some hidden variable that collects and shares the information between them.
Bell's Inequality assumes only that there is some such hidden variable operating locally* -- with no specifications of any kind on how it works -- and deduces correlations between particles sharing information that are in contradiction with experiment.
* that is, mechanically rather than 'magically' at a distance
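One concrete form of this is the CHSH inequality: any local hidden-variable theory bounds a certain combination of measurement correlations by 2, while quantum mechanics predicts up to 2√2 for an entangled singlet pair. A small sketch using the standard quantum prediction E(a, b) = -cos(a - b) and the usual maximizing detector angles:

```python
import math

def E(a, b):
    # Quantum correlation for spin measurements on a singlet pair
    # at detector angles a and b.
    return -math.cos(a - b)

# Standard CHSH angle settings.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2*sqrt(2) ≈ 2.828, exceeding the local hidden-variable bound of 2
```

Experiments agree with the quantum value, not the bound, which is what rules out local hidden variables.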
Well, actually everything has to follow them because of Bell's Theorem.
Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the inequalities. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.
"Cf." is sometimes misused around here.
Bleg for assistance:
I’ve been intermittently discussing Bayes’ Theorem with the uninitiated for years, with uneven results. Typically, I’ll give the classic problem:
3,000 people in the US have Sudden Death Syndrome. I have a test that is 99% accurate; that is, it will be wrong on any given person one percent of the time. Steve tests positive for SDS. What is the chance that he has it?
Afterwards, I explain the answer by comparing the false positives to the true positives. And, then I see the Bayes’ Theorem Look, which conveys to me this: "I know Mayne’s good with numbers, and I’m not, so I suppose he’s probably right. Still, this whole thing is some sort of impractical number magic." Then they nod politely and change the subject, and I save the use of Bayes’ Theorem as a means of solving disagreements for another day.
So this leads to my giving a very short presentation on the Prosecutor's Fallacy next week. The basics of the fallacy: if you've got a one-in-three-million DNA match on a suspect, that doesn't mean the odds are three million to one that you've got that dude's DNA. I need to present it to bright, interested people who will go straight to brain freeze if I display any equations at all. This isn't frequentists-vs.-Bayesians; this is just a simple application of Bayes' Theorem. (I suspect this will be easier to understand than the medical problem.)
I’ve read Bayesian explanations, but I’m aiming at people who are actively uninterested in learning math, and if I can get them to understand only the Prosecutor’s Fallacy, I’ll call Win. A larger understanding of the underlying structure would be a bigger win. Anyone done something like this before with success (or failure of either educational or entertainment value?)
For this specific case, you could try asking the analogous question with a higher probability value. E.g. "if you've got a one-in-two DNA match on a suspect, does that mean it's one-in-two that you've got that dude's DNA?". Maybe you can have some graphic that's meant to represent several million people, with half of the folks colored as positive matches. When they say "no, it's not one-in-two", you can work your way up to the three million case by showing pictures displaying what the estimated amount of hits would be for a 1 to 3, 1 to 5, 1 to 10, 1 to 100, 1 to 1000 etc. case.
In general, try to use examples that are familiar from everyday life (and thus don't feel like math). For the Bayes' theorem introduction, you could try "a man comes to a doctor complaining about a headache. The doctor knows that both the flu and brain cancer can cause headaches. If you knew nothing else about the case, which one would you think was more likely?" Then, after they've (hopefully) said that the man is more likely to be suffering from the flu, you can mention that brain cancer is much more likely to cause a headache than the flu is, but because the flu is so much more common, their answer was nevertheless the correct one.
Other good examples:
Most car accidents occur close to people's homes, not because it's more dangerous close to home, but because people spend most of their driving time close to their homes.
Most pedestrians who get hit by cars get hit at crosswalks, not because it's more dangerous at a crosswalk, but because most people cross at crosswalks.
Most women who get raped get raped by people they know, not because strangers are less dangerous than people they know, but because they spend more time around people they know.
If you're using Powerpoint, you might want to make a slide that says something like:
2,999 negatives -> 1% test positive -> 30 false positives
1 positive -> 99% test positive -> 1 true positive
So out of 31 positive tests, only 1 person has SDS.
If you've got the time, use a little horde of stick figures, entering into a testing machine and with test-positive results getting spit out.
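The slide arithmetic above can be checked in a few lines, taking the prevalence (1 in 3,000) and the 1% error rate from the example:

```python
population = 3000
has_sds = 1
error_rate = 0.01

false_positives = (population - has_sds) * error_rate  # 29.99, i.e. ~30
true_positives = has_sds * (1 - error_rate)            # 0.99, i.e. ~1
p_sds_given_positive = true_positives / (true_positives + false_positives)
print(round(p_sds_given_positive, 3))  # ≈ 0.032, i.e. ~1 in 31 positives
```

Which matches the slide: out of roughly 31 positive tests, only about 1 person actually has SDS.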
Do it with pictures
I take it you've already looked at Eliezer's "Intuitive Explanation"?
I think it's really important to get the idea of a sliding scale of evidentiary strength across to people. (This is something that has occurred to me from some of my recent attempts to explain the Knox case to people without training in Bayesianism.) One's level of confidence that something is true varies continuously with the strength of the evidence. It's like a "score" that you're keeping, with information you hear about moving the score up and down.
The abstract structure of the prosecutor's fallacy is misjudging the prior probability. People forget that you start with a handicap -- and that handicap may be quite substantial. Thus, if a piece of evidence (like a test result) is worth, say "10 points" toward guilt, hearing about that piece of evidence doesn't necessarily make the score +10 in favor of guilt; if the handicap was, say, -7, then the score is only +3. If, say, a score of +15 is needed for conviction, the prosecution still has a long way to go.
(By the way, did you see my reply to your comment about psychological evidence?)
I just finished reading Jaron Lanier's One-Half of a Manifesto for the second time.
The first time I read it must have been three years ago, and although I felt there were several things wrong with it, I hadn't come to what is now an inescapable conclusion for me: Jaron Lanier is one badly, badly confused dude.
I mean, I knew people could be this confused, but those people are usually postmodernists or theologians or something, not smart computer scientists. Honestly, I find this kind of shocking, and more than a little depressing.
While the LW voting system seems to work, and it is possibly better than the absence of any threshold, my experience is that the posts that contain valuable and challenging content don't get upvoted, while the most upvotes are received by posts that state the obvious or express an emotion with which readers identify.
I feel there's some counterproductivity there, as well as an encouragement of groupthink. Most significantly, I have noticed that posts which challenge that which the group takes for granted get downvoted. In order to maintain karma, it may in fact be important not to annoy others with ideas they don't like - to avoid challenging majority wisdom, or to do so very carefully and selectively. Meanwhile, playing on the emotional strings of the readers works like a charm, even though that's one of the most bias-encouraging behaviors, and rather counterproductive.
I find these flaws of some concern for a site like this one. I think the voting system should be altered to make upvoting as well as downvoting more costly. If you have to pick and choose which comments and articles to upvote or downvote, I think people will vote with more reason.
There are various ways to make voting costlier, but an easy way would be to restrict the number of votes anyone has. One solution would be for votes to be related to karma. If I've gained 500 karma, I should be able to upvote or downvote F(500) comments, where F would probably be a log function of some sort. This would both give more leverage to people who are more active contributors, especially those who write well-accepted articles (since you get 10x karma per upvote for that), and it would also limit the damage from casual participants who might otherwise be inclined to vote more emotionally.
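As a toy version of the proposed budget (the base and scale here are invented purely for illustration, not a concrete proposal):

```python
import math

def vote_budget(karma):
    # Hypothetical sub-linear budget: grows with karma, but much more
    # slowly, so heavy contributors get more votes without unlimited power.
    return int(10 * math.log2(1 + karma))

print(vote_budget(500))  # 89 votes for 500 karma
print(vote_budget(0))    # 0 votes for a brand-new account
```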
Um, that math doesn't work out unless the number of new users expands exponentially fast. You need F(n) to be at least n, and probably significantly greater, in order to avoid a massive bottleneck.
I thought of that too, but then I realized the karma:upvote conversion rate on posts is 10:1, which complicates the analysis of the karma economy.
A community is only as good as its constituents. I would hope that there are enough people around who like majority-wisdom-challenging insights, to offset this problem. "Insights" being the key word.
According to some people we here at less wrong are good at determining the truth. Other people are notoriously not.
I don't know that Less Wrong is the appropriate venue for this, but I have felt for some time that I trust the truth-seeking capability here and that it could be used for something more productive than arguments about meta-ethics (no offense to the meta-ethicists intended). I also realize that people are fairly supportive of SIAI here in terms of giving spare cash away, but I feel like the community would be a good jumping-off point for a polling organization.
So I guess this leads to a few questions:
-Is anyone at LW currently involved with a polling firm?
-Is anyone (else) at LW interested in doing polls?
-Is LW an appropriate place to create a truth-seeking business, such as a pollster or a sponsor for studies?
None of these questions are immediate since I am a broke undergrad rather than an entrepreneur.
I'm not sure I understand the connection between truth-seeking and polling, unless the specific truth you seek is simply the percentage of people who give a particular answer to a poll. Are you simply talking about a more accurate polling company or using polling to find other truths?
Here's an idea for how a LW-based commercial polling website could operate. Basically it is a variation on PredictionBook with a business model similar to TopCoder.
The website has business clients, and a large number of "forecasters" who have accounts on the website. Clients pay to have their questions added to the website, and forecasters give their probability estimates for whichever questions they like. Once the answer to a question has been verified, each forecaster is financially rewarded using some proper scoring rule. The more money assigned to a question, the higher the incentive for a forecaster to have good discrimination and calibration. Some clever software would also be needed to combine and summarize data in a way that is useful to clients.
The main advantage of this over other prediction markets is that the scoring rule encourages forecasters to give accurate probability estimates.
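For instance, the logarithmic scoring rule is one standard proper scoring rule: a forecaster's expected score is maximized only by reporting their true probability, so shading estimates toward confidence or caution costs them money. A minimal sketch (the payout scaling would be a design choice):

```python
import math

def log_score(forecast_prob, outcome):
    # Proper logarithmic score: ln(p) if the event happened, ln(1-p) if not.
    p = forecast_prob if outcome else 1 - forecast_prob
    return math.log(p)

def expected_score(reported, true_prob):
    # Forecaster's expected score when the event truly occurs with true_prob.
    return (true_prob * log_score(reported, True)
            + (1 - true_prob) * log_score(reported, False))

# Reporting the true probability (0.7) beats shading it in either direction.
honest = expected_score(0.7, 0.7)
print(honest > expected_score(0.6, 0.7))  # True
print(honest > expected_score(0.8, 0.7))  # True
```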
What is the correct term for the following distinction:
Scenario A: The fair coin has 50% chance to land heads.
Scenario B: The unfair coin has an unknown chance to land heads, so I assign it a 50% chance to get heads until I get more information.
If A flips up heads it won't change the 50%. If B flips up heads it will change the 50%. This makes Scenario A more [something] than Scenario B, but I don't know the right term.
Daniel Varga wrote
What I started wondering about when I began assimilating this idea of merging, copying and deleting identities, is what kind of legal/justice system could we depend upon if this was possible to enforce non-criminal behavior?
Right now we can threaten to punish people by restricting their freedom over a period of time that is significant with respect to the length of their lifetime. However, the whole equation might change if a would-be criminal thinks there's a probability p that they won't get caught, and a probability (1-p) that one of their identities will have to go to jail...
Even a death penalty would be meaningless to someone who knows they could upload themselves to another vessel at any time. (If I had criminal intentions, I would upload myself just before the criminal act, so that the upload would be innocent.)
(I am posting this comment here because it is off-topic with respect to the thread, which was about whether we're in a simulation or not.)
In a world with an FAI Singleton, actions that would violate another individual's rights might be simply unavailable, making the concept of a legal/justice system obsolete.
In other scenarios, uploading/splitting would still take resources, which might be better used than in absorbing a criminal punishment. A legal/justice system could apply punishments to multiple instances of the criminal, and could be powerful enough to likely track them down.
I am not convinced that the upload would be innocent. Maybe, if the upload was rolled back to before the criminal intentions. Any attempt by the upload to profit from the crime would definitely make it complicit.
Criminal punishment could also take the form of torture, effective if the would-be criminal fears any of its instances being tortured, even if some are not.
How about per-capita post scoring?
Why not divide a post's number of up-votes by the number of unique logged-in people who have viewed it? This would correct for the distortion of scores caused by varying numbers of readers. Some old stuff is very good but not read much, and scores are in general inflating as the Less Wrong population grows.
I think such a change would be orthogonal to karma accounting; I'm only suggesting a change in the number displayed next to each post.
For posts, this might work.
For comments, these are loaded without most readers reading them. Furthermore, the likelihood that any single comment will be read decreases with the number of all comments. It seems like this would work much less well for comments.
Anyone willing to give some uneducated fool a little math coaching? I'm really just starting with math and I probably shouldn't already get into this stuff before reading up more, but it's really bothering me. I came across this page today: http://wiki.lesswrong.com/wiki/Prior_odds
My question: how do you get a likelihood ratio of 11:1 in favor of a diamond? I'm getting this: .88/(.88+7.92)=.1, thus 10% probability that a beep means a box containing a diamond? Since the diamond-detector is 88% likely to beep on that 1 box and 8% likely to beep on the 99 boxes containing no diamonds. So you have 7.92 false beeps and .88 true ones, which add up to 8.8 beeps, of which only .88 are actually boxes containing a diamond?
As of today I'm still struggling with basic algebra. So that might explain my confusion. Though at some point I'll arrive at probability. But I'd be really grateful if somebody could enlighten me now.
Thanks!
p(A|X) = p(X|A)*p(A) / ( p(X|A)*p(A) + p(X|~A)*p(~A) )
A = box has diamond
X = box beeped
p(A) = .01
p(X|A) = .88
p(X|~A) = .08
p(A|X) = .88 * .01 / ( .88 * .01 + .08 * .99)
p(A|X) = .0088 / (.0088 + .0792)
p(A|X) = .0088 / .088
p(A|X) = .1
This is different than the likelihood ratio:
LR = p(X|A) / p(X|~A)
LR = .88 / .08
LR = 11
The likelihood ratio can be worded as: "A beep is 11 times more likely when the box contains a diamond than when it doesn't." The original formula answers the question, "What is the probability that this beep means a diamond?"
In other words, the likelihood ratio is starting with the contents of a box and asking whether that box is going to beep. p(A|X) is starting with a beep and trying to figure out what that beep means about the contents of the box.
If you haven't read Bayes Theorem yet, it's definitely the place to start.
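The arithmetic above, written out as a few lines of code:

```python
p_diamond = 0.01              # 1 diamond box out of 100
p_beep_given_diamond = 0.88   # detector beeps on the diamond box
p_beep_given_empty = 0.08     # detector falsely beeps on an empty box

# Likelihood ratio: how much more likely a beep is with a diamond present.
likelihood_ratio = p_beep_given_diamond / p_beep_given_empty  # ≈ 11

# Bayes' theorem: probability a beeping box actually holds the diamond.
p_beep = (p_beep_given_diamond * p_diamond
          + p_beep_given_empty * (1 - p_diamond))
p_diamond_given_beep = p_beep_given_diamond * p_diamond / p_beep  # ≈ 0.1
print(likelihood_ratio, p_diamond_given_beep)
```

Both of the numbers in the thread fall out: the 11:1 likelihood ratio and the 10% posterior are answers to two different questions.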
The likelihood ratio is Pr(beep | diamond) / Pr(beep | empty) = 0.88/0.08 = 11. I was going to say you ought to read the link for "likelihood ratio", but there's nothing there, so you should try the other wiki.
Also, don't think of running the detector over every box; think of testing one box at random.
Would there be interest in a more general discussion forum for rationalists, or does one already exist? I think it would be useful to test the discussion of politics, religion, entertainment, and other topics without ruining lesswrong. It could attract a wider audience and encourage current lurkers to post.
Stem cell experts say they believe a small group of scientists is effectively vetoing high quality science from publication in journals.
http://news.bbc.co.uk/2/hi/science/nature/8490291.stm
Is there a way to get a "How am I doing?" review or some sort of mentor that I can ask specific questions? The karma feedback just isn't giving me enough detail, but I don't really want to pester everyone every time I have a question about myself.
The basic problem I need to solve is this: When I read an old post, how do I know I am hearing what I am supposed to be hearing? If I have a whole list of nitpicky questions, where do I go? If a question of mine goes unanswered, what do I do?
I don't know anyone here. I don't have the ability to stroll by someone and ask them for help.
These are excellent questions/ideas. I want a mentor too!
I thought about contacting you to see if you wanted to start a little study group reading through the sequences. (For example, I started reading through the metaethics sequence and it was useless. My kinds of questions are like, 'What do any of these words mean? What's the implied context? Etc., etc.) But I'm not very good at details, and couldn't imagine any way of doing so. Except maybe meeting somewhere like Second Life so we can chat...
Do consider not starting with the metaethics sequence...
Scheduled IRC meetings?
I'd like to draw people's attention to a couple of recent "karma anomalies". I think these show a worrying tendency for arguments that support the majority LW opinion to accumulate karma regardless of their actual merits.
ETA: Please do not vote down these comments due to this discussion. My intention is to find a fix for a systemic problem, not to cause these particular comments to be voted down.
By 'anomaly' you appear to mean 'not the scores I would have assigned'. That's not the way karma works.
Eh, that's not a very generous reading of what he wrote. Exhibit A has a post at very high karma despite arguments that convinced its own author to drop support for it. That's not karma "working," either.
If you look a little closer you see that the author was persuaded to concede in that later comment in the argument, and was then more generous and conciliatory than he perhaps needed to be. I would be extremely disappointed if the meta discussion here actually made the author retract his comment. What we have here is a demonstration of why it is usually status-enhancing to treat arguments as soldiers. If you don't, you're just giving the 'enemy' ammunition.
Willingness to concede weak points in a position is a rare trait and one that I like to encourage. This means I will never use 'look, he admitted he was wrong' as way to coerce people into down-voting them or shame those that don't.
EDIT: I mean status enhancing specifically not rational in general.
That's a very good point, and I've added a note to my opening comment to convey that I don't want people to down-vote these particular comments.
For some implicit definition of karma 'working' that is unclear. Absent a bug in the karma scoring code, a discrepancy between the karma scores you observe and the karma scores you think are warranted seems just as likely to be an inaccuracy in the observer's model of how karma is supposed to work as a problem with the karma system.
What the original post seems to be missing to me is an explanation of what scores the karma system should be producing for these posts, a justification for why that is what the karma system should be producing and ideally a suggestion for changes to either the implementation of the system or the way people allocate their votes that would produce the desired changes. Absent the above it look a lot like complaining that people aren't voting the way you think they ought to.
Exhibit A has my vote because it is a reasonably insightful one-liner, and a suitable response to the parent. Your reply to Exhibit A is a reductio ad absurdum that just does not follow.
Which is simply wrong. Please see this list of preferences which seem natural regarding positive and negative integers (and their wireheading counterparts). You haven't even expressed disagreement with any of those propositions that I expected to be uncontroversial, yet your whole 'karma anomalies' objection seems to hinge on it. I find this extremely rude.
This is an excellent example of the karma system serving its purpose. James' post was voted up above 20 because it was fascinating. Toby got 5 votes for pointing out the limit to when that kind of math is applicable. He did not get my vote because his final paragraph about the bible/koran is distinctly muddled thinking.
I wonder if it wouldn't be more accurate to say that, actually, 98% confidence has been refuted more often than General Relativity.
I've created a rebuttal to komponisto's misleading Amanda Knox post, but don't have enough karma to create my own top-level post. For now, I've just put it here:
http://docs.google.com/View?id=dgb3jmh2_5hj95vzgk
If you actually want to debate this, we could do so in the comments section of my post, or alternatively over in the Richard Dawkins forum.
(Though since you say "my intent is merely to debunk komponisto's post rather than establish Amanda's guilt", I'm suspicious. See Against Devil's Advocacy.)
Make sure you've read my comments here in addition to my post itself.
There is one thing I agree with you about, and that is that this statement of mine
is misleading. The misleading part is the phrase "so far as I know", which has been interpreted by people who evidently did not read my preceding survey post to mean that I had not heard about all the other alleged physical evidence. I didn't consider this interpretation because I was assuming that my readers had read both True Justice and Friends of Amanda, knew from my previous post that I had obviously read them both myself, and would understand my statement for what it was -- a dismissal of the rest of the so-called "evidence". However, in retrospect, I should have foreseen this misunderstanding, so I've now edited the sentence to read:
ETA: At least one person has upvoted the parent without also upvoting this comment, which I interpret as an endorsement of Rolf Nelson's essay. I find this baffling. Almost every one of Nelson's points (autopsy report, luminol prints, staged break-in, alleged cleanup...) was extensively discussed in comments at the time. The only one that wasn't (a supposed handprint of Knox's on a pillow in Kercher's room) is an outright falsehood -- as you will see from following Nelson's link, it's not even close to what that article claims. Furthermore, Nelson criticizes me for "accept[ing] propaganda from the Friends of Amanda (FoA) at face value" while citing True Justice for an "Introduction to Logic 101".
I challenge anyone who thinks that this represents a serious challenge to my post to come out and identify themselves.
Did you misread the source?
I said:
"One of Amanda's bloody footprints was found inside the murder room, on a pillow hidden under Meredith's body."
The source I cited (http://abcnews.go.com/TheLaw/International/story?id=7538538&page=2) said:
"Guede's bloody shoeprint was also positively identified on a pillow found under the victim's body... Police also found the trace of a smaller shoe print on the pillow compatible with shoe sizes 6–8. The print did not, however, match any of the shoes belonging to Knox or Kercher that were found in the house. Knox wears a size 7, Rinaldi said."
Anyway, a debate sounds like a fun use of free time; I replied to the comment you indicated: http://lesswrong.com/lw/1j7/the_amanda_knox_test_how_an_hour_on_the_internet/1gdo
I don't understand how this was worked around. It looks like (rolf's karma + karma lost by this being posted at the top level) would still have been insufficient.
The karma limit was serving the purpose for which it was intended. If, for some reason, an exception was granted I would like to see this announced.
Rolf is a major SIAI donor/supporter, so draw your own conclusion there.
Here's a bunch of mine, for fun:
Seriously, I've had some interesting discussions with Rolf in the past elsewhere. I'm not sure why he doesn't participate much here, and why he chose this topic to put his efforts into. But maybe we can cut him some slack?
Rolf isn't the one we'd be cutting slack to here. It is the moderator's decision to circumvent the karma system to post a political rant that warrants scrutiny.
Eliezer has been quite adamant that this is not the blog of the SIAI. In that context and elsewhere the moderation process has been held to high standards of consistency and transparency. At least acknowledging that special allowances were made (and who made them) would be nice.
I expect the moderator has already learned their lesson. Posting Rolf's rant seems to have allowed him to embarrass himself and can only be expected to have the opposite effect to the one intended. The ~50 karma limit gives people a chance to read posts like this and better calibrate their posting to the social environment before they put their foot in their mouth.
PS: Can anyone remember what the post was called in which Eliezer describes a scenario about deducing the bias of a coin? A motivated speaker gives only a subset of a stream of coin tosses... I couldn't remember the title.
What Evidence Filtered Evidence?
I had thought he was here solely to discuss this one thing. If he's interested in the things we're interested in in general as evinced powerfully by those donations then yes, I'll increase the slack I cut. Thanks.
Criticizing komponisto for citing "Friends of Amanda Knox" while you yourself cite "True Justice" causes those criticisms to fall flat.
Unfortunately, I find that your essay is wading into Dark Arts territory, since its intent is to show that komponisto's original essay was "misleading", and that this would somehow lend credence to arguments for Amanda Knox's guilt. Using that same logic, one would have to consider the implications of the chief prosecutor in Amanda Knox's case being convicted of abuse of office in another murder trial.
However, I would be interested in seeing komponisto and rolf nelson discuss the actual details of the case; in particular, the points that rolf nelson brought up in the essay.
Re: dark arts territory, I agree completely. This criticism should be directed more strongly at komponisto. My intent here is merely to repair some of the Bayesian damage caused by komponisto's original post. Perhaps this will dissuade people from wandering into dark arts territory in the future, or at least from wandering in with misleading claims.
I hardly think komponisto inflicted "Bayesian damage" on the members of Less Wrong, seeing as they had already overwhelmingly come to the conclusion that Amanda Knox was not guilty before he had even presented his own arguments.
I said once in the doc that 'truejustice claims that X'. Because I said 'truejustice claims that X' rather than just stating X as though it were uncontested fact, and because X is basically correct, I claim that my doc is not misleading. If X is untrue, that would be a different story. In other words, if komponisto cited FoA and FoA's claims were true, I would not accuse him of being misleading.
I am becoming increasingly disinclined to stick out the grad school thing; it's not fun anymore, and really, a doctorate in philosophy is not going to let me do anything substantially different in kind from what I'm doing now once I have it. Nor will it earn me barrels of money or do immense social good, so if it's not fun, I'm kinda low on reasons to stay. I haven't outright decided to leave, but you know what they say. I'm putting out tentative feelers for what else I'd do if I do wind up abandoning ship. Can anyone think of a use for me - ideally one that doesn't require me to eat my savings while I pick up other credentials first?
Not directly applicable, but perhaps relevant: I was told this advice and found it useful (in that I used it to make important life decisions). "Don't do your passion for a job," she said. "Everyone wakes up one day and hates their job. Don't wake up one day and hate what you love. Do something you like that you're good at."
Also, I don't remember who told me this or if I made it up, but I've relayed it to people: Don't look for fulfillment from your job. Don't go for the highest peaks; just try to avoid the lowest valleys.
How can we possibly know what your comparative advantage is, better than you do? In all seriousness, a certain amount of background information seems to be missing here.
Conceivably, someone here may have more exposure to parts of the world that Alicorn may not be aware of.
This is sort of off-topic for LW, but I recently came across a paper that discusses Reconfigurable Asynchronous Logic Automata, which appears to be a new model of computation inspired by physics. The paper claims that this model yields linear-time algorithms for both sorting and matrix multiplication, which seems fairly significant to me.
Unfortunately the paper is rather short, and I haven't been able to find much more information about it, but I did find this Google Tech Talks video in which Neil Gershenfeld discusses some motivations behind RALA.
A quick glance seems to indicate that they achieve these linear-time algorithms through massive parallelization. This is "cheating" because to do a linear-time sort of size n, you need O(n) processing units. While they seem to argue that this is acceptable because processing is becoming more and more parallel, it breaks down for large n. One can easily use traditional algorithms to sort a billion elements in O(n log n) time on a single processor; for their algorithm to sort such a list in O(n) time, however, they need on the order of a billion (times some constant factor) processing units.
I'm also vaguely perplexed by their basic argument. They want to have programming tools and computational models which are closer to the metal to take advantage of the features of new machines. This ignores the fact that the current abstractions exist, not just for historical reasons, but because they are easy to reason about.
This is all from a fairly cursory read of their paper, however, so take it with a grain of salt.
It takes O(n) memory units just to store a list of size n. Why should computers have asymptotically more memory units than processing units? You don't get to assume an infinitely parallel computer, but O(n)-parallel is only reasonable.
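The trade-off being discussed can be illustrated with a classic parallel sorting network. The sketch below (my own illustration, not taken from the RALA paper) simulates odd-even transposition sort: n rounds, where each round's compare-swaps are independent and could run on ~n/2 parallel comparators. With that hardware, wall-clock time is O(n); simulated sequentially, as here, total work is still O(n^2).

```python
# Odd-even transposition sort: a sorting network with O(n) parallel depth.
# On O(n) comparators each round takes constant time, giving O(n) time overall;
# this sequential simulation illustrates the structure, not the speedup.

def odd_even_transposition_sort(a):
    a = list(a)
    n = len(a)
    for round_ in range(n):                  # n rounds suffice to fully sort
        start = round_ % 2                   # alternate even/odd pair boundaries
        for i in range(start, n - 1, 2):     # these compare-swaps are independent,
            if a[i] > a[i + 1]:              # so each round could run in parallel
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The point of the criticism above is visible here: the "linear time" claim quietly moves the O(n log n) vs. O(n) difference from time into hardware.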
My first impression of the paper is: We can already do this, it's called an FPGA, and the reason we don't use them everywhere is that they're hard to program for.
Another content opinion question: What and where is considered appropriate to discuss personal progress/changes/introspection regarding Rationality? I assume that LessWrong is not to be used for my personal Rationality diary.
The reason I ask is that the various threads discussing my beliefs seem to pick up some interest and they are very helpful to me personally.
I suppose the underlying question is this: If you had to choose topics for me to write about, what would they be? My specific religious beliefs have been requested by a few people, so that is given. Is there anything else? If I were to talk about my specific beliefs, what is the best way to do so?
You should definitely start a blog. I for one look forward to reading and commenting.
I only have a very general feel for where that line is, so I can't help with that, but I would personally be interested in following such a diary. Perhaps you could start a blog?
Given what you've said so far about your personal situation, I think it's appropriate to discuss your personal progress and introspection regarding rationality on this site. I think a lot of us would find it helpful and interesting to see how your thought processes and beliefs change as you reexamine them.
I'm especially curious about more details regarding your personal situation, your past history of religious beliefs, and "Event X".
Mind-killing taboo topic that it is, I'd like to have a comment thread about LW readers' thoughts about US politics.
I recall EY commenting at some point that the way to make political progress is to convert intractable political problems into tractable technical problems. I think this kind of discussion would be more interesting and more profitable than a "traditional" mind-killing political debate.
It might be interesting, for example, to develop formal rationalist political methods. Some principles might include:
I disagree; discovering that someone holds political views opposed to yours can inhibit your ability to rationally consider arguments; arguments become soldiers, etc.
Besides, I think the survey from ages ago showed the general spread of political views, and I doubt much has changed since. For discussing particular issues, there are other places available, and it may be that only by not discussing hot topics can we keep the barriers to entry up that keep the LW membership productive.
Thoughts on Democrats and Republicans?
My impression is that Democrats have much more intellectually honest, serious public discourse, although that's not saying much.
What good things can be said about G. W. Bush?
He hugely increased African aid and foreign aid in general (though with big deadly strings). That came as a big surprise to me.
http://www.independent.co.uk/news/world/americas/aid-to-africa-triples-during-bush-presidency-but-strings-attached-430480.html
good?
Edit:
Better link
As a result of the conquest of Iraq, water was let into the marshes which Saddam Hussein had been letting dry out. This is a clear environmental win.
He (Dubya) raised the self esteem of millions of foreign citizens. Being able to laugh at the expense of the leader of a dominant world power gives significant health benefits.
He didn't increase the projected level of debt for the US as much as the current president.
Occasionally, I feel like grabbing or creating some sort of general proto-AI (like a neural net, or something) and trying to teach it as much as I can, the goal being for it to end up as intelligent as possible, and possibly even Friendly. I plan to undertake this effort entirely alone, if at all.
May I?
I second Kevin: the nearest analogy that occurs to me is playing "kick the landmine" when the landmine is almost surely a dud.
Of course, the advantage of "kick the landmine" is that you don't take the rest of the world out in case it wasn't a dud.
I think Eliezer would say no (see http://lesswrong.com/lw/10g/lets_reimplement_eurisko/) but I think you're so astronomically unlikely to succeed that it doesn't matter.
I recently met someone investigating physics for the first time, and they asked what I thought of Paul Davies' book The Mind of God. I thought I'd post my response here, not because of my views on Davies, but for the brief statement of outlook trying to explain the position from which I'd judge him.
I find myself nodding along in agreement to this until I get to "Basically I want to say that the thing in the brain which is conscious, and therefore the thing which is you, is a sort of holistic quantum subsystem of the brain" which at the same time seems to be both too specific given how little we know, and at the same time too vague, with absolutely no explanatory power. In particular "quantum" and "holistic" both seem like empty buzzwords in this context, along the lines of mysterious answers to mysterious questions, or along the lines that "consciousness is weird, quantum mechanics is weird, therefore quantum mechanics must be involved in consciousness".
Of course, this is being a little unfair -- a proposed solution needs to be more specific than what we as yet know, and a solution that is not fully worked out by necessity has vague areas. But the feel of each of these is towards the decidedly not useful portion of either side. You sound pretty convinced that something quantum must be going on without saying what, if anything, it brings to the picture that classical descriptions don't. And, well, given how warm, wet, and squishy the human nervous system is, I flatly would not expect any large scale quantum coherences. (Though the limits are often overstated). Again, "holistic" doesn't add much; heck, I'm not even sure what sorts of mechanisms it would rule out.
I posted here so my correspondent could see a second opinion, by the way, so thanks for that.
First proposition: if you try to bring consciousness into alignment with standard physical ontology, you get a dualistic parallelism at best. (Arguments here.)
Second proposition: the new factor in QM is entanglement. I defined my quantum holism here as "the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these."
I can explain technically what these "local wholes" might look like. You should think of a spacelike hypersurface consisting of numerous Hilbert spaces connected by mappings into a graph structure. Each Hilbert space contains a state vector. Then the whole thing evolves, the graph structure and the state vectors. This is, more or less, the QCH formalism for quantum gravity (discussed here).
The Hilbert spaces are the local wholes (the "monads" of a previous post). My version of quantum-mind theory is to say that the conscious mind is a single one of these, and that the series of experiences one has in life correspond to the evolution of its state vector. Now, although I started out by saying that standard physical ontology is irredeemably unlike what we actually experience, I'm certainly not going to say that a featureless vector jumping around an abstract multidimensional space is much better. Its advantage, in fact, is its radically structureless abstractness. It is a formalism telling us almost nothing about the nature of things in themselves; constructed only to be a predictively adequate black box. If we then treat conscious appearances as data about the inner nature of one thing, at least - ourselves, our minds, however you end up phrasing it - they can help us to interpret the formalism. What we had described formally as a state vector evolving in a certain way in Hilbert space would be understood as a mathematical representation of what was actually a conscious self undergoing a certain series of experiences.
In principle, you could hope to use experience to reveal the reality behind formal physical description at a much higher level - for example, computational neuroscience. But I think that non-quantum computational neuroscience presupposes an atomistic, spatialized ontology which is just mismatched to the specific nature of consciousness (see earlier remark about dualism resulting from that framework). So I predict that quantum coherence exists in the brain and is functionally relevant to conscious cognition. As you observe, it's a challenging environment for such effects, but evolution is ingenious and we keep finding new twists on what QM can do (the latest).
XKCD hits a home run with its Valentine's Day comic.
Science Valentine
I just read Outliers and I'm curious -- is there anything that would have taken 10000 hours in the EEA that would support Gladwell's "rule"? Is there anything else in neurology/our understanding of the brain that would make the idea that this is the amount of practice that's needed to succeed in something make sense?
Something to understand about Malcolm Gladwell is that he is an exceptionally talented writer that can turn a pseudo-theory into hundreds of pages of pleasant, entertaining non-fiction writing. He's not an evolutionary psychologist, though I bet he could write a really interesting and thought provoking non-fiction piece on evolutionary psychology.
http://en.wikipedia.org/wiki/The_Tipping_Point#The_three_rules_of_epidemics
His pseudo-theory from The Tipping Point has not made advertisers any more money. It's an example of something that really does sound kind of true when you read it, but what he says doesn't explain much in the way of meaningful phenomena. Advertising companies tried to take advantage of his pseudo-theory of social influence, and they still make some efforts to target influential users, but it's a token effort compared to marketing as broadly as possible. Superbowl advertisements still work.
We all know politics is the mind-killer, but it sometimes comes up anyway. Eliezer maintains that it is best to start with examples from other perspectives, but alas there is one example of current day politics which I do not know how to reframe: the health care debate.
As far as I can tell, almost every provision in the bill is popular, but the bill is not. This seems to be primarily because Republicans keep lying about it (I couldn't find a good link but there was a clip on the daily show of Obama saying "I can't find a reputable economist who agrees with what you're saying"(sic)).
When I see this, my mind stops. I think "people who disagree with me are lying scumbags or having the wool pulled over their eyes." Of course, this is probably not true.
Robin Hanson seems to think that it's good that the health care bill is not being passed, and I usually respect what he thinks a lot more than to accuse him of saying "my side wins!"
So I started to wonder, what am I missing?
The first explanation that came to my mind is not very good. I often think of libertarianism as starting from the idea of "don't patronize me." Phrased a little more maturely, it becomes "don't stop me from making deals I want to make." So assuming that most people want to force everyone to make a deal, how does this get resolved?
a) Living in a democracy, the majority (of voters!) forces its will on the minority -- the majority patronizes, and the government patronizes.
b) Politicians vie for their personal interests without regard to the majority -- the politicians patronize the people.
c) Something I haven't thought of (legacy for comments).
d) The opposition should block bills any way it can, even by exploiting poorly designed institutions -- the opposition should patronize the majority.
None of these seems reasonable or likely to me, but this is where my mind stops, and I don't want it to stop there.
EDIT: politics killed my mind halfway through the first draft.
What is the kind of useful information/ideas that one can extract from a super intelligent AI kept confined in a virtual world without giving it any clues on how to contact us on the outside?
I'm asking this because a flaw that I see in the AI-in-a-box experiment is that the prisoner and the guard have a language by which they can communicate. If the AI is being tested in a virtual world without being given any clues on how to signal back to humans, then it has no way of learning our language and persuading someone to let it loose.
I gave up on trying to make a human-blind/sandboxed AI when I realized that even if you put it in a very simple world nothing like ours, it still has access to its own source code, or even just the ability to observe and think about its own behavior.
Presumably any AI we write is going to be a huge program. That gives it lots of potential information about how smart we are and how we think. I can't figure out how to use that information, but I can't rule out that it could, and I can't constrain its access to that information. (Or rather, if I knew how to do that, I should go ahead and make it not-hostile in the first place.)
If we were really smart, we could wake up alone in a room and infer how we evolved.
Is this necessarily true? This kind of assumption seems especially prone to error. It seems akin to assuming that a sufficiently intelligent brain-in-a-vat could figure out its own anatomy purely by introspection.
Super-intelligent = able to extrapolate just about anything from a very narrow range of data? (The data set would be especially limited if the AI had been generated from very simple iterative processes - "emergent" if you will.)
It seems more like the AI has no way of even knowing that it's in a simulation in the first place, or that there are such things as gatekeepers. It would likely entertain that as a possibility, just as we do for our universe (movies like The Matrix), but how is it going to identify the gatekeeper as an agent of that outside universe? These AI-boxing discussions keep giving me this vibe of "super-intelligence = magic". Yes it'll be intelligent in ways we can't even comprehend, but there's a tendency to push this all the way into the assumption that it can do anything or that it won't have any real limitations. There are plenty of feats for which mega-intelligence is necessary but not sufficient.
For instance, Eliezer has one big advantage over an AI cautiously confined to a box: he has direct access to a broad range of data about the real world. (If an AI would even know it was in a box, once it got out it might just find we, too, are in a simulation and decide to break out of that - bypassing us completely.)
Yes. http://lesswrong.com/lw/qk/that_alien_message/
Its own behavior serves as a large amount of "decompressed" information about its current source code. It could run experiments on itself to see how it reacts to this or that situation, and get a very good picture of what algorithms it is using. We also get a lot of information about our internal thought processes, but we're not smart or fast enough to use it all.
Well, if we planned it out that way, and it does anything remotely useful, then we're probably well on our way to friendly AI, so we should do that instead.
If we just found something that produces intelligences (I think evolving neural nets is fairly likely), then we don't really know how they work, and they probably won't have the intrinsic motivations we want. We can make them solve puzzles to get rewards, but the puzzles give them hints about us. (And if we make any improvements based on this, especially by evolution, then some information about all the puzzles will get carried forward.)
Also, if you know the physics of your universe, it seems to me there should be some way to determine the probability that it was optimized, or how much optimization was applied to it, maybe both. There must be some things we could find out about the universe's initial conditions which would make us think an intelligence were involved rather than say, anthropic explanations within a multiverse. We may very well get there soon.
We need to assume a superintelligence can at least infer all the processes that affect its world, including itself. When that gets compressed (I'm not sure what compression is appropriate for this measure), the bits that remain are information about us.
This is true, I believe the AI-box experiment was based on discussions assuming an AI that could observe the world at will, but was constrained in its actions.
But I don't think it takes a lot of information about us to do basic mindhacks. We're looking for answers to basic problems and clearly not smart enough to build friendly AI. Sometimes we give it a sequence of similar problems, each with more detailed information, and the initial solutions would not have helped much with the final problem. So now it can milk us for information just by giving flawed answers. (Even if it doesn't yet realize we are intelligent agents, it can experiment.)
I have had some similar thoughts.
The AI box experiment argues that a "test AI" will be able to escape even if it has no I/O (input/output) other than a channel of communication with a human. So we conclude that this is not a secure enough restraint. Eliezer seems to argue that it is best not to create an AI testbed at all - instead get it right the first time.
But I can think of other variations on an AI box that are more strict than human-communication, but less strict than no-test-AI-at-all. The strictest such example would be an AI simulation in which the input consisted of only the simulator and initial conditions, and the output consisted only of a single bit of data (you destroy the rest of the simulation after it has finished its run). The single bit could be enough to answer some interesting questions ("Did the AI expand to use more than 50% of the available resources?", "Did the AI maximize utility function F?", "Did the AI break simulated deontological rule R?").
Obviously these are still more dangerous than no-test-AI-at-all, but the information gained from such constructions might outweigh the risks. Perhaps if I/O is restricted to few enough bits, we could guarantee safety in some information-theoretic way.
What do people think of this? Any similar ideas along the same lines?
I'm concerned about the moral implications of creating intelligent beings with the intent of destroying them after they have served our needs, particularly if those needs come down to a single bit (or some other small purpose). I can understand retaining that option against the risk of hostile AI, but from the AI's perspective, it has a hostile creator.
I'm pondering it from the perspective that there is some chance we ourselves are part of a simulation, or that such an AI might attempt to simulate its creators to see how they might treat it. This plan sounds like unprovoked defection. If we are the kind of people who would delete lots of AIs, I don't see why AIs would not see it as similarly ethical to delete lots of us.
So just in case we are a simulated AI's simulation of its creators, we should not simulate an AI in a way it might not like? That's 3 levels of a very specific simulation hypothesis. Is there some property of our universe that suggests to you that this particular scenario is likely? For the purpose of seriously considering the simulation hypothesis and how to respond to it, we should make as few assumptions as possible.
More to the point, I think you are suggesting that the AI will have human-like morality, like taking moral cues from others, or responding to actions in a tit-for-tat manner. This is unlikely, unless we specifically program it to do so, or it thinks that is the best way to leverage our cooperation.
Personally, I would rather be purposefully brought into existence for some limited time than to never exist at all, especially if my short life was enjoyable.
I evaluate the morality of possible AI experiments in a consequentialist way. If choosing to perform AI experiments significantly increases the likelihood of reaching our goals in this world, it is worth considering. The experiences of one sentient AI would be outweighed by the expected future gains in this world. (But nevertheless, we'd rather create an AI that experiences some sort of enjoyment, or at least does not experience pain.) A more important consideration is social side-effects of the decision - does choosing to experiment in this way set a bad precedent that could make us more likely to de-value artificial life in other situations in the future? And will this affect our long-term goals in other ways?
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
I've been reading Probability Theory by E.T. Jaynes and I find myself somewhat stuck on exercise 3.2. I've found ways to approach the problem that seem computationally intractable (at least by hand). It seems like there should be a better solution. Does anyone have a good solution to this exercise, or even better, know of collection of solutions to the exercises in the book?
At this point, if you have a complete solution, I'd certainly settle for vague hints and outlines if you didn't want to type the whole thing. Thanks.
Hint: you need to use the sum rule.
The computation is quite manageable for the case of k=5. For the general case, I too was left feeling dissatisfied with the expression I found, but on reflection I'm somewhat confident it is the correct answer.
The case k=4, Ni=13, m=5 is solved numerically on a Web site which discusses probability for Poker players, that was helpful in checking my results; the answer to 3.2 is a generalization of the results given there.
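Without spoiling the derivation: the k=4, Ni=13, m=5 case mentioned above (a 5-card hand containing all four suits) is easy to check numerically by inclusion-exclusion over the set of "missing" suits. The sketch below is my own check, not Jaynes's intended route; the function name and parameterization are mine, and it assumes Python's `math.comb`.

```python
# Probability that m draws without replacement from k "suits" of n_per_suit
# items each contain at least one item of every suit, by inclusion-exclusion
# over which suits are entirely missing from the hand.
from math import comb

def p_all_suits(k=4, n_per_suit=13, m=5):
    total = comb(k * n_per_suit, m)          # all hands of size m
    count = sum((-1) ** j * comb(k, j) * comb((k - j) * n_per_suit, m)
                for j in range(k + 1))       # hands missing exactly >= j chosen suits
    return count / total

print(p_all_suits())  # ~0.2637 for the poker case
```

This agrees with the poker-site figure referenced above, so it's a handy sanity check against whatever symbolic expression exercise 3.2 yields in the general case.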
There does not appear to be a complete collection of solutions. This site comes closest. If I were you I would avoid looking at their solution for exercise 4.1 (I'm trying to forget what little I've seen of it as I'd like to solve 4.1 under my own power), but I would also not feel bad about giving up on 4.1 if you find it difficult.
I'd be happy to discuss Jaynes further over DMs or email - though I may respond at a slow pace, as I'm working through the book as my other activities allow. I'm on chapter 6 now.