Comment author:red75
08 June 2010 10:34:42AM
0 points
[-]
Am I alone in my desire to upload as fast as possible and head off to the asteroid belt when I think about current FAI and CEV proposals? They take moral relativism to its extreme: let God decide who's right...
Comment author:red75
10 June 2010 02:10:54AM
1 point
[-]
Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that an FAI after CEV convergence will have adamantine morals by design (or it will look like it does, if the FAI is unconscious). And no one will be able to talk the FAI out of it, or no one will want to.
It seems we don't have much choice, however. Bottoms up, to the Friendly God.
If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.
Would anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?
Comment author:SilasBarta
09 June 2010 06:06:53PM
*
0 points
[-]
For those of you who have been following my campaign against the "It's impossible to explain this, so don't expect me to!" defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan's blog.
In case he deletes the entire exchange thus far (which he's been known to do when I post), here's what's transpired (paragraphing truncated):
Me: That's not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function ("rules") for his actions. Maybe he doesn't really understand it himself?
Gene: Well, if I had a silly mechanical view of human nature and thought peoples' actions came from a "generating function", I would think this was a problem.
Me: Which physical law do humans violate? What is the experimental evidence for this violation? Btw, the monk problem isn't hard. Watch this: "Hello, students. Here is why we don't touch women. Here is what we value. Here is where it falls in our value system." There you go. It didn't require a lifetime of learning to convey the reasoning the senior monk used to the junior, now, did it?
ETA: Previous remark by me was rejected by Gene for posting. He instead posted this:
Gene: Silas, you only got through one post without becoming an unbearable douche [!] this time. You had seemed to be improving.
I just tried to post this:
Me: Don't worry, I made sure the exchange was preserved so that other people can view for themselves what you consider "being an unbearable douche", or what others might call, "serious challenges to your position".
Me: If you ever want to specify how it is that human beings' actions don't come from a generating function, thereby violating physical law, I'd love to have that chat and help you flesh out the idea enough to get yourself a Nobel. However, what I think you really meant to say was that the generating function is so difficult to learn directly that lifelong practice is easy by comparison (if you were to argue the best defense of your position, that is).
Me: Can you at least agree you picked a bad example of knowledge that necessarily comes from lifelong practice? Would that be too much to ask?
Comment author:[deleted]
13 June 2010 12:09:25AM
1 point
[-]
Maybe this has been discussed before -- if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There's great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people like ourselves. (For instance, ideas like reciprocity only make sense if the things we can do to other people are similar to the things they can do to us.)
The decision function of a lone, far more powerful AI would not have this quality. So it would be very different from all human decision functions or principles. Maybe this difference should cause us to call it immoral.
Comment author:[deleted]
13 June 2010 01:07:16PM
1 point
[-]
I'm not so much arguing for this position as saying we need to address it. "Suicidal AI" is to the problem of constructing FAI as anarchism is to political theory; if you want to build something (an FAI, a good government) then, on the philosophical level, you have to at least take a stab at countering the argument that perhaps it is impossible to build it.
I'm working under the assumption that we don't really know at this point what "Friendly" means, otherwise there wouldn't be a problem to solve. We don't yet know what we want the AI to do.
What we do know about morality is that human beings practice it. So all our moral laws and intuitions are designed, in particular, for small, mortal creatures, living among other small, mortal creatures.
Egalitarianism, for example, only makes sense if "all men are created equal" is more or less a statement of fact. What should an egalitarian human make of a powerful AI? Is it a tyrant? Well, no, a tyrant is a human who behaves as if he's not equal to other humans; the AI simply isn't equal. Well, then, is the AI a good citizen? No, not really, because citizens treat each other on an equal footing...
The trouble here, I think, is that all our notions of goodness are really "what is good for a human to do." Perhaps you could extend them to "what is good for a Klingon to do" -- but a lot of moral opinions are specifically about how to treat other people who are roughly equivalent to yourself. "Do unto others as you would have them do unto you." The kind of rules you'd set for an AI would be fundamentally different from our rules for ourselves and each other.
It would be as if a human had a special, obsessive concern and care for an ant farm. You can protect the ants from dying. But there are lots of things you can't do for the ants: be an ant's friend, respect an ant, keep up your end of a bargain with an ant, treat an ant as a brother...
I had a friend once who said, "If God existed, I would be his enemy." Couldn't someone have the same sentiment about an AI?
(As always, I may very well be wrong on the Internet.)
You say, human values are made for agents of equal power; an AI would not be equal; so maybe the friendly thing to do is for it to delete itself. My question was, is it allowed to do just one or two positive things before it does this? I can also ask: if overwhelming power is the problem, can't it just reduce itself to human scale? And when you think about all the things that go wrong in the world every day, then it is obvious that there is plenty for a friendly superhuman agency to do. So the whole idea that the best thing it could do is delete itself or hobble itself looks extremely dubious. If your point was that we cannot hope to figure out what friendliness should actually be, and so we just shouldn't make superhuman agents, that would make more sense.
The comparison to government makes sense in that the power of a mature AI is imagined to be more like that of a state than that of a human individual. It is likely that once an AI had arrived at a stable conception of purpose, it would produce many, many other agents, of varying capability and lifespan, for the implementation of that purpose in the world. There might still be a central super-AI, or its progeny might operate in a completely distributed fashion. But everything would still have been determined by the initial purpose. If it was a purpose that cared nothing for life as we know it, then these derived agencies might just pave the earth and build a new machine ecology. If it was a purpose that placed a value on humans being there and living a certain sort of life, then some of them would spread out among us and interact with us accordingly. You could think of it in cultural terms: the AI sphere would have a culture, a value system, governing its interactions with us. Because of the radical contingency of programmed values, that culture might leave us alone, it might prod our affairs into taking a different shape, or it might act to swiftly and decisively transform human nature. All of these outcomes would appear to be possibilities.
It seems unlikely that an FAI would commit suicide if humans need to be protected from UAI, or if there are other threats that only an FAI could handle.
Do you ever have a day when you log on and it seems like everyone is "wrong on the Internet"? (For values of "everyone" equal to 3, on this occasion.) Robin Hanson and Katja Grace both have posts (on teenage angst, on population) where something just seems off, elusively wrong; and now SarahC suggests that "the only friendly AI may be one that commits suicide". Something about this conjunction of opinions seems obscurely portentous to me. Maybe it's just a know-thyself moment; there's some nascent opinion of my own that's going to crystallize in response.
Now that my special moment of sharing is out of the way... Sarah, is the friendly AI allowed to do just one act of good before it kills itself? Make a child smile, take a few pretty photos from orbit, save someone from dying, stop a war, invent cures for a few hundred diseases? I assume there is some integrity of internal logic behind this thought of yours, but it seems to be overlooking so much about reality that there has to be a significant cognitive disconnect at work here.
Comment author:Rain
09 June 2010 07:51:57PM
*
3 points
[-]
I've recently begun downvoting comments that are at -2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach -2 but fail to be pushed over to -3, which I'm attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don't want to be 'the one to push the button'. This is an extension of my RL policy of taking 'the last' of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, I expect others will vote it back up.
Edit: It's likely that most of the negative response to this comment centers around the phrase "regardless of my feelings about them." I now consider this to be too strong a statement with regards to my implemented actions. I do read the comment to make sure I don't consider it any good, and doubt I would perversely vote something down even if I wanted to see more of it.
Comment author:Morendil
09 June 2010 08:02:41PM
5 points
[-]
I wish you wouldn't do that, and would instead stick with the generally approved norm of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting to mean "I'd like to see more like this".
You're deliberately participating in information cascades, and thereby undermining the filtering process. As an antidote, I recommend using the anti-kibitzer script (you can do that through your Preferences page).
Comment author:Rain
09 June 2010 08:05:39PM
1 point
[-]
I wish you wouldn't do that, and would instead stick with the generally approved norm of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting to mean "I'd like to see more like this".
I disagree that that's the formula used for comments that exist within the range -2 to 2. Within that range, from what I've observed of voting patterns, it seems far more likely that the equation is related to what value the comment "should be at." If many people used anti-kibitzing, I doubt this would remain a problem.
I believe your hypothesis and decision are possibly correct, but if they are, you should expect your downvotes to often be corrected upwards again. If this doesn't happen, then you are wrong and shouldn't apply this heuristic.
I disagree that that's the formula used for comments that exist within the range -2 to 2.
Morendil doesn't say it's what actually happens, he merely says it should happen this way, and that you in particular should behave this way.
I'm using it as an excuse to overcome my general laziness with regards to voting, which has the typical pattern of one vote (up or down) per hundreds of comments read.
Comment author:taw
11 June 2010 12:27:04PM
0 points
[-]
How can I understand quantum physics? All explanations I've seen are either:
those that dumb things down too much, and deliver almost no knowledge; or
those that assume familiarity with the kind of mathematics that nobody outside physics uses, and are therefore too frustrating.
I don't think the subject is inherently difficult. For example, quantum computing and quantum cryptography can be explained to anyone with a basic clue and basic math skills. (example)
On the other hand, I haven't seen any quantum physics explanation that did even as little as reasonably explain why hbar/2 is the correct limit of uncertainty (as opposed to some other constant), and why it even has the units it has (that is, why it applies to these pairs of measurements but not to some other pairs); or what quark colors are (are they discrete, or three arbitrary orthogonal vectors on a unit sphere, or what? can you compare them between quarks in different protons?); or spins (it's obviously not about actual spinning, so how does it really work? especially with movement being relative); or how electroweak unification works (those explanations are all handwaved); etc.
Comment author:Alexandros
07 June 2010 09:51:42AM
*
3 points
[-]
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
Comment author:AlanCrowe
07 June 2010 02:01:26PM
*
1 point
[-]
I've become increasingly disillusioned with people's capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square-root of windspeed. If the wind is only blowing half speed you still get something like 70% output. You won't see people saying this directly, but the general attitude is that you only need back up for the occasional calm day when the wind doesn't blow at all.
In fact output goes as the cube of windspeed. The energy in the windstream is (1/2)mv^2, where m, the mass passing your turbine per unit time, is itself proportional to the windspeed. If the wind is at half strength, you get only 1/8 output.
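The cube law is a two-line calculation (a quick sketch in Python; the air-density and swept-area values are illustrative placeholders, not figures from the comment):

```python
# Power in a wind stream: P = 0.5 * rho * A * v**3.
# The mass passing the turbine per second is rho * A * v, and each unit
# of mass carries kinetic energy 0.5 * v**2 -- multiply them and the
# windspeed appears cubed.

def wind_power(v, rho=1.225, area=1.0):
    """Power (watts) in a wind stream of speed v (m/s)."""
    return 0.5 * rho * area * v ** 3

ratio = wind_power(5.0) / wind_power(10.0)
print(ratio)  # half windspeed gives 1/8 of the output, not ~70%
```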
Well, that is physics. Of course people suck at physics. The trouble is, the more I look at people's capacity for abstract thought, the more problems I see. When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
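In odds form this is one line of arithmetic (a minimal sketch; the 0.3 prior is an arbitrary illustrative number): posterior odds = prior odds times likelihood ratio, so a likelihood ratio of one leaves belief exactly where it started.

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability via the odds form of Bayes' theorem."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(bayes_update(0.3, 1.0))  # likelihood ratio 1: posterior ~= prior
print(bayes_update(0.3, 4.0))  # evidence with LR 4 raises the probability
```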
When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits.
Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
Comment author:AlanCrowe
01 July 2010 04:00:20PM
1 point
[-]
I've got a better link. David Henderson catches a professor of economics getting costs and benefits confused in a published book. Henderson's review is on page 54 of Regulation, and my viewer puts it on the ninth page of the PDF that Henderson links to.
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Comment author:Emile
07 June 2010 12:46:05PM
3 points
[-]
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
Comment author:Vladimir_M
07 June 2010 07:09:23PM
*
4 points
[-]
Emile:
Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
These claims would be beyond the border of lunacy for any person, but still, I'm sure you'll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard's point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism: http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
Comment author:Clippy
07 June 2010 07:20:19PM
0 points
[-]
Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism:
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
Comment author:JoshuaZ
07 June 2010 07:32:14PM
3 points
[-]
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they claim at some points that relativity allows them to use whatever reference frame they choose, and at other points that the evidence only makes sense for geocentrism.
Comment author:Jack
08 June 2010 03:04:13AM
1 point
[-]
He had a teaching position at a reputable-looking college, and I figure they would have checked.
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he's young enough to make me think that he may have been forced into retirement.
Comment author:[deleted]
13 June 2010 12:37:11PM
1 point
[-]
If no people believe Y -- literally no people -- then either the topic is very little examined by human beings, or it's very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.
In the first case, only X believers exist because only X believers have yet considered the issue. That's minimal evidence in favor of X.
In the second case, lots of people have heard of the issue; if there were a decent case against X, somebody would have thought of it. The fact that none of them -- not a minority, but none -- argued against X is strong evidence that X is true.
Comment author:Jack
08 June 2010 01:05:43AM
1 point
[-]
I don't think belief has a consistent evidentiary strength since it depends on the testifier's credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility that me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I'm carrying. I don't see any relation that could be described as baseline so the only answer is: context.
Comment author:Rain
17 June 2010 07:23:26PM
5 points
[-]
Sock puppet accounts aren't appreciated, mwaser, especially when you keep plugging the same blog. Comments about those links have received at least 28 downvotes already, just in this Open Thread.
Summary: Even if you agree that trees normally make vibrations when they fall, you're still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the "standard" debate over a famous philosophical dilemma: "If a tree falls in a forest and no one hears it, does it make a sound?" (Call this "Question Y.") Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between "sound as vibration" and "sound as auditory perception in one's mind", and that the standard (naive) debate relies on two parties assuming different definitions, leading to a pointless argument. Obviously, it makes a sound in the first sense but not the second, right?
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically,
If a tree falls, and no human (or human-entangled[1] sensor) is around to hear it, does it still make vibrations? On what basis do you believe this, lacking a way to directly check? (Call this "Question S".)
Now, if you're a regular on this site, you will find that question easy to answer. But before going into my exposition of the answer, I want to point out some errors that Question S does not make.
For one thing, it does not equivocate between two meanings of sound -- there, sound is taken to mean only one thing: the vibrations.
Second, it does not reduce to a simple question about anticipation of experience. In Question Y, the disputants can run through all observations they anticipate, and find them to be the same. However, if you look at the same cases in Question S, you don't resolve the debate so easily: both parties agree that by putting a tape-recorder by the tree, you will detect vibrations from the tree falling, even if people aren't around. But Question S instead specifically asks about what goes on when these kinds of sensors are not around, rendering such tests unhelpful for resolving such a disagreement.
So how do you go about resolving Question S? Yudkowsky gave a model for how to do this in Belief in the Implied Invisible, and I will do something similar here.
Complexity of the hypothesis
First, we observe that, in all cases where we can make a direct measurement, trees make vibrations when they fall. And we're tasked with finding out whether, specifically in those cases where a human (or appropriate organism with vibration sensitivity in its cognition) will never make a measurement of the vibrations, the vibrations simply don't happen. That is, when we're not looking -- and never intend to look -- trees stop the "act" and don't vibrate.
The complexity this adds to the laws of physics is astounding and may be hard to appreciate at first. This belief would require us to accept that nature has some way of knowing which signals will eventually reach a cognitive system and inform it that vibrations have happened. It must selectively modify material properties in precisely defined scenarios. It must have a precise definition of what counts as a tree.
Now, if this actually happens to be how the world works, well, then all the worse for our current models! However, each bit of complexity you add to a hypothesis reduces its probability and so must be justified by observations with a corresponding likelihood ratio -- that is, the ratio of the probability of the observation happening if this alternate hypothesis is true, compared to if it were false. By specifying the vibrations' immunity to observation, the log of this ratio is zero, meaning observations are stipulated to be uninformative, and unable to justify this additional supposition in the hypothesis.
[1] You might wonder how someone my age in '89-'91 would come up with terms like "human-entangled sensor", and you're right: I didn't use that term. Still, I considered the use of a tape recorder that someone will check to be a "someone around to hear it", for purposes of this dilemma. Least Convenient Possible World and all...
Comment author:mwaser
14 June 2010 02:31:34AM
-1 points
[-]
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
I'm not sure what effect this has on your argument, but it's something I think you're missing.
Comment author:SilasBarta
14 June 2010 02:43:38AM
*
3 points
[-]
I'm familiar with this: entanglement between the environment and the quantum system affects the outcome, but nature doesn't have a special law that distinguishes human entanglement from non-human entanglement (as far as we know, given Occam's Razor, etc.), which the alternate hypothesis would require.
The error that early quantum scientists made was in failing to recognize that it was the entanglement with their measuring devices that affected the outcome, not their immaterial "conscious knowledge". As EY wrote somewhere, they asked,
"The outcome changes when I know something about system -- what difference should that make?"
when they should have asked,
"The outcome changes when I establish more mutual information with the system -- what different should that make?"
In any case, detection of vibration does not require sensitivity to quantum-specific effects.
Comment author:JoshuaZ
14 June 2010 02:38:36AM
2 points
[-]
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
Not really. This is only the case for certain interpretations of what is going on, such as certain forms of the Copenhagen interpretation. Even then, observation in this context doesn't really mean observe in the colloquial sense, but something closer to interact with another particle under a certain class of conditions. The notion that you seem to be conflating this with is the idea that consciousness causes collapse. Not many physicists take that idea at all seriously. In most versions of the many-worlds interpretation, one doesn't need to say anything about observations triggering anything (or at least can talk about everything without talking about observations).
Disclaimer: My knowledge of QM is very poor. If someone here who knows more spots anything wrong above please correct me.
Comment author:MugaSofer
22 January 2013 09:58:17AM
-2 points
[-]
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically,
If a tree falls, and no human (or human-entangled[1] sensor) is around to hear it, does it still make vibrations? On what basis do you believe this, lacking a way to directly check? (Call this "Question S".)
Me too! It was actually explained that way to me by my parents as a kid, in fact. I wonder if there are two subtly different versions floating around or EY just interpreted it uncharitably.
I think that if this post is left as it is, it would be too trivial to be a top-level post. You could reframe it as a beginners' guide to Occam's razor, or you could make it more interesting by going deeper into some of the issues. (If you can think of anything more to say on the topic of differentiating between hypotheses that make the same predictions, that might be interesting, although I think you might have said all there is to say.)
It could also be framed as an issue of making your beliefs pay rent, similar to the dragon in the garage example - or perhaps as an example of how reality is entangled with itself to such a degree that some questions that seem to carve reality at the joints don't really do so.
(If falling trees don't make vibrations when there's no human-entangled sensor, how do you differentiate a human-entangled sensor from a non-human-entangled sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf litter that sufficiently sensitive human-entangled sensors can detect, does leaf litter then count as a human-entangled sensor? How about if certain plants or animals have observably evolved to handle falling-tree vibrations in a certain way, and we can detect that? Then such plants or animals (or their absence, if we're able to form a strong enough theory of evolution to notice the absence of such reactions where we would expect them) could count as human-entangled sensors well before humans even existed. In that case, is there anything that isn't a human-entangled sensor?)
Comment author:SilasBarta
13 June 2010 07:17:37PM
3 points
[-]
Good points in the parenthetical -- if I make it into a top-level article, I'll be sure to include a more thorough discussion of what concept is being carved with the hypothesis that there are no tree vibrations.
Comment author:kodos96
10 June 2010 07:06:10PM
*
4 points
[-]
And modulo all the forensic evidence.
Obviously this is breaking news and it's too soon to draw a conclusion, but at first blush this sounds like just another attention seeker, like those who always pop up in these high profile cases. If he really can produce a knife, and it matches the wounds, then maybe I'll reconsider, but at the moment my BS detector is pegged.
Of course, it's still orders of magnitude more likely than Knox and Sollecito being guilty.
Comment author:komponisto
11 June 2010 04:20:20AM
*
4 points
[-]
Seconding kodos96. As this would exonerate not only Knox and Sollecito but Guede as well, it has to be treated with considerable skepticism, to say the least.
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread.
Still, the Aviello story is enough of a surprise to marginally lower my probability of Guede's guilt. My current probabilities of guilt are:
Knox: < 0.1 % (i.e. not a chance)
Sollecito: < 0.1 % (likewise)
Guede: 95-99% (perhaps just low enough to insist on a debunking of the Aviello testimony before convicting)
It's probably about time I officially announced that my revision of my initial estimates for Knox and Sollecito was a mistake, an example of the sin of underconfidence.
Finally, I'd like to note that the last couple of months have seen the creation of a wonderful new site devoted to the case, Injustice in Perugia, which anyone interested should definitely check out. Had it been around in December, I doubt that I could have made my survey seem like a fair fight between the two sides.
Comment author:kodos96
11 June 2010 07:21:12PM
1 point
[-]
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread. Still, the story is enough of a surprise to marginally lower my probability of Guede's guilt.
I hadn't heard about this - I just read your link though, and maybe I'm missing something, but I don't see how it lowers the probability of Guede's guilt. He (supposedly) confessed to having been at the crime scene, and said that Knox and Sollecito weren't there. How does that, if true, exonerate Guede?
Comment author:komponisto
11 June 2010 09:57:04PM
*
2 points
[-]
You omitted a crucial paragraph break. :-)
The Aviello testimony would exonerate Guede (and hence is unlikely to be true); the Alessi testimony is essentially consistent with everything else we know, and isn't particularly surprising at all.
Comment author:Eneasz
09 June 2010 08:41:05PM
10 points
[-]
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.
It is a good blog, and it has a slightly wider topic spread than LW, so even if you're familiar with most of the standard failures of judgment there'll be a few new things worth reading. (I found the "introducing fines can actually increase a behavior" post particularly good, as I wasn't aware of that effect.)
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes?
I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tendentiously ideological. I really do want to memorize (nearly) all of these justifications, so that I can be sure to pass the exam and continue my career as a rationalist lawyer, but I don't want the pattern of thought used by the justifications to become a part of my pattern of thought.
Comment author:Jordan
13 June 2010 10:55:07PM
3 points
[-]
I worry about this as well when I'm reading long arguments or long works of fiction presenting ideas I disagree with. My tactic is to stop occasionally and go through a mental dialog simulating how I would respond to the author in person. This serves a double purpose, as hopefully I'll have better cached arguments in the event I ever need them.
Of course, this is a dangerous tactic as well, because you may be shutting off critical reasoning applied to your preexisting beliefs. I only apply this tactic when I'm very confident the author is wrong and is using fallacious arguments. Even then I make sure to spend some amount of time playing devil's advocate.
I would not worry overmuch about the long-term negative effects of your studying for the bar: with the possible exception of the "overly sincere" types who fall very hard for cults and other forms of indoctrination, people have a lot of antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you can do things, like read works of social science that carve reality at the joints, to speed up the rate at which your continued entanglement with reality will cancel out any falsehoods you have to cram for now. Specifically, there are works about the law that do carve reality at the joints -- Nick Szabo's online writings IMO fall in that category. Nick has a law degree, by the way, and there is certainly nothing wrong with his ability to perceive reality correctly.
ADDED. The things that are really damaging to a person's rationality, IMHO, are natural human motivations. Suppose, for example, that when you start practicing you decide to do a lot of trials, and you learn to derive pleasure -- to get a real high -- from the combative and adversarial part of that, so that the high you get from winning with a slick and misleading angle trumps the high you get from satisfying your curiosity and from refining and finding errors in your model of reality. I would worry about that a lot more than about your throwing yourself fully into winning on this exam, because IMHO the things we derive no pleasure from, but do to achieve some end we care about (like advancing in our career by getting a credential), have a lot less influence on who we turn out to be than the things we do because we find them intrinsically rewarding.
One more thing: we should not all make our living as computer programmers. That would make the community less robust than it otherwise would be :)
It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does.
If this worked, Harry could use it to recover any sort of answer that was easy to check but hard to find. He wouldn't have just shown that P=NP once you had a Time-Turner, this trick was more general than that. Harry could use it to find the combinations on combination locks, or passwords of every sort. Maybe even find the entrance to Slytherin's Chamber of Secrets, if Harry could figure out some systematic way of describing all the locations in Hogwarts. It would be an awesome cheat even by Harry's standards of cheating.
Harry took Paper-2 in his trembling hand, and unfolded it.
Paper-2 said in slightly shaky handwriting:
DO NOT MESS WITH TIME
Harry wrote down "DO NOT MESS WITH TIME" on Paper-1 in slightly shaky handwriting, folded it neatly, and resolved not to do any more truly brilliant experiments on Time until he was at least fifteen years old.
To put this into my own words "The more information you extract from the future, the less you are able to control the future from the past. And hence, the less understanding you can have about what those bits of future-generated information are actually going to mean."
I wrote that before actually looking at the paper you linked. I don't understand much QM either, but now that I have looked it seems to me that figure 2 of the paper backs me up on my interpretation of Harry's experiment.
Comment author:Baughn
05 April 2011 08:05:02PM
4 points
[-]
Even if it's written by Eliezer, that's still generalizing from fictional evidence. We don't know what the laws of physics are supposed to be there.
Well. You probably can't use time-travel to get infinite computing power. But that's not to say you can't get strictly finite power out of it; in Harry's case, his experiment would probably have worked just fine if he'd been the sort of person who'd refuse to write "DO NOT MESS WITH TIME".
Comment author:cousin_it
05 April 2011 08:14:23PM
*
1 point
[-]
Playing chicken with the universe, huh? As long as scaring Harry is easier than solving his homework problem, I'd expect the universe to do the former :-) Then again, you could make a robot use the Time-Turner...
Comment author:Wei_Dai
13 June 2010 06:15:35PM
*
5 points
[-]
While searching for literature on "intuition", I came upon a book chapter that gives "the state of the art in moral psychology from a social-psychological perspective". This is the best summary I've seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.
ETA: Here's the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.) Handbook of Social Psychology, 5th Edition. Hoboken, NJ: Wiley. Pp. 797-832.
Comment author:Will_Newsome
24 January 2011 11:52:52AM
*
-1 points
[-]
[T]o avoid that trivial inconvenience, I've put up a mirror of it.
You're awesome.
I've previously been impressed by how social psychologists reason, especially about identity. Schemata theory is also a decent language for talking about cognitive algorithms from a less cognitive sciencey perspective. I look forward to reading this chapter. Thanks for mirroring, I wouldn't have bothered otherwise.
Comment author:RobinZ
13 June 2010 05:47:45PM
1 point
[-]
Beautiful. Matthew Yglesias, +1 point.
It is entirely possible that some social groups are experiencing the kind of changes that Flanagan describes, but as Yglesias says, she apparently is unaware that there is such a thing as scientific evidence on the question.
Comment author:Rain
13 June 2010 02:35:25PM
*
2 points
[-]
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
Comment author:Blueberry
13 June 2010 05:11:59PM
2 points
[-]
From that Wikipedia article:
Inside the railcar, besides the paper clips, there are the Schroeders’ book and a suitcase filled with letters of apology to Anne Frank by a class of German schoolchildren.
Apologizing for ... being German? That's really bizarre.
Comment author:LucasSloan
13 June 2010 06:49:10PM
2 points
[-]
Apologizing for ... being German? That's really bizarre.
Not really. Most cultures go funny in the head around the Holocaust. It is, for some reason, considered imperative that 10th graders in California spend more time being made to feel guilty about the Holocaust than learning about the actual politics of the Weimar Republic.
Cultures can also be very weird about how they treat schoolchildren. The kids weren't responsible for any part of the Holocaust, and they're theoretically apologizing to someone who can't hear it.
I can see some point in all this if you believe that Germans are especially apt to genocide (I have no strong opinion about this) and need to keep being reminded not to do it. Still, if this sort of apology is of any use, I'd take it more seriously if it were done spontaneously by individuals.
Comment author:cupholder
13 June 2010 03:22:46AM
*
3 points
[-]
I think I found the study they're talking about thanks to this article. I might take a look at it - if the methodology is literally just 'smoking was banned, then the heart attack rate dropped', that sucks.
(Edit to link to the full study and not the abstract.)
Just skimmed it. The methodology is better than that. They use a regression to adjust for the pre-existing downward trend in the heart attack hospital admission rate; they represent it as a linear trend, and that looks fair to me based on eyeballing the data in figures 1 and 2. They also adjust for week-to-week variation and temperature, and the study says its results are 'more modest' than others', and fit the predictions of someone else's mathematical model, which are fair sanity checks.
I still don't know how robust the study is - there might be some confounder they've overlooked that I don't know enough about smoking to think of - but it's at least not as bad as I expected. The authors say they want to do future work with a better data set that has data on whether patients are active smokers, to separate the effect of secondhand smoke from active smoking. Sounds interesting.
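To make the trend-adjustment idea concrete, here's a toy sketch (my own simplification with synthetic data, not the study's exact regression model): fit a linear trend to the pre-ban admission rates, extrapolate it past the ban date, and estimate the ban's effect as the average gap between the extrapolated trend and the observed post-ban rates.

```python
# Toy interrupted-time-series sketch: pre-existing downward trend plus a
# drop at the ban. Data and effect size here are invented for illustration.

def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Synthetic weekly admission counts: downward trend of 0.5/week,
# plus an extra drop of 8 after the ban at week 50.
pre  = [(t, 100 - 0.5 * t) for t in range(50)]
post = [(t, 100 - 0.5 * t - 8) for t in range(50, 100)]

a, b = linear_fit([t for t, _ in pre], [y for _, y in pre])
gap = sum((a + b * t) - y for t, y in post) / len(post)
print(round(gap, 2))  # recovers the ban effect of 8
```

The real study also adjusts for week-to-week variation and temperature, which this sketch leaves out; without that extrapolation step, you'd misattribute the pre-existing decline to the ban.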
Comment author:JoshuaZ
12 June 2010 09:54:23PM
*
2 points
[-]
I agree that this article isn't very good. It makes the standard mistake of combining a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil's fairly dubious ideas about nutrition and health. The article also uses Andrew Orlowski as a serious critic of the Singularity, making unsubstantiated claims about how the Singularity will only help the rich. Given that Orlowski's entire approach is to criticize anything remotely new or weird-seeming, I'm disappointed that the NYT would use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all.
Comment author:ata
12 June 2010 10:50:01PM
*
9 points
[-]
I'm starting to think SIAI might have to jettison the "singularity" terminology (for the intelligence explosion thesis) if it's going to stand on its own. It's a cool word, and it would be a shame to lose it, but it's become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. "If you are feeling brave, you can approach a stranger in the street and speak your message!" Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn't do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
Comment author:JoshuaZ
14 June 2010 05:24:13AM
*
2 points
[-]
I'm not sure that your criticism completely holds water. Simply put, Friendly AI is a worry that has only convinced some Singularitarians. One might not be deeply concerned about it (possible example reasons: 1) you expect uploading to come well before general AI; 2) you think the probable technical path to AI will force many more stages of AI of much lower intelligence, which will be likely to give us good data for solving the problem).
I agree that this Facebook group does look very much like something one would expect out of a missonizing religion. This section in particular looked like a caricature:
To raise awareness of the Singularity, which is expected to occur no later than the year 2045, we must reach out to everyone on the 1st day of every month.
At 20:45 hours (8:45pm) on the 1st day of each month we will send SINGULARITY MESSAGES to friends or strangers.
Example message:
"Nanobot revolution, AI aware, technological utopia: Singularity2045."
The certainty for 2045 is the most glaring aspect of this aside from the pseudo-missionary aspect. Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is listed as an administrator.
But one should remember that reversed stupidity is not intelligence. Moreover, there's a reason that missionaries sound like this: they have a very high confidence in their correctness. If you had a similarly high confidence in the probability of a Singularity event, and you thought that the event was more likely to occur safely, and to occur soon, if more people were aware of it, and you bought into something like the galactic colonization argument, and you believed that sending messages like this has a high chance of making people aware and getting them to take you seriously, then this is a reasonable course of action. Now, that's a lot of premises, some with high likelihoods and others with very low ones. Obviously there's a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder if there's any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (though if that were the case I'd expect them to hit on an actually useful method of reproduction). And in fairness, they may just be using a general model of how one goes about raising awareness for a cause. For some causes, simple, frequent appeals to emotion are likely an effective method (for example, in making people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So the primary mistake may just be using the wrong model of how to communicate with people.
Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if less comprehensive
Part of what I'm assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn't seem likely to me that a bio-tech disaster could wipe out the human race, it could cause huge damage-- I'm imagining diseases aimed at monoculture crops, or plagues as the result of terrorism or incompetent experiments.
My other assumptions are that FAI research is dependent on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to be highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I'm not sure where you'd start to prevent biotech disasters.
I already wrote a top-level comment about the original raw text version of this, but my access logs suggested that EDITs of older comments only reach a very few people. See that comment for a bit more detail.
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren't there a lot of intelligent species?
Comment author:taw
11 June 2010 12:52:42PM
1 point
[-]
Somewhat accepted partial answer is that huge brains are ridiculously expensive - you need a lot of high energy density food (= fire), a lot of DHA (= fish) etc. Chimp diet simply couldn't support brains like ours (and aquatic ape etc.), nor could they spend as much time as us engaging in politics as they were too busy just getting food.
Perhaps chimp brains are as big as they could possibly be given their dietary constraints.
That's conceivable, and might also explain why wolves, crows, elephants, and other highly social animals aren't as smart as people.
Also, I think the original bit in Methods of Rationality overestimates how easy it is for new ideas to spread. As came up recently here, even if tacit knowledge can be explained, it usually isn't.
This means that if you figure out a better way to chip flint, you might not be able to explain it in words, and even if you can, you might choose to keep it as a family or tribal secret. Inventions could give their inventors an advantage for quite a long time.
Five-second guess: Human-level Machiavellian intelligence needs language facilities to co-evolve with; grunts and body language don't allow nearly as convoluted schemes. Evolving some precursor form of human-style language is the improbable part that other species haven't managed to pull off.
Comment author:hegemonicon
10 June 2010 02:40:51PM
*
8 points
[-]
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher-level values we pay lip service to. Group values are the ones we believe are our ‘real’ values, the ones that form the backbone of our ethics, the ones we signal to others at great expense. But actually having these values is tricky from an evolutionary standpoint – strategically, you’re much better off being selfish than generous, being two-faced than loyal, and furthering your own gains at everyone else’s expense.
So humans are in a pickle – it’s beneficial for them to form groups to solve their problems and increase their chances of survival, but it’s also beneficial for individuals to be selfish and mooch off the goodwill of the group. Because of this, we have sophisticated machinery called ’suspicion’ to ferret out any liars or cheaters furthering their own gains at the group’s expense. Of course, evolution is an arms race, so it’s looking for a method to overcome these mechanisms, for ways it can fulfill its base desires while still appearing to support the group.
It accomplished this by implementing willpower. Because deceiving others about what we believe would quickly be uncovered, we don’t actually deceive them – we’re designed so that we really, truly, in our heart of hearts believe that the group-supporting values – charity, nobility, selflessness – are the right things to do. However, we’re only given a limited means to accomplish them. We can leverage our willpower to overcome the occasional temptation, but when push comes to shove – when that huge pile of money or that incredible opportunity or that amazing piece of ass is placed in front of us, willpower tends to fail us. Willpower is generally needed for the values that don’t further our evolutionary best interests – you don’t need willpower to run from danger or to hunt an animal if you’re hungry or to mate with a member of the opposite sex. We have much better, much more successful mechanisms that accomplish those goals. Willpower is designed so that we really do want to support the group, but wind up failing at it and giving in to our baser desires – the ones that will actually help our genes get replicated.
Of course, the maladaption comes into play due to the fact that we use willpower to try to accomplish other, non-group related goals – mostly the long-term, abstract plans we create using high-level, conscious thinking. This does appear to be a design flaw (though since humans are notoriously bad at making long-term predictions, it may not be as crippling as it first appears.)
Comment author:ciphergoth
10 June 2010 10:11:39AM
2 points
[-]
What solution do people prefer to Pascal's Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger".
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
The utility function assumes that you play the "game" (situation, whatever) an infinite number of times and then find the net utility.
This isn't right. The way utility is normally defined, if outcome X has 10 times the utility of outcome Y for a given utility function, agents behaving in accord with that function will be indifferent between certain Y and a 10% probability of X. That's why they call expected utility theory a theory of "decision under uncertainty." The scenario you describe sounds like one where the payoffs are in some currency such that you have declining utility with increasing amounts of the currency.
Comment author:Houshalter
11 June 2010 06:08:34PM
-1 points
[-]
The scenario you describe sounds like one where the payoffs are in some currency such that you have declining utility with increasing amounts of the currency.
Uh, no. All right, let's say I give you a 1-in-10 chance at winning 10 times everything you own, but the other 9 times you lose everything. The net utility for accepting is the same as for not accepting, yet that's completely ignoring the fact that if you do enter, 90% of the time you lose everything, no matter how high the reward is.
As Thom indicates, this is exactly what I was talking about: ten times the stuff you own, rather than ten times the utility. Since utility is just a representation of your preferences, the 1 in 10 payoff would only have ten times the utility of your current endowment if you would be willing to accept this gamble.
Comment author:thomblake
11 June 2010 06:51:58PM
1 point
[-]
That's only true if "everything you own" is cast in terms of utility, which is not intuitive. Normally, "everything you own" would be in terms of dollars or something to that effect, and ten times the number of dollars I have is not worth 10 times the utility of those dollars.
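To put numbers on thomblake's point, here's a small sketch. The sqrt utility function and the $10,000 of wealth are my illustrative choices, not anyone's claim about real preferences; the point is just that a gamble that's fair in dollars can be a clear loss in utility under any concave (declining-marginal-utility) function.

```python
import math

# Houshalter's bet: 10% chance of 10x your wealth, 90% chance of losing
# it all. Fair in dollars, but not in utility if u is concave.

wealth = 10_000.0
u = math.sqrt  # illustrative concave utility function

expected_dollars = 0.1 * (10 * wealth) + 0.9 * 0.0
expected_utility = 0.1 * u(10 * wealth) + 0.9 * u(0.0)

print(math.isclose(expected_dollars, wealth))  # True: fair in dollars
print(expected_utility < u(wealth))            # True: reject the bet
```

If a gambler *would* accept the bet at these odds, that just means the winning outcome really does carry ten times the utility of their current endowment for them, which is komponisto's point about utility being defined from preferences.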
Comment author:cupholder
10 June 2010 01:29:05PM
*
1 point
[-]
Tom_McCabe2 suggests generalizing EY's rebuttal of Pascal's Wager to Pascal's Mugging: it's not actually obvious that someone claiming they'll destroy 3^^^^3 people makes it more likely that 3^^^^3 people will die. The claim is arguably such weak evidence that it's still about equally likely that handing over the $5 will kill 3^^^^3 people, and if the two probabilities are sufficiently equal, they'll cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a) threatening me with killing 3^^^^3 people, (b) having the ability to do so, and (c) not going ahead and killing the people anyway after I give them the $5, is going to be way less than 1/3^^^^3, so the expected utility of giving the mugger the $5 is almost certainly less than the $5 of utility I get by hanging on to it. In which case there is no problem to fix. EY claims that the Solomonoff-calculated probability of someone having 'magic powers from outside the Matrix' 'isn't anywhere near as small as 3^^^^3 is large,' but to me that just suggests that the Solomonoff calculation is too credulous.
(Edited to try and improve paraphrase of Tom_McCabe2.)
Comment author:ciphergoth
10 June 2010 11:07:32PM
1 point
[-]
This seems very similar to the "reference class fallback" approach to confidence set out in point 2, but I prefer to explicitly refer to reference classes when setting out that approach, otherwise the exactly even odds you apply to massively positive and massively negative utility here seem to come rather conveniently out of a hat...
The unbounded utility function (in some physical objects that can be tiled indefinitely) in Pascal's mugging gives infinite expected utility to all actions, and no reason to prefer handing over the money to any other action. People don't actually show the pattern of preferences implied by an unbounded utility function.
If we make the utility function a bounded function of happy lives (or other tilable physical structures) with a high bound, other possibilities will offer high expected utility. The Mugger is not the most credible way to get huge rewards (investing in our civilization on the chance that physics allows unlimited computation beats the Mugger). This will be the case no matter how huge we make the (finite) bound.
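A toy calculation of how a bound defuses the mugging. All the numbers here are illustrative stand-ins (10**100 lives for "3^^^^3", a made-up probability for the mugger's honesty, a made-up bound), and the saturating-exponential shape of the bounded utility function is my choice, not anything from the thread:

```python
import math

BOUND = 1e12  # cap on utility, in "life-equivalents" (illustrative)

def bounded_u(lives):
    # Saturating utility: approaches BOUND as lives grows without limit.
    return BOUND * (1 - math.exp(-lives / BOUND))

p_mugger = 1e-20             # assumed chance the mugger can deliver
lives_offered = 10.0 ** 100  # stand-in for 3^^^^3 (vastly larger)

eu_pay = p_mugger * bounded_u(lives_offered)  # at most p_mugger * BOUND
eu_keep_five_dollars = 1.0                    # modest sure utility of $5

print(eu_pay < eu_keep_five_dollars)  # True: the mugging fails
```

However large the mugger's number, `bounded_u` can't exceed BOUND, so the expected utility of paying is capped at `p_mugger * BOUND` and a modest sure payoff wins.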
Comment author:ciphergoth
10 June 2010 11:04:51PM
1 point
[-]
Bounding the utility function definitely solves the problem, but it brings a couple of problems of its own. One is the principle that the utility function is not up for grabs; the other is that a bounded utility function has some rather nasty consequences of the "leave one baby on the track" kind.
Comment author:CarlShulman
11 June 2010 03:27:59AM
*
3 points
[-]
One is the principle that the utility function is not up for grabs,
I don't buy this. Many people have inconsistent intuitions regarding aggregation, as with population ethics. Someone with such inconsistent preferences doesn't have a utility function to preserve.
Also note that a bounded utility function can allot some of the potential utility under the bound to producing an infinite amount of stuff, and that as a matter of psychological fact the human emotional response to stimuli can't scale indefinitely with bigger numbers.
And, of course, allowing unbounded growth of utility with some tilable physical process means that process can dominate the utility of any non-aggregative goods, e.g. the existence of at least some instantiations of art or knowledge, or overall properties of the world like the ratio of very good lives to lives just barely worth living/creating (although you might claim that the value of the last scales with population size, many wouldn't characterize it that way).
Bounded utility functions seem to come much closer to letting you represent actual human concerns, or to represent more of them, in my view.
Eliezer's original article bases its argument on the use of Solomonoff induction. He even suggests up front what the problem with it is, although the comments don't make anything of it: SI is based solely on program length and ignores computational resources. The optimality theorems around SI depend on the same assumption. Therefore I suggest:
4. Pascal's Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how much work it does, should get its priors from instead would require more thought than a few minutes on a lunchtime break.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don't bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
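Here's what "all the relevant calculations" might look like, with hypothetical numbers for the jackpot and tickets sold. If K other winners share your jackpot and each of n other tickets hits independently with probability q, then E[1/(1+K)] for K ~ Binomial(n, q) has the standard closed form (1 - (1-q)**(n+1)) / ((n+1)*q):

```python
# Expected value of a EUR 1 ticket when the jackpot is shared among
# winners. J and n are hypothetical; q comes from C(50,5)*C(9,2).

q = 1 / 76_275_360
J = 120e6          # hypothetical rolled-over jackpot, in euros
n = 40_000_000     # hypothetical number of other tickets sold

expected_share = (1 - (1 - q) ** (n + 1)) / ((n + 1) * q)
ev_per_ticket = q * J * expected_share

print(round(ev_per_ticket, 3))
```

With these made-up figures the ticket has positive expected value in euros, which is exactly where the sublinear-utility question bites: a tiny chance of a huge prize is worth less in utilons than its euro expectation. (Choosing unpopular numbers, as in the scenario, amounts to pushing `expected_share` toward 1.)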
OK, I have a question! Suppose I hold a risky asset that costs me c at time t, and whose value at time t is predicted to be k * (1 + r), with standard deviation s. How can I calculate the length of time that I will have to hold the asset in order to rationally expect the asset to be worth, say, 2c with probability p?
I am not doing a finance class or anything; I am genuinely curious.
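One hedged way to answer this, under a model the question doesn't actually specify: assume the asset follows geometric Brownian motion with annual drift mu and volatility sigma, so log(V_t / c) is Normal((mu - sigma**2/2) * t, sigma**2 * t), and bisect for the holding time t at which P(V_t >= 2c) reaches p. The mu and sigma values below are hypothetical:

```python
import math
from statistics import NormalDist

def prob_double(t, mu, sigma):
    """P(asset at least doubles by time t) under lognormal returns."""
    m = (mu - sigma ** 2 / 2) * t
    sd = sigma * t ** 0.5
    return 1 - NormalDist(m, sd).cdf(math.log(2))

def time_to_double(p, mu, sigma, hi=200.0):
    lo = 1e-9
    for _ in range(100):  # bisection on t (prob_double is increasing in t)
        mid = (lo + hi) / 2
        if prob_double(mid, mu, sigma) < p:
            lo = mid
        else:
            hi = mid
    return hi

t = time_to_double(p=0.5, mu=0.08, sigma=0.2)
print(round(t, 1))  # years until P(doubling) reaches 0.5, here ~11.6
```

For p = 0.5 the answer reduces to log(2) / (mu - sigma**2/2), since the median of the lognormal is driven by the drift term alone; the bisection only earns its keep for other values of p.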
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
That article makes it sound like "countersignaling" is forgoing a mandated signal
I said "standard" because game theory doesn't talk about mandates, but that's pretty much what I said, isn't it? If you disagree with that usage, what do you think is right?
Incidentally, in von Neumann's model of poker, you should raise when you have a good hand or a poor hand, and check when you have a mediocre hand, which looks kind of like countersignaling. Of course, the information transference that yields the name "signal" is rather different. Also, I'm not interested in applications of game theory to hermetically sealed games.
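A toy Monte Carlo illustration of that structure (the thresholds, bet size, and fixed calling strategy below are arbitrary choices of mine, not von Neumann's solution): in a simplified one-bet poker with uniform [0,1] hands, a player who bets both her best and her very worst hands does better against a fixed caller than one who bets only the best hands.

```python
import random

def play(x, y, bets, call_thresh, b=1.0):
    """Net result for player 1: each antes 1; player 1 may bet b, player 2 calls or folds."""
    if not bets(x):                     # check -> showdown for the antes
        return 1.0 if x > y else -1.0
    if y < call_thresh:                 # player 2 folds; player 1 takes the ante
        return 1.0
    return (1.0 + b) if x > y else -(1.0 + b)

def ev(bets, call_thresh, n=200_000, seed=0):
    """Monte Carlo expected value of the betting strategy for player 1."""
    rng = random.Random(seed)
    return sum(play(rng.random(), rng.random(), bets, call_thresh)
               for _ in range(n)) / n

honest = lambda x: x > 0.7              # bet only strong hands
bluffy = lambda x: x > 0.7 or x < 0.1   # also bet the very worst hands
```

Against a caller who calls with y >= 0.6, ev(bluffy, 0.6) comes out about 0.07 higher than ev(honest, 0.6): the worst hands lose at showdown almost surely, so turning them into bluffs that sometimes steal the ante is an improvement, which is the raise-with-good-or-poor-hands shape of the equilibrium.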
Try it out, guys! LongBets and PredictionBook are good, but they're their own niche; LongBets won't help you with pundits who don't use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Comment author:Houshalter
13 June 2010 02:22:07AM
0 points
[-]
Am I correct in reading that LongBets charges a $50 fee for publishing a prediction, and that predictions have to be a minimum of 2 years in the future? That's a bit harsh. But these sites are pretty interesting, and they could be useful too. You could judge the accuracy of different users, including how accurate they are at guessing long-term vs. short-term predictions, as well as how accurate they are in different categories (or just how accurate they are on average, if you want to keep it simple). Then you can create a fairly decent picture of the future, albeit I expect many of the predictions will contradict each other. This is kind of what they're already doing, obviously, but they could still take it a step further.
Comment author:Morendil
09 June 2010 04:06:51PM
*
7 points
[-]
Less Wrong Book Club and Study Group
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I'm willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our intention is to form a self-study group composed of peers, working with the assistance of a facilitator - but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.
Our first text will be E. T. Jaynes's Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.
We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spend a few hours per week reading through the section assigned or doing the exercises. A few days from now the first section will be posted.
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm overcomplicating?
Comment author:Alexandros
09 June 2010 11:08:07AM
2 points
[-]
I think the difference is that one event is a statement about the present which is either presently true or not, and the other is a prediction. So you could illustrate the difference by using the following pairs: P(Mom on phone now) vs. P(Mom on phone tomorrow at 12:00am). In the dice case P(die just rolled but not yet examined is 1) vs. P(die I will roll will come out 1).
I do agree with Oscar though, the maths should be the same.
The cases are different in the way that you describe, but the maths of the probability is the same in each case. If you have an unseen die under a cup, and a die that you are about to roll, then one is already determined and the other isn't, but you'd bet at the same odds for each one to come up a six.
Comment author:Maelin
09 June 2010 10:01:53AM
*
5 points
[-]
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don't have perfect knowledge of the universe, so when I ask, "Is your mum on the phone?" you don't have the guaranteed correct answer ready to go. You don't know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying "I have absolutely no idea in the slightest", you are able to say something more useful: "It's possible, but unlikely." Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes' Law.
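As a made-up numerical illustration of that last point (the likelihoods are invented for the example): suppose your prior that your mum is on the phone is 1/6, and you then hear a busy signal, which you judge 95% likely if she is on the phone and 5% likely otherwise. Bayes' Law turns the 1/6 into roughly 0.79:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) by Bayes' Law."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior: mum on the phone with probability 1/6; evidence: busy signal.
posterior = bayes_update(1 / 6, 0.95, 0.05)   # about 0.79
```

The same machinery applies whether the proposition is about the present (she is or isn't on the phone right now) or about a future die roll.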
Comment author:Jack
09 June 2010 09:51:58AM
1 point
[-]
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1/6? Or, conversely, if you make it P(I rolled a one on the fair die that is now beneath this cup) = 1/6?
Comment author:Jack
09 June 2010 07:38:54AM
1 point
[-]
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
Comment author:Morendil
09 June 2010 09:26:37AM
3 points
[-]
I've been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I'm studying at the moment. It would probably make sense to expand it to other books including non-math books - though the set of active books should remain small.
Two things have been holding me back - for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Comment author:Jack
09 June 2010 10:20:25AM
*
1 point
[-]
Well those are actually exactly the two books I had in mind (though I think we should probably just start with one of them).
the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off
Agreed. Two options
A new top level post for every chapter (or perhaps every two chapters, whatever division is convenient). This was a little annoying when it was one person covering every chapter in Dennett's Consciousness explained but if a decent number of people were participating the book club (and if each new post was put up by the facilitator, explaining hard to understand concepts) they'd probably justify themselves.
We start a dedicated wordpress or blogspot blog and give the facilitators posting powers.
I wouldn't at all mind posting to start discussion on some sections but I'm not the best person to be explaining the math if it gets confusing-- if that was part of your expectation of facilitation.
I was thinking a reading group for Jaynes would have a better chance of success than Pearl-- the issues are more general, the math looks easier, and the entire thing is online. But it sounds like you've looked at them more than I have; what are your thoughts? I guess what really matters is what people are interested in.
For those interested the Jaynes book can be found here and much of Pearl's book can be found here.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don't want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
Comment author:Morendil
09 June 2010 11:50:49AM
2 points
[-]
There's a risk that any amount of thinking about infrastructure could kill off what energy there is, and since there appears to be some energy at present, I would rather favor having the discussion about the book club in the book club thread. :)
IOW we can kick off the initiative locally and let it find a new venue if and when that becomes necessary. There also seems to be some sort of provisional consensus that it's not quite time yet to fragment the LW readership: the LW subreddit doesn't seem to have panned out.
It seems to me that Jaynes is definitely topical for LW, I wouldn't worry about discussions among people studying it becoming annoying to the rest of the community. There are many, many gems pertaining to rationality in each of the chapters I've read so far.
Comment author:Alexandros
08 June 2010 01:35:33PM
*
5 points
[-]
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all Baby Eaters, including the living babies and the ones being digested, it would end up in a place that adult Baby Eaters would not be happy with. If you expanded it to include all Baby Eaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the Baby Eater morality is objectively unstable when aggregated.
Comment author:red75
08 June 2010 02:28:45PM
*
-1 points
[-]
The CEV will be to maintain the existing order.
Why? There would have to be very strong arguments for the BEs to stop doing the Right Thing, and there's only one source of objections: the children. And their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: OK, OK. The CEV will be to change the BEs' morals and allow them not to eat children. So, the FAI will undergo controlled shutdown. Objections, please?
EDIT: Here are some further arguments.
Guidelines of FAI as of May 2004.
Defend humans, the future of humankind, and humane nature.
BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the future of BEkind, and BE's nature."
Encapsulate moral growth.
BEs never considered that child-eating is bad. And by their lights it is good to kill anyone who thinks otherwise.
There is no trend in their morals that can be encapsulated.
Humankind should not spend the rest of eternity desperately wishing that the programmers had done something differently.
If they stop being BEs, they will mourn their wrongdoings to the death.
Avoid creating a motive for modern-day humans to fight over the initial dynamic
Every single suggestion the FAI makes along the lines of "Let's suppose that you are non-BE" will cause it to be destroyed.
Help people.
Help BEs at every turn, except for the ceremony of BEing.
How will this take the FAI to the point where every conscious being must live?
Comment author:Morendil
08 June 2010 03:35:11PM
2 points
[-]
What would happen if CEV was applied to the Baby Eaters?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: "What rules would you want to prevail if you didn't know in advance who you would turn out to be?"
Where CEV as I understand it adds more information - assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be - the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even - conceivably - how many of you there are.
To this bunch of undifferentiated people you'd put the question, "All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands."
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kin has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it's hard to check intuitions.
Comment author:Alexandros
08 June 2010 10:20:14AM
*
1 point
[-]
Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview at the commonsenseatheism podcast here, which also covers his path to becoming a Christian atheist.
Comment author:cousin_it
08 June 2010 06:24:24AM
*
9 points
[-]
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
Is it okay to cheat on your spouse as long as (s)he never knows?
If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying or not being lied to might well be a terminal value; why not? The you that lies or doesn't lie is part of the world. A person may dislike being lied to, and value a world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by on the net making the outcome even worse for other reasons, it shouldn't be done (and some of your examples may qualify for that).
Comment author:cousin_it
08 June 2010 12:10:56PM
*
-2 points
[-]
I can't believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
I can't believe you took the exact cop-out I warned you against.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
restrict your attention to consequentialists whose terminal values have to be observable.
What does this mean? Consequentialist values are about the world, not about observations (but your words don't seem to fit to disagreement with this position, thus the 'what does this mean?'). Consequentialist notion of values allows a third party to act for your benefit, in which case you don't need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don't need to know about these options in order to benefit.
Comment author:prase
08 June 2010 10:04:35AM
*
4 points
[-]
A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying.
In my opinion, this is a lawyer's attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule "never lie" as a consequentialist "I assign an extremely high disutility to situations where I lie". In the same way you can restate consequentialist preferences as a deontologist rule: "in any case, do whatever maximises your utility". But in doing that, the point of the distinction between the two ethical systems is lost.
Comment author:taw
11 June 2010 12:39:29PM
2 points
[-]
It is a common failure of moral analysis (invented by deontologists, undoubtedly) to assume an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.
#1/#2/#3 - "never knows" fails far too often, so you need to include a very large chance of failure in your analysis.
#4 - it's pretty safe to make stuff like that up
#5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)
#6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up) - and you weigh the costs and benefits of different situations proportionally to their likelihood. Here they are in an unlikely situation that consequentialism doesn't weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.
Comment author:ciphergoth
11 June 2010 03:44:37PM
*
1 point
[-]
#4 - it's pretty safe to make stuff like that up
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
Comment author:AlephNeil
09 June 2010 01:22:15PM
3 points
[-]
Is it okay to cheat on your spouse as long as (s)he never knows?
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation 'similar' to yours. Then the spouses can 'put themselves in your place' and think "Gee, there's about a 10% chance that I'd now be cheating on myself. I wonder if this means my husband/wife is cheating on me?"
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn't correlate with the cheating, the net effect is to drive utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very "UDT-style" way of thinking -- but then UDT does remind me of Kant's categorical imperative, and of course Kant is the arch-deontologist.)
Comment author:cousin_it
09 June 2010 01:43:20PM
*
1 point
[-]
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner's Dilemma to avoid "driving net utility down". I'm pretty sure you made a mistake somewhere.
Comment author:thomblake
08 June 2010 09:15:08PM
3 points
[-]
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
It seems like it would be more aptly defined as "the belief that making the world a better place constitutes doing the right thing". Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don't care whether it does.
Comment author:Nisan
08 June 2010 04:50:52PM
1 point
[-]
It's okay to deceive people if they're not actually harmed and you're sure they'll never find out. In practice, it's often too risky.
1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child's welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let's assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
Comment author:cousin_it
08 June 2010 05:03:04PM
*
2 points
[-]
1-3: It seems you're using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It's more similar to the Prisoner's Dilemma, if you ask me.
Comment author:Nisan
08 June 2010 08:33:29PM
2 points
[-]
1-3: It's an alief, not a belief, because I know that lying to my spouse doesn't really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Comment author:RobinZ
08 June 2010 02:41:38PM
2 points
[-]
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations - like other spherical cows, this causes a lot of problematic answers, like two-boxing.
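One way to see how the answers fall out of assumptions (a)-(e) is to plug in illustrative numbers; everything below (small = 1, moderate = 5 eudaemonic units, a 50% transmission fraction) is my own invention, not part of the analysis above:

```python
def cheating_ev(p_caught, benefit=1.0, lie_cost=1.0,
                revelation_cost=5.0, transmit=0.5):
    """Toy expected eudaemonic change from cheating, summed over both partners.

    If undetected: the cheater nets benefit - lie_cost, a fraction of
    which leaks to the partner (assumption e). If detected: both bear
    the moderate revelation cost (assumption d).
    """
    undetected = (benefit - lie_cost) * (1 + transmit)
    detected = -revelation_cost * 2
    return (1 - p_caught) * undetected + p_caught * detected
```

With the small benefit and small lying cost cancelling, the expected value is zero at best and strictly negative for any nonzero chance of discovery, which matches the first answer above: cheating is risky and worth avoiding when eudaemonic reserves are thin.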
Comment author:cousin_it
08 June 2010 02:55:56PM
*
2 points
[-]
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's heart, not for some material benefit. So if she knew the husband didn't love her, she'd tell the truth. The fact that you automatically parsed the situation differently is... disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don't understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can't wait till other people reply to the questionnaire.
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers' home country is/isn't primarily taken with milk? I'm always tempted to imagine most of the scientists having some ulterior motive or prior belief they're looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: "we did this experiment hoping to show X, but instead, we found not X". Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code. 80 lines of Python. It makes raw text output; links and formatting are lost. It would be quite trivial to produce nice and spiffy HTML output.
EDIT2: I can do HTML output now. It is nice and spiffy, but it has some CSS bug: after the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version already checked out the txt version. We will soon find out which explanation is the correct one.
Comment author:JoshuaZ
08 June 2010 12:10:26AM
2 points
[-]
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don't know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren't actually quotes; 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)
Comment author:DanielVarga
08 June 2010 02:11:00PM
*
2 points
[-]
I agreed with you, I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A Rationality Quotes, Best of 2010 Edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the year and so on. It'd also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.
Comment author:MartinB
07 June 2010 07:20:04PM
7 points
[-]
Question: what's your experience with stuff that seems new-agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany in my case). I will try, and hopefully enjoy, that soon. Sadly those places are run by new-age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory empty space.
I've had great results from modest (2-3 hrs/wk) investments in hatha yoga, over and above what I get from standard Greco-Roman "calisthenics."
Besides the flexibility, breathing, and posture benefits, I find that the idea of 'chakras' is vaguely useful for focusing my conscious attention on involuntary muscle systems. I would be extremely surprised if chakras "cleaved reality at the joints" in any straightforward sense, but the idea of chakras helps me pay attention to my digestion, heart rate, bladder, etc. by making mentally uninteresting but nevertheless important bodily functions more interesting.
Comment author:khafra
08 June 2010 07:29:51PM
3 points
[-]
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word "chi" does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent's balance without physical contact.
Each of these skills fits more naturally under a different category, but if you want to learn them all, the most efficient way is to study a Chinese internal martial art or something similar.
Comment author:khafra
09 June 2010 03:13:43PM
2 points
[-]
There may be a correlation between studying martial arts and vulnerability to techniques which can be modeled well by "chi." But I have tried the striking sequences successfully on capoeiristas and catch wrestlers, and the light but effective pushes on my non-martially-trained brother after showing him Wu-style pushing hands for a minute or two.
Comment author:RobinZ
09 June 2010 05:28:53PM
2 points
[-]
That suggests an experiment. Anyone see any flaws in the following?
Write up instructions for two techniques - one which would work and one which would not work, according to your theory - in sufficient detail for someone physically adept but not instructed in Chinese internal martial arts (e.g. a dancer) to learn. Label each with a random letter (e.g. I for the correct one and K for the incorrect one).
Have one group learn each technique - have them videotape their actions and send them corrections by text, so that they don't get cues about whether you expect the methods to work.
Have another party ignorant of the technique perform tests to see how well each group does.
Comment author:Blueberry
09 June 2010 07:07:00PM
0 points
[-]
The problem is that a positive result would only show that a specific sequence of attacks worked well. It wouldn't show that "chi" or other unusual models were required to explain it; there could be perfectly normal explanations for why a series of attacks was effective.
Comment author:RobinZ
09 June 2010 07:29:23PM
1 point
[-]
That's why I suggested writing down both techniques which should work according to the model and techniques which should not work according to the model.
Comment author:khafra
09 June 2010 07:37:51PM
1 point
[-]
I like the idea of scientifically testing internal arts; and your idea is certainly more rigorous than TV series attempting to approach martial arts "scientifically" like Mind, Body, and Kickass Moves. Unfortunately, the only one of those I can think of which is both (1) explainable in words and pictures to a precise enough degree that "chi"-type theories could constrain expectations, and (2) has an unambiguous result when done correctly which varies qualitatively from an incorrect attempt is the knockout series of hits, which raises both ethical and practical concerns.
I would classify the other two as tacit knowledge--they require a little bit of instruction on the counterintuitive parts; then a lot of practice which I can't think of a good way to fake.
Note that I would be completely astonished if there weren't a perfectly normal explanation for any of these feats; but deriving methods for them from first principles of biomechanics and cognitive science would take a lot longer than studying with a good teacher who works with the "chi" model.
I used to go to a Tai Chi class (I stopped only because I decided I'd taken it as far as I was going to), and the instructor, who never talked about "chi" as anything more than a metaphor or a useful visualisation, said this about the internal arts:
In the old days (that would be pre-revolutionary China) you wouldn't practice just Tai Chi, or begin with Tai Chi. Tai Chi was the equivalent of postgraduate study in the martial arts. You would start out by learning two or three "hard", "external" styles. Then, having reached black belt in those, and having developed your power, speed, strength, and fighting spirit, you would study the internal arts, which would teach you the proper alignments and structures, the meaning of the various movements and forms. In the class there were two students who did Gojuryu karate, a 3rd dan and a 5th dan, and they both said that their karate had improved no end since taking up Tai Chi.
Which is not to say that Tai Chi isn't useful on its own, it is, but there is that wider context for getting the maximum use out of it.
Comment author:Blueberry
09 June 2010 07:04:52PM
4 points
[-]
I can take an unwary opponent's balance without physical contact.
This sounds magical at first reading, but is actually not that tricky. It's just psychology and balance. If you set up a pattern of predictable attacks, then feint in the right direction while your opponent is jumping at you off-balance, you can surprise him enough to make him fall as he attempts to ward off your feint.
I've done yoga every week for the last month or two. It's pleasant. Other than paying attention to how I'm holding my body vs. the instruction, I mostly stop thinking for an hour (as we're encouraged to do), which is nice.
I can't say I notice any significant lasting effects yet. I'm slightly more flexible.
Question: what's your experience with stuff that seems new-agey at first look, like yoga, meditation and so on? Anything worth trying?
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don't believe any of that, do you? Neither do I, except for the first eight words and the last six. I've been doing these exercises since the beginning of 2009, after being turned on to them by Steven Barnes' blog, and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it's these exercises specifically or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven't taken the trouble to determine by varying them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I'd be interested to hear from anyone here who meditates and gets more from it than I do.
Comment author:gwern
07 June 2010 09:49:58PM
1 point
[-]
Hard to say - even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There's no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there's some for float tanks. In fact, I'm hard pressed to think of any New Agey stuff where there isn't enough background to judge it on its own merits.
Comment author:MartinB
07 June 2010 08:11:16PM
*
1 point
[-]
To have the experience.
I don't mean it as a treatment, but as something that would be exciting, new, and worth trying just for the sake of it.
edit/add: the deleted comment above asked why I would bother to do something like floating
Comment author:Clippy
07 June 2010 03:59:56PM
8 points
[-]
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Comment author:jimrandomh
07 June 2010 04:32:53PM
1 point
[-]
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don't offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that's at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don't give consent to be eaten.
Comment author:Clippy
07 June 2010 05:40:20PM
5 points
[-]
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
Comment author:jimrandomh
07 June 2010 06:07:45PM
1 point
[-]
It is not the adults' preference that matters, but the adults' best model of the children's preferences. In this case there is an obvious reason for those preferences to differ - namely, the adult knows that he won't be one of those eaten.
In extrapolating a child's preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can't extrapolate from a child whose fate is undecided to an adult that believes it won't be eaten; that change alters its preferences.
Comment author:Clippy
07 June 2010 06:14:31PM
1 point
[-]
It is not the adults' preference that matters, but the adults' best model of the children's preferences.
Do you believe that all children's preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
Comment author:jimrandomh
07 June 2010 06:25:32PM
-1 points
[-]
I would use a process like coherent extrapolated volition to decide which preferences to count - that is, a preference counts if its holder would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
Comment author:JoshuaZ
07 June 2010 04:20:11PM
0 points
[-]
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We've even evolved to convince ourselves that we actually care about morality and not self-interest. That's likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Comment author:Blueberry
07 June 2010 04:49:50PM
3 points
[-]
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
Comment author:Clippy
07 June 2010 06:03:43PM
*
1 point
[-]
Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
Comment author:Blueberry
07 June 2010 08:05:09PM
2 points
[-]
I understand what you mean now.
Ok, so first of all, there's a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn't do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don't want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don't want the kids to be eaten, and we don't want the adults to eat. We don't want to balance any of these interests, because they go against our values. Just like you wouldn't balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is "Well, of course you don't want the punishments. That's the point. So cooperate, or you'll get punished. It's not fair to exempt yourself from the rules." And my reaction to position (2) is "We don't want any baby-eating, so we'll save you from being eaten, but we won't let you eat any other babies. It's not fair to exempt yourself from the rules." This seems consistent to me.
Comment author:Clippy
07 June 2010 09:03:47PM
*
3 points
[-]
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn't the baby-eaters' universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to "free ride" off the sacrifices that the system requires of everyone?
Comment author:JStewart
08 June 2010 03:41:19AM
3 points
[-]
Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
Comment author:Clippy
08 June 2010 07:05:31PM
1 point
[-]
No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Comment author:JenniferRM
08 June 2010 12:07:24AM
4 points
[-]
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with "the abstract idea of punishment" into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of "eating children" are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless "intelligent" and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been "illegitimately modified" or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary "credentials" for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren't currently willing to provide such information, are there preconditions you could propose before you would do so?
Comment author:JenniferRM
10 June 2010 12:14:46AM
2 points
[-]
Conversations with you are difficult because I don't know how much I can assume that you'll have (or pretend to have) a human-like motivational psychology... and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I'm not sure about really fundamental aspects of your "inner life" like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about "statistical regularities of observed english" than "compiling english into a data structure that supports generic inference". By the end of such posts I'm generally asking a lot of questions as I grope for common ground, but you generally don't answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we're both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
SIAI, Yudkowsky, Friendly AI, CEV, and Morality
This post entitled A Dangerous "Friend" Indeed (http://becominggaia.wordpress.com/2010/06/10/a-dangerous-friend-indeed/) has it all.
Am I alone in my desire to upload as fast as possible and head for the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let God decide who's right...
Not sure where I stand actually, but this seems relevant:
"If God did not exist, it would be necessary to invent him" - Voltaire
I suppose it should be added that one should do one's best to make sure the god that's created is more Friendly than Not.
Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that an FAI after CEV convergence will have adamantine morals by design (or it will look like it does, if the FAI is unconscious). And no one will be able to talk the FAI out of this, or no one will want to.
It seems we don't have much choice, however. Bottoms up, to the Friendly God.
If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.
Anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?
Isn't God fake?
For those of you who have been following my campaign against the "It's impossible to explain this, so don't expect me to!" defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan's blog.
In case he deletes the entire exchange thus far (which he's been known to do when I post), here's what's transpired (paragraphing truncated):
Me: That's not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function ("rules") for his actions. Maybe he doesn't really understand it himself?
Gene: Well, if I had a silly mechanical view of human nature and thought peoples' actions came from a "generating function", I would think this was a problem.
Me: Which physical law do humans violate? What is the experimental evidence for this violation? Btw, the monk problem isn't hard. Watch this: "Hello, students. Here is why we don't touch women. Here is what we value. Here is where it falls in our value system." There you go. It didn't require a lifetime of learning to convey the reasoning the senior monk used to the junior, now, did it?
ETA: Previous remark by me was rejected by Gene for posting. He instead posted this:
Gene: Silas, you only got through one post without becoming an unbearable douche [!] this time. You had seemed to be improving.
I just tried to post this:
Me: Don't worry, I made sure the exchange was preserved so that other people can view for themselves what you consider "being an unbearable douche", or what others might call, "serious challenges to your position".
Me: If you ever want to specify how it is that human beings' actions don't come from a generating function, thereby violating physical law, I'd love to have that chat and help you flesh out the idea enough to get yourself a Nobel. However, what I think you really meant to say was that the generating function is so difficult to learn directly that lifelong practice is easy by comparison (if you were to argue the best defense of your position, that is).
Me: Can you at least agree you picked a bad example of knowledge that necessarily comes from lifelong practice? Would that be too much to ask?
Maybe this has been discussed before -- if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There's great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people like ourselves. (For instance, ideas like reciprocity only make sense if the things we can do to other people are similar to the things they can do to us.)
The decision function of a lone, far more powerful AI would not have this quality. So it would be very different from all human decision functions or principles. Maybe this difference should cause us to call it immoral.
I'm not necessarily arguing for this position as saying we need to address it. "Suicidal AI" is to the problem of constructing FAI as anarchism is to political theory; if you want to build something (an FAI, a good government) then, on the philosophical level, you have to at least take a stab at countering the argument that perhaps it is impossible to build it.
I'm working under the assumption that we don't really know at this point what "Friendly" means, otherwise there wouldn't be a problem to solve. We don't yet know what we want the AI to do.
What we do know about morality is that human beings practice it. So all our moral laws and intuitions are designed, in particular, for small, mortal creatures, living among other small, mortal creatures.
Egalitarianism, for example, only makes sense if "all men are created equal" is more or less a statement of fact. What should an egalitarian human make of a powerful AI? Is it a tyrant? Well, no, a tyrant is a human who behaves as if he's not equal to other humans; the AI simply isn't equal. Well, then, is the AI a good citizen? No, not really, because citizens treat each other on an equal footing...
The trouble here, I think, is that all our notions of goodness are really "what is good for a human to do." Perhaps you could extend them to "what is good for a Klingon to do" -- but a lot of moral opinions are specifically about how to treat other people who are roughly equivalent to yourself. "Do unto others as you would have them do unto you." The kind of rules you'd set for an AI would be fundamentally different from our rules for ourselves and each other.
It would be as if a human had a special, obsessive concern and care for an ant farm. You can protect the ants from dying. But there are lots of things you can't do for the ants: be an ant's friend, respect an ant, keep up your end of a bargain with an ant, treat an ant as a brother...
I had a friend once who said, "If God existed, I would be his enemy." Couldn't someone have the same sentiment about an AI?
(As always, I may very well be wrong on the Internet.)
You say, human values are made for agents of equal power; an AI would not be equal; so maybe the friendly thing to do is for it to delete itself. My question was, is it allowed to do just one or two positive things before it does this? I can also ask: if overwhelming power is the problem, can't it just reduce itself to human scale? And when you think about all the things that go wrong in the world every day, then it is obvious that there is plenty for a friendly superhuman agency to do. So the whole idea that the best thing it could do is delete itself or hobble itself looks extremely dubious. If your point was that we cannot hope to figure out what friendliness should actually be, and so we just shouldn't make superhuman agents, that would make more sense.
The comparison to government makes sense in that the power of a mature AI is imagined to be more like that of a state than that of a human individual. It is likely that once an AI had arrived at a stable conception of purpose, it would produce many, many other agents, of varying capability and lifespan, for the implementation of that purpose in the world. There might still be a central super-AI, or its progeny might operate in a completely distributed fashion. But everything would still have been determined by the initial purpose. If it was a purpose that cared nothing for life as we know it, then these derived agencies might just pave the earth and build a new machine ecology. If it was a purpose that placed a value on humans being there and living a certain sort of life, then some of them would spread out among us and interact with us accordingly. You could think of it in cultural terms: the AI sphere would have a culture, a value system, governing its interactions with us. Because of the radical contingency of programmed values, that culture might leave us alone, it might prod our affairs into taking a different shape, or it might act to swiftly and decisively transform human nature. All of these outcomes would appear to be possibilities.
It seems unlikely that an FAI would commit suicide if humans need to be protected from UAI, or if there are other threats that only an FAI could handle.
Do you ever have a day when you log on and it seems like everyone is "wrong on the Internet"? (For values of "everyone" equal to 3, on this occasion.) Robin Hanson and Katja Grace both have posts (on teenage angst, on population) where something just seems off, elusively wrong; and now SarahC suggests that "the only friendly AI may be one that commits suicide". Something about this conjunction of opinions seems obscurely portentous to me. Maybe it's just a know-thyself moment; there's some nascent opinion of my own that's going to crystallize in response.
Now that my special moment of sharing is out of the way... Sarah, is the friendly AI allowed to do just one act of good before it kills itself? Make a child smile, take a few pretty photos from orbit, save someone from dying, stop a war, invent cures for a few hundred diseases? I assume there is some integrity of internal logic behind this thought of yours, but it seems to be overlooking so much about reality that there has to be a significant cognitive disconnect at work here.
I've recently begun downvoting comments that are at -2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach -2 but fail to be pushed over to -3, which I'm attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don't want to be 'the one to push the button'. This is an extension of my RL policy of taking 'the last' of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, I expect others will vote it back up.
Edit: It's likely that most of the negative response to this comment centers around the phrase "regardless of my feelings about them." I now consider this to be too strong a statement with regards to my implemented actions. I do read the comment to make sure I don't consider it any good, and doubt I would perversely vote something down even if I wanted to see more of it.
I wish you wouldn't do that, and stuck instead with the generally approved norm of downvoting to mean "I'd prefer to see fewer comments like this" and upvoting "I'd like to see more like this".
You're deliberately participating in information cascades, and thereby undermining the filtering process. As an antidote, I recommend using the anti-kibitzer script (you can do that through your Preferences page).
I disagree that that's the formula used for comments that exist within the range -2 to 2. Within that range, from what I've observed of voting patterns, it seems far more likely that the equation is related to what value the comment "should be at." If many people used anti-kibitzing, I doubt this would remain a problem.
I believe your hypothesis and decision are possibly correct, but if they are, you should expect your downvotes to often be corrected upwards again. If this doesn't happen, then you are wrong and shouldn't apply this heuristic.
Morendil doesn't say it's what actually happens, he merely says it should happen this way, and that you in particular should behave this way.
I thought of doing this after reading the article Composting Fruitless Debates and making a voted-up suggestion to downvote below threshold.
I'm using it as an excuse to overcome my general laziness with regards to voting, which has the typical pattern of one vote (up or down) per hundreds of comments read.
Edit: And due to remembering Eliezer's comments about moderation.
How to write a "Malcolm Gladwell Bestseller" (an MGB)
http://blog.jgc.org/2010/06/how-to-write-malcolm-gladwell.html
How can I understand quantum physics? All explanations I've seen are either:
I don't think the subject is inherently difficult. For example, quantum computing and quantum cryptography can be explained to anyone with a basic clue and basic math skills. (example)
On the other hand I haven't seen any quantum physics explanation that did even as little as reasonably explaining why hbar/2 is the correct limit of uncertainty (as opposed to some other constant), and why it even has the units it has (that is why it applies to these pairs of measurements, but not to some other pairs); or what are quark colors (are they discrete; arbitrary 3 orthogonal vectors on unit sphere; or what? can you compare them between quarks in different protons?); spins (it's obviously not about actual spinning, so how does it really work? especially with movement being relative); how electro-weak unification works (these explanations are all handwaved) etc.
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
I've become increasingly disillusioned with people's capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square root of windspeed. If the wind is only blowing at half speed, you still get something like 70% output. You won't see people saying this directly, but the general attitude is that you only need backup for the occasional calm day when the wind doesn't blow at all.
In fact, output goes as the cube of windspeed. The energy in the windstream is one half m v squared, where m, the mass passing your turbine, is itself proportional to the windspeed. If the wind is at half strength, you only get 1/8 output.
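The arithmetic is easy to check directly. A minimal sketch comparing the two models (the "square-root" function is just my paraphrase of the implicit folk assumption described above):

```python
def output_cube(v_ratio):
    """Actual physics: windstream energy is (1/2) * m * v^2, and the mass
    flux m through the rotor is itself proportional to v, so relative
    output scales as the cube of the windspeed ratio."""
    return v_ratio ** 3

def output_sqrt(v_ratio):
    """The implicit folk model: output scales like sqrt(windspeed)."""
    return v_ratio ** 0.5

half = 0.5
print(output_sqrt(half))  # ~0.707: "half-speed wind still gives ~70% output"
print(output_cube(half))  # 0.125: in reality you get only 1/8 output
```

At half windspeed the two models disagree by more than a factor of five, which is why the backup-capacity question is much harder than the folk intuition suggests.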
Well, that is physics. Of course people suck at physics. The trouble is, the more I look at people's capacity for abstract thought, the more problems I see. When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
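In odds form, a likelihood ratio of one leaves any prior untouched; a minimal sketch:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

prior = 0.25 / 0.75          # prior odds for X (illustrative)
posterior = update_odds(prior, 1.0)  # "some people believe it" as zero-strength evidence
print(posterior == prior)    # True -- the posterior equals the prior
```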
Revised: I do not think that link provides evidence for the quoted sentence. Nor I do see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
I've got a better link. David Henderson catches a professor of economics getting costs and benefits confused in a published book. Henderson's review is on page 54 of Regulation; my viewer puts it on the ninth page of the PDF that Henderson links to.
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, or that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
Emile:
These claims would be beyond the border of lunacy for any person, but still, I'm sure you'll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard's point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism:
http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they alternatively claim that relativity allows them to use whatever reference frame they chose and at other points claim that the evidence only makes sense for geocentrism.
Oh. Well, that's stupid then.
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he's young enough to make me think that he may have been forced into retirement.
Here is another one:
http://en.wikipedia.org/wiki/Courtney_Brown_%28researcher%29
If no people believe Y -- literally no people -- then either the topic is very little examined by human beings, or it's very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.
In the first case, only X believers exist because only X believers have yet considered the issue. That's minimal evidence in favor of X. In the second case, lots of people have heard of the issue; if there were a decent case against X, somebody would have thought of it. The fact that none of them -- not a minority, but none -- argued against X is strong evidence that X is true.
I don't think belief has a consistent evidentiary strength, since it depends on the testifier's credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I'm carrying. I don't see any relation that could be described as baseline, so the only answer is: context.
Potential top-level article, have it mostly written, let me know what you think:
Title: The hard problem of tree vibrations [tentative]
Follow-up to: this comment (Thanks Adelene Dawner!)
Related to: Disputing Definitions, Belief in the Implied Invisible
Summary: Even if you agree that trees normally make vibrations when they fall, you're still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the "standard" debate over a famous philosophical dilemma: "If a tree falls in a forest and no one hears it, does it make a sound?" (Call this "Question Y.") Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between "sound as vibration" and "sound as auditory perception in one's mind", and that the standard (naive) debate relies on two parties assuming different definitions, leading to a pointless argument. Obviously, it makes a sound in the first sense but not the second, right?
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically, whether the vibrations actually occur when no one is around -- and no one will ever be around -- to detect them. (Call this "Question S.")
Now, if you're a regular on this site, you will find that question easy to answer. But before going into my exposition of the answer, I want to point out some errors that Question S does not make.
For one thing, it does not equivocate between two meanings of sound -- there, sound is taken to mean only one thing: the vibrations.
Second, it does not reduce to a simple question about anticipation of experience. In Question Y, the disputants can run through all observations they anticipate, and find them to be the same. However, if you look at the same cases in Question S, you don't resolve the debate so easily: both parties agree that by putting a tape-recorder by the tree, you will detect vibrations from the tree falling, even if people aren't around. But Question S instead specifically asks about what goes on when these kinds of sensors are not around, rendering such tests unhelpful for resolving such a disagreement.
So how do you go about resolving Question S? Yudkowsky gave a model for how to do this in Belief in the Implied Invisible, and I will do something similar here.
Complexity of the hypothesis
First, we observe that, in all cases where we can make a direct measurement, trees make vibrations when they fall. And we're tasked with finding out whether, specifically in those cases where a human (or appropriate organism with vibration sensitivity in its cognition) will never make a measurement of the vibrations, the vibrations simply don't happen. That is, when we're not looking -- and never intend to look -- trees stop the "act" and don't vibrate.
The complexity this adds to the laws of physics is astounding and may be hard to appreciate at first. This belief would require us to accept that nature has some way of knowing which things will eventually reach a cognitive system in such a way that it informs it that vibrations have happened. It must selectively modify material properties in precisely defined scenarios. It must have a precise definition of what counts as a tree.
Now, if this actually happens to be how the world works, well, then all the worse for our current models! However, each bit of complexity you add to a hypothesis reduces its probability and so must be justified by observations with a corresponding likelihood ratio -- that is, the ratio of the probability of the observation happening if this alternate hypothesis is true, compared to if it were false. By specifying the vibrations' immunity to observation, the log of this ratio is zero, meaning observations are stipulated to be uninformative, and unable to justify this additional supposition in the hypothesis.
[1] You might wonder how someone my age in '89-'91 would come up with terms like "human-entangled sensor", and you're right: I didn't use that term. Still, I considered the use of a tape recorder that someone will check to be a "someone around to hear it", for purposes of this dilemma. Least Convenient Possible World and all...
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
I'm not sure what effect that this has upon your argument, but it's something that I think that you're missing.
I'm familiar with this: entanglement between the environment and the quantum system affects the outcome, but nature doesn't have a special law that distinguishes human entanglement from non-human entanglement (as far as we know, given Occam's Razor, etc.), which the alternate hypothesis would require.
The error that early quantum scientists made was in failing to recognize that it was the entanglement with their measuring devices that affected the outcome, not their immaterial "conscious knowledge". As EY wrote somewhere, they asked,
"The outcome changes when I know something about system -- what difference should that make?"
when they should have asked,
"The outcome changes when I establish more mutual information with the system -- what difference should that make?"
In any case, detection of vibration does not require sensitivity to quantum-specific effects.
Not really. This is only the case for certain interpretations of what is going on, such as certain forms of the Copenhagen interpretation. Even then, "observation" in this context doesn't really mean observe in the colloquial sense, but something closer to interact with another particle under a certain class of conditions. The notion you seem to be conflating this with is the idea that consciousness causes collapse; not many physicists take that idea at all seriously. In most versions of the Many-Worlds interpretation, one doesn't need to say anything about observations triggering anything (or at least can talk about everything without talking about observations).
Disclaimer: My knowledge of QM is very poor. If someone here who knows more spots anything wrong above please correct me.
Me too! It was actually explained that way to me by my parents as a kid, in fact. I wonder if there are two subtly different versions floating around or EY just interpreted it uncharitably.
I think that if this post is left as it is, it would be too trivial for a top-level post. You could reframe it as a beginners' guide to Occam's razor, or make it more interesting by going deeper into some of the issues (if you can think of anything more to say on the topic of differentiating between hypotheses that make the same predictions, that might be interesting, although I think you might have said all there is to say).
It could also be framed as an issue of making your beliefs pay rent, similar to the dragon in the garage example - or perhaps as an example of how reality is entangled with itself to such a degree that some questions that seem to carve reality at the joints don't really do so.
(If falling trees don't make vibrations when there's no human-entangled sensor, how do you differentiate a human-entangled sensor from a non-human-entangled sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf litter that sufficiently-sensitive human-entangled sensors can detect, does leaf litter then count as a human-entangled sensor? How about if certain plants or animals have observably evolved to handle falling-tree vibrations in a certain way, and we can detect that. Then such plants or animals (or their absence, if we're able to form a strong enough theory of evolution to notice the absence of such reactions where we would expect them) could count as human-entangled sensors well before humans even existed. In that case, is there anything that isn't a human-entangled sensor?)
Good points in the parenthetical -- if I make it into a top-level article, I'll be sure to include a more thorough discussion of what concept is being carved with the hypothesis that there are no tree vibrations.
I believe this is the conversation you're responding to.
(upvoted)
New evidence in the Amanda Knox case
This is relevant to LW because of a previous discussion.
That story would be consistent with Guédé's, modulo the usual eyewitness confusion.
And modulo all the forensic evidence.
Obviously this is breaking news and it's too soon to draw a conclusion, but at first blush this sounds like just another attention seeker, like those who always pop up in these high profile cases. If he really can produce a knife, and it matches the wounds, then maybe I'll reconsider, but at the moment my BS detector is pegged.
Of course, it's still orders of magnitude more likely than Knox and Sollecito being guilty.
I wasn't following the case even when komponisto posted his analyses, so I really can't say.
Seconding kodos96. As this would exonerate not only Knox and Sollecito but Guede as well, it has to be treated with considerable skepticism, to say the least.
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread.
Still, the Aviello story is enough of a surprise to marginally lower my probability of Guede's guilt. My current probabilities of guilt are:
Knox: < 0.1 % (i.e. not a chance)
Sollecito: < 0.1 % (likewise)
Guede: 95-99% (perhaps just low enough to insist on a debunking of the Aviello testimony before convicting)
It's probably about time I officially announced that my revision of my initial estimates for Knox and Sollecito was a mistake, an example of the sin of underconfidence.
I of course remain willing to participate in a debate with Rolf Nelson on this subject.
Finally, I'd like to note that the last couple of months have seen the creation of a wonderful new site devoted to the case, Injustice in Perugia, which anyone interested should definitely check out. Had it been around in December, I doubt that I could have made my survey seem like a fair fight between the two sides.
I hadn't heard about this - I just read your link though, and maybe I'm missing something, but I don't see how it lowers the probability of Guede's guilt. He (supposedly) confessed to having been at the crimescene, and that Knox and Sollecito weren't there. How does that, if true, exonerate Guede?
You omitted a crucial paragraph break. :-)
The Aviello testimony would exonerate Guede (and hence is unlikely to be true); the Alessi testimony is essentially consistent with everything else we know, and isn't particularly surprising at all.
I've edited the comment to clarify.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won't offer much to regular/long-time LW readers, but it's a great resource to give to friends/family who don't have the time/energy demanded by LW.
It is a good blog, and it has a slightly wider topic spread than LW, so even if you're familiar with most of the standard failures of judgment there'll be a few new things worth reading. (I found the "introducing fines can actually increase a behavior" post particularly good, as I wasn't aware of that effect.)
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as "safety gloves" for dangerous memes?
I'm asking because I'm currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the "justifications" are incoherent, Orwellian doublespeak, and/or tendentiously ideological. I really do want to memorize (nearly) all of these justifications, so that I can be sure to pass the exam and continue my career as a rationalist lawyer, but I don't want the pattern of thought used by the justifications to become a part of my pattern of thought.
I worry about this as well when I'm reading long arguments or long works of fiction presenting ideas I disagree with. My tactic is to stop occasionally and go through a mental dialog simulating how I would respond to the author in person. This serves a double purpose, as hopefully I'll have better cached arguments in the event I ever need them.
Of course, this is a dangerous tactic as well, because you may be shutting off critical reasoning applied to your preexisting beliefs. I only apply this tactic when I'm very confident the author is wrong and is using fallacious arguments. Even then I make sure to spend some amount of time playing devil's advocate.
I would not worry overmuch about the long-term negative effects of your studying for the bar: with the possible exception of the "overly sincere" types who fall very hard for cults and other forms of indoctrination, people have a lot of antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you can do things, like read works of social science that carve reality at the joints, to speed up the rate at which your continued entanglement with reality will cancel out any falsehoods you have to cram for now. Specifically, there are works about the law that do carve reality at the joints -- Nick Szabo's online writings IMO fall in that category. Nick has a law degree, by the way, and there is certainly nothing wrong with his ability to perceive reality correctly.
ADDED: The things that are really damaging to a person's rationality, IMHO, are natural human motivations. For example: suppose that when you start practicing, you decide to do a lot of trials, and you learn to derive pleasure -- a real high -- from the combative and adversarial part of that, so that the high you get from winning with a slick and misleading angle trumps the high you get from satisfying your curiosity and from refining and finding errors in your model of reality. I would worry about that a lot more than about your throwing yourself fully into winning on this exam, because IMHO the things we derive no pleasure from, but do to achieve some end we care about (like advancing in our career by getting a credential), have a lot less influence on who we turn out to be than the things we do because we find them intrinsically rewarding.
One more thing: we should not all make our living as computer programmers. That would make the community less robust than it otherwise would be :)
Thank you! This is really helpful, and I look forward to reading Szabo in August.
I found an interesting paper on Arxiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I'm not entirely sure the mechanism couldn't also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don't understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they've discovered are. I'm hoping one of you does.
It won't work, as is clearly explained here.
To put this into my own words "The more information you extract from the future, the less you are able to control the future from the past. And hence, the less understanding you can have about what those bits of future-generated information are actually going to mean."
I wrote that before actually looking at the paper you linked. I don't understand much QM either, but now that I have looked it seems to me that figure 2 of the paper backs me up on my interpretation of Harry's experiment.
Even if it's written by Eliezer, that's still generalizing from fictional evidence. We don't know what the laws of physics are supposed to be there.
Well. You probably can't use time-travel to get infinite computing power. But that's not to say you can't get strictly finite power out of it; in Harry's case, his experiment would probably have worked just fine if he'd been the sort of person who'd refuse to write "DO NOT MESS WITH TIME".
Playing chicken with the universe, huh? As long as scaring Harry is easier than solving his homework problem, I'd expect the universe to do the former :-) Then again, you could make a robot use the Time-Turner...
While searching for literature on "intuition", I came upon a book chapter that gives "the state of the art in moral psychology from a social-psychological perspective". This is the best summary I've seen of how morality actually works in human beings.
The authors gives out the chapter for free by email request, but to avoid that trivial inconvenience, I've put up a mirror of it.
ETA: Here's the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social Psychology, 5th Edition. Hoboken, NJ: Wiley. Pp. 797-832.
You're awesome.
I've previously been impressed by how social psychologists reason, especially about identity. Schemata theory is also a decent language for talking about cognitive algorithms from a less cognitive sciencey perspective. I look forward to reading this chapter. Thanks for mirroring, I wouldn't have bothered otherwise.
An interesting article criticizing speculation about social trends (specifically teen sex) in the absence of statistical evidence.
Beautiful. Matthew Yglesias, +1 point.
It is entirely possible that some social groups are experiencing the kind of changes that Flanagan describes, but as Yglesias says, she apparently is unaware that there is such a thing as scientific evidence on the question.
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
From that Wikipedia article:
Apologizing for ... being German? That's really bizarre.
Not really. Most cultures go funny in the head around the Holocaust. It is, for some reason, considered imperative that 10th graders in California spend more time being made to feel guilty about the Holocaust than learning about the actual politics of the Weimar Republic.
Cultures can also be very weird about how they treat schoolchildren. The kids weren't responsible for any part of the Holocaust, and they're theoretically apologizing to someone who can't hear it.
I can see some point in all this if you believe that Germans are especially apt to genocide (I have no strong opinion about this) and need to keep being reminded not to do it. Still, if this sort of apology is of any use, I'd take it more seriously if it were done spontaneously by individuals.
The number of heart attacks has fallen since England imposed a smoking ban
http://www.economist.com/node/16333351?story_id=16333351&fsrc=scn/tw/te/rss/pe
I think I found the study they're talking about thanks to this article. I might take a look at it - if the methodology is literally just 'smoking was banned, then the heart attack rate dropped', that sucks.
(Edit to link to the full study and not the abstract.)
Just skimmed it. The methodology is better than that. They use a regression to adjust for the pre-existing downward trend in the heart attack hospital admission rate; they represent it as a linear trend, and that looks fair to me based on eyeballing the data in figures 1 and 2. They also adjust for week-to-week variation and temperature, and the study says its results are 'more modest' than others', and fit the predictions of someone else's mathematical model, which are fair sanity checks.
I still don't know how robust the study is - there might be some confounder they've overlooked that I don't know enough about smoking to think of - but it's at least not as bad as I expected. The authors say they want to do future work with a better data set that has data on whether patients are active smokers, to separate the effect of secondhand smoke from active smoking. Sounds interesting.
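The trend-adjustment idea is simple to sketch with synthetic data (none of these numbers come from the study; the step drop and noise level are invented for illustration):

```python
import numpy as np

# Fit a linear downward trend to pre-ban weekly admissions, then ask whether
# post-ban admissions fall below what the trend alone predicts.
rng = np.random.default_rng(0)
weeks = np.arange(104)
ban_week = 52
true_trend = 500 - 0.5 * weeks                  # pre-existing decline
admissions = true_trend + rng.normal(0, 5, 104)
admissions[ban_week:] -= 10                     # invented post-ban step drop

pre = weeks < ban_week
slope, intercept = np.polyfit(weeks[pre], admissions[pre], 1)
predicted = intercept + slope * weeks           # extrapolate the pre-ban trend
post_gap = (admissions - predicted)[~pre].mean()  # effect beyond the trend
print(post_gap < 0)  # the ban-attributable drop shows up as a negative gap
```

Without the trend adjustment, a naive before/after comparison would attribute the pre-existing decline to the ban as well, overstating the effect.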
In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday's NY Times; it isn't very good)
http://news.ycombinator.com/item?id=1426386
I agree that this article isn't very good. It commits the standard error of conflating a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil's fairly dubious ideas about nutrition and health. The article also uses Andrew Orlowski as a serious critic of the Singularity, making unsubstantiated claims about how the Singularity will only help the rich. Given that Orlowski's entire approach is to criticize anything remotely new or weird-seeming, I'm disappointed that the NYT would use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all.
I'm starting to think SIAI might have to jettison the "singularity" terminology (for the intelligence explosion thesis) if it's going to stand on its own. It's a cool word, and it would be a shame to lose it, but it's become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. "If you are feeling brave, you can approach a stranger in the street and speak your message!" Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn't do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
I'm not sure that your criticism completely holds water. Simply put, Friendly AI is a worry that has convinced only some Singularitarians. One might not be deeply concerned about it (possible example reasons: 1) you expect uploading to come well before general AI; 2) you think the probable technical path to AI will force many more stages of AI of much lower intelligence, which will be likely to give us good data for solving the problem).
I agree that this Facebook group does look very much like something one would expect out of a missionizing religion. This section in particular looked like a caricature:
The certainty for 2045 is the most glaring aspect of this aside from the pseudo-missionary aspect. Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is listed as an administrator.
But one should remember that reversed stupidity is not intelligence. Moreover, there's a reason that missionaries sound like this: they have very high confidence in their correctness. If one had a similarly high confidence in the probability of a Singularity event, and thought that event was more likely to occur safely if more people were aware of it, and more likely to occur soon if more people were aware of it, and bought into something like the galactic colonization argument, and believed that sending messages like this has a high chance of making people aware and taking you seriously, then this is a reasonable course of action. Now, that's a lot of premises, some of which have reasonable likelihoods and others of which have very low ones. Obviously there's a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder whether there's any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (though if that were the case, I'd think they'd be more likely to hit an actually useful method of reproduction). And in fairness, they may just be using a general model of how one goes about raising awareness for a cause. For some causes, simple, frequent appeals to emotion are likely an effective method (for example, to make people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So the primary mistake is just using the wrong model of how to communicate with people.
Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if less comprehensive
Part of what I'm assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn't seem likely to me that a bio-tech disaster could wipe out the human race, it could cause huge damage-- I'm imagining diseases aimed at monoculture crops, or plagues as the result of terrorism or incompetent experiments.
My other assumptions are that FAI research is dependent on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to be highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I'm not sure where you'd start to prevent biotech disasters.
That's one hell of a "relatively" you've got there!
The Science of Gaydar: http://nymag.com/print/?/news/features/33520/
Less Wrong Rationality Quotes since April 2009, sorted by points.
This version copies the visual style and preserves the formatting of the original comments.
Here is the source code.
I already wrote a top-level comment about the original raw text version of this, but my access logs suggested that EDITs of older comments only reach a very few people. See that comment for a bit more detail.
This is great, even more so as you made it open source. I added it to References & Resources for LessWrong.
You should make a short top-level post about this so more people see this
I'd vote you up again for handing out your source code as well as the quote list, but I can't, so an encouraging reply will have to do...
How To Destroy A Black Hole
http://www.technologyreview.com/blog/arxiv/25316/
Heuristics and biases in charity
http://www.sas.upenn.edu/~baron/papers/charity.pdf (I considered making this link as a top-level post.)
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren't there a lot of intelligent species?
A somewhat accepted partial answer is that huge brains are ridiculously expensive -- you need a lot of high-energy-density food (= fire), a lot of DHA (= fish), etc. The chimp diet simply couldn't support brains like ours (see also the aquatic ape hypothesis, etc.), nor could chimps spend as much time as us engaging in politics, as they were too busy just getting food.
Perhaps chimp brains are as big as they could possibly be given their dietary constraints.
That's conceivable, and might also explain why wolves, crows, elephants, and other highly social animals aren't as smart as people.
Also, I think the original bit in Methods of Rationality overestimates how easy it is for new ideas to spread. As came up recently here, even if tacit knowledge can be explained, it usually isn't.
This means that if you figure out a better way to chip flint, you might not be able to explain it in words, and even if you can, you might choose to keep it as a family or tribal secret. Inventions could give their inventors an advantage for quite a long time.
Five-second guess: Human-level Machiavellian intelligence needs language facilities to co-evolve with; grunts and body language don't allow nearly as convoluted schemes. Evolving some precursor form of human-style language is the improbable part that other species haven't managed to pull off.
Saw this over on Bruce Schneier's blog, it seemed worth reposting here. Wharton’s “Quake” Simulation Game Shows Why Humans Do Such A Poor Job Planning For & Learning From Catastrophes (link is to summary, not original article, as original article is a bit redundant). Not so sure how appropriate the "learning from" part of the title is, as they don't seem to mention people playing the game more than once, but still quite interesting.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher level values that we pay lip service to. Group values are the ones we believe are our ‘real’ values, the ones that form the backbone of our ethics, the ones we signal to others at great expense. But actually having these values is tricky from an evolutionary standpoint – strategically, you’re much better off being selfish than generous, being two-faced than loyal, and furthering your own gains at the expense of everyone else’s. So humans are in a pickle – it’s beneficial for them to form groups to solve their problems and increase their chances of survival, but it’s also beneficial for people to be selfish and mooch off the goodwill of the group. Because of this, we have sophisticated machinery called ‘suspicion’ to ferret out any liars or cheaters furthering their own gains at the group’s expense. Of course, evolution is an arms race, so it’s looking for a method to overcome these mechanisms, for ways it can fulfill its base desires while still appearing to support the group.
It accomplished this by implementing willpower. Because deceiving others about what we believe would quickly be uncovered, we don’t actually deceive them – we’re designed so that we really, truly, in our heart of hearts believe that the group-supporting values – charity, nobility, selflessness – are the right things to do. However, we’re only given a limited means to accomplish them. We can leverage our willpower to overcome the occasional temptation, but when push comes to shove – when that huge pile of money or that incredible opportunity or that amazing piece of ass is placed in front of us, willpower tends to fail us. Willpower is generally needed for the values that don’t further our evolutionary best interests – you don’t need willpower to run from danger or to hunt an animal if you’re hungry or to mate with a member of the opposite sex. We have much better, much more successful mechanisms that accomplish those goals. Willpower is designed so that we really do want to support the group, but wind up failing at it and giving in to our baser desires – the ones that will actually help our genes get replicated.
Of course, the maladaption comes into play due to the fact that we use willpower to try to accomplish other, non-group related goals – mostly the long-term, abstract plans we create using high-level, conscious thinking. This does appear to be a design flaw (though since humans are notoriously bad at making long-term predictions, it may not be as crippling as it first appears.)
That is certainly interesting enough to subject to further reflection. Do we have any evolutionary psychologists in the audience?
What solution do people prefer to Pascal's Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn't mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we're wrong is found by drawing from a broader reference class, like "offers from a stranger".
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
What have I left out?
Tom_McCabe2 suggests generalizing EY's rebuttal of Pascal's Wager to Pascal's Mugging: it's not actually obvious that someone claiming they'll destroy 3^^^^3 people makes it more likely that 3^^^^3 people will die. The claim is arguably such weak evidence that it's still about equally likely that handing over the $5 will kill 3^^^^3 people, and if the two probabilities are sufficiently equal, they'll cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a) threatening me with killing 3^^^^3 people, (b) having the ability to do so, and (c) not going ahead and killing the people anyway after I give them the $5, is going to be way less than 1/3^^^^3, so the expected utility of giving the mugger the $5 is almost certainly less than the $5 of utility I get by hanging on to it. In which case there is no problem to fix. EY claims that the Solomonoff-calculated probability of someone having 'magic powers from outside the Matrix' 'isn't anywhere near as small as 3^^^^3 is large,' but to me that just suggests that the Solomonoff calculation is too credulous.
(Edited to try and improve paraphrase of Tom_McCabe2.)
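The arithmetic in the comment above can be sketched with a stand-in magnitude (a hypothetical 10**100 lives in place of 3^^^^3, which no computer could represent; the probability below is likewise an illustrative assumption, just "somewhat smaller than 1/LIVES"):

```python
from fractions import Fraction

LIVES = 10 ** 100                        # stand-in for 3^^^^3
# Suppose the joint probability of (a), (b) and (c) above is judged to be
# even smaller than 1/LIVES -- say a tenth of it:
p_mugger_honest = Fraction(1, 10 * LIVES)

eu_pay = p_mugger_honest * LIVES - 5     # expected lives saved, minus the $5
eu_keep = 0                              # decline, keep the $5

print(eu_pay < eu_keep)  # True: paying loses in expectation
```

With exact rational arithmetic there's no rounding to hide behind: as long as the assessed probability really is below 1/3^^^^3, keeping the $5 wins.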
This seems very similar to the "reference class fallback" approach to confidence set out in point 2, but I prefer to explicitly refer to reference classes when setting out that approach, otherwise the exactly even odds you apply to massively positive and massively negative utility here seem to come rather conveniently out of a hat...
The unbounded utility function (in some physical objects that can be tiled indefinitely) in Pascal's mugging gives infinite expected utility to all actions, and no reason to prefer handing over the money to any other action. People don't actually show the pattern of preferences implied by an unbounded utility function.
If we make the utility function a bounded function of happy lives (or other tilable physical structures) with a high bound, other possibilities will offer high expected utility. The Mugger is not the most credible way to get huge rewards (investing in our civilization on the chance that physics allows unlimited computation beats the Mugger). This will be the case no matter how huge we make the (finite) bound.
Bounding the utility function definitely solves the problem, but there are a couple of problems. One is the principle that the utility function is not up for grabs, the other is that a bounded utility function has some rather nasty consequences of the "leave one baby on the track" kind.
I don't buy this. Many people have inconsistent intuitions regarding aggregation, as with population ethics. Someone with such inconsistent preferences doesn't have a utility function to preserve.
Also note that a bounded utility function can allot some of the potential utility under the bound to producing an infinite amount of stuff, and that as a matter of psychological fact the human emotional response to stimuli can't scale indefinitely with bigger numbers.
And, of course, allowing unbounded growth of utility with some tilable physical process means that process can dominate the utility of any non-aggregative goods, e.g. the existence of at least some instantiations of art or knowledge, or overall properties of the world like ratios of very good to lives just barely worth living/creating (although you might claim that the value of the last scales with population size, many wouldn't characterize it that way).
Bounded utility functions seem to come much closer to letting you represent actual human concerns, or to represent more of them, in my view.
Eliezer's original article bases its argument on the use of Solomonoff induction. He even suggests up front what the problem with it is, although the comments don't make anything of it: SI is based solely on program length and ignores computational resources. The optimality theorems around SI depend on the same assumption. Therefore I suggest:
4. Pascal's Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how much work it does, should get its priors from instead would require more thought than a few minutes on a lunchtime break.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don't bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
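As a sanity check on the odds, and to see the jackpot sizes the question implies (a sketch assuming, per the setup, that the jackpot is the only prize and is not shared):

```python
from math import comb

# Euromillions-style jackpot odds: pick 5 of 50 numbers and 2 of 9 stars.
odds = comb(50, 5) * comb(9, 2)
print(odds)  # 76275360, matching the 1-in-76,275,360 figure above

# Break-even jackpot for a €1 ticket: expected value = jackpot / odds.
ticket_price = 1.00
breakeven_jackpot = ticket_price * odds   # ≈ €76.3 million
# Jackpot needed for the €1.05 expected win in the scenario:
jackpot_for_ev = 1.05 * odds              # ≈ €80.1 million
```

So the scenario requires a rolled-over jackpot north of €80 million, which does happen occasionally in practice.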
The traditional answer is to follow the Kelly criterion, is it not? That would imply

f* = (bp - q) / b

where b is the net odds paid on a win, p is the probability of winning, and q = 1 - p. This implies you should buy n tickets such that (€1)·n = W·f*, where W is your initial wealth.
Edit: Thanks, JoshuaZ, for pointing out that the Kelly criterion might not be the applicable one in a given situation.
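Plugging the lottery numbers into the standard Kelly fraction f* = (bp - q)/b shows how tiny the recommended stake is (a sketch; assumes the jackpot is the only prize and the €1.05 expected value from the setup above):

```python
odds = 76_275_360          # 1-in-odds chance of the jackpot
p = 1 / odds
q = 1 - p
ev = 1.05                  # expected payout per €1 ticket, from the setup
b = ev / p - 1             # net odds: the jackpot pays b-to-1 on a €1 stake
f_star = (b * p - q) / b   # Kelly fraction of wealth to stake

print(f_star)              # ~6.2e-10 of your wealth
```

Note that b·p - q collapses to ev - 1 = 0.05, so f* ≈ 0.05 / jackpot: even on a billion-euro bankroll, Kelly says to stake well under €1, i.e. buy at most a single ticket.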
OK, I have a question! Suppose I hold a risky asset that costs me c at time t, and whose value at time t is predicted to be k * (1 + r), with standard deviation s. How can I calculate the length of time that I will have to hold the asset in order to rationally expect the asset to be worth, say, 2c with probability p?
I am not doing a finance class or anything; I am genuinely curious.
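There's no unique answer without a distributional model. One common sketch is to read the setup as geometric Brownian motion with per-period expected growth rate r and volatility s (the names mu and sigma below are my stand-ins for those): then ln(V_t) is normal and the required holding time can be found by bisection.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_reaches(t, v0, target, mu, sigma):
    """P(V_t >= target) under geometric Brownian motion:
    ln V_t ~ Normal(ln v0 + (mu - sigma**2/2) * t, sigma**2 * t)."""
    if t <= 0:
        return 0.0
    mean = math.log(v0) + (mu - 0.5 * sigma ** 2) * t
    sd = sigma * math.sqrt(t)
    return 1.0 - norm_cdf((math.log(target) - mean) / sd)

def holding_time(v0, target, mu, sigma, p, t_hi=1000.0):
    """Smallest t with P(V_t >= target) >= p, found by bisection.
    Assumes such a t exists below t_hi (for p >= 0.5 this needs
    positive drift, mu > sigma**2 / 2)."""
    lo, hi = 1e-9, t_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prob_reaches(mid, v0, target, mu, sigma) >= p:
            hi = mid
        else:
            lo = mid
    return hi

# E.g. doubling 100 -> 200 with 10% drift and 20% volatility:
print(holding_time(100.0, 200.0, 0.10, 0.20, 0.5))
```

For p = 0.5 this recovers the median doubling time ln 2 / (mu - sigma²/2), about 8.66 periods with the numbers above; for p closer to 1 the required holding time grows quickly.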
Does countersignaling actually happen? Give me examples.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
That article makes it sound like "countersignaling" is forgoing a mandated signal - like showing up at a formal-dress occasion in street clothes.
Alicorn made a post about the tactics of countersignaling a while back.
I said "standard" because game theory doesn't talk about mandates, but that's pretty much what I said, isn't it? If you disagree with that usage, what do you think is right?
Incidentally, in von Neumann's model of poker, you should raise when you have a good hand or a poor hand, and check when you have a mediocre hand, which looks kind of like countersignaling. Of course, the information transference that yields the name "signal" is rather different. Also, I'm not interested in applications of game theory to hermetically sealed games.
I guess I don't understand your question, then - countersignaling seems like a perfectly ordinary proper subset of signaling.
Yes, countersignaling is signaling. The question is about practice, not theory. Does countersignaling actually happen?
I play randomly for the first several rounds, so as to destroy the entanglement between my bets, my face, and my hand.
My recent comment on Reddit reminded me of WrongTomorrow.com - a site that was mentioned briefly here a while ago, but which I haven't seen much since.
Try it out, guys! LongBets and PredictionBook are good, but they're their own niche; LongBets won't help you with pundits who don't use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Am I correct in reading that LongBets charges a $50 fee for publishing a prediction, and that predictions have to be a minimum of 2 years in the future? That's a bit harsh. But these sites are pretty interesting, and they could be useful too. You could judge the accuracy of different users, including how accurate they are at long-term vs. short-term predictions, as well as how accurate they are in different categories (or just how accurate they are on average, if you want to keep it simple). Then you could create a fairly decent picture of the future, albeit I expect many of the predictions will contradict each other. This is kind of what they're already doing, obviously, but they could still take it a step further.
Less Wrong Book Club and Study Group
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I'm willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our intention is to form a self-study group composed of peers, working with the assistance of a facilitator - but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.
Our first text will be E. T. Jaynes's Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.
We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spend a few hours per week reading through the section assigned or doing the exercises. A few days from now the first section will be posted.
A question about Bayesian reasoning:
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm over complicating?
You might be interested in this recent discussion, if you haven't seen it already:
http://lesswrong.com/lw/2ax/open_thread_june_2010/23fa
I think the difference is that one event is a statement about the present which is either presently true or not, and the other is a prediction. So you could illustrate the difference by using the following pairs: P(Mom on phone now) vs. P(Mom on phone tomorrow at 12:00am). In the dice case P(die just rolled but not yet examined is 1) vs. P(die I will roll will come out 1).
I do agree with Oscar though, the maths should be the same.
The cases are different in the way that you describe, but the maths of the probability is the same in each case. If you have an unseen die under a cup, and a die that you are about to roll, then one is already determined and the other isn't, but you'd bet at the same odds for each one to come up a six.
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don't have perfect knowledge of the universe, so when I ask, "Is your mum on the phone?" you don't have the guaranteed correct answer ready to go. You don't know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying "I have absolutely no idea in the slightest", you are able to say something more useful: "It's possible, but unlikely." Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes' Law.
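The die-under-a-cup point, that you'd bet at the same odds whether the die has already been rolled or not, can be checked with a quick simulation (a sketch: from the bettor's state of knowledge the two cases are interchangeable):

```python
import random

random.seed(0)
N = 100_000

# A die already rolled but hidden under a cup...
hidden = [random.randint(1, 6) for _ in range(N)]
# ...versus a die about to be rolled.
future = [random.randint(1, 6) for _ in range(N)]

p_hidden = sum(d == 6 for d in hidden) / N
p_future = sum(d == 6 for d in future) / N
print(p_hidden, p_future)  # both close to 1/6
```

Of course the simulation can't distinguish the cases at all, which is rather the point: the maths, and hence the fair betting odds, are identical.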
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow)= 1/6. Or conversely, you make it P(I rolled a one on the fair die that is now beneath this cup) =1/6
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
I've been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I'm studying at the moment. It would probably make sense to expand it to other books including non-math books - though the set of active books should remain small.
Two things have been holding me back - for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Well those are actually exactly the two books I had in mind (though I think we should probably just start with one of them).
Agreed. Two options:
1. A new top-level post for every chapter (or perhaps every two chapters, whatever division is convenient). This was a little annoying when it was one person covering every chapter in Dennett's Consciousness Explained, but if a decent number of people were participating in the book club (and if each new post was put up by the facilitator, explaining hard-to-understand concepts) they'd probably justify themselves.
2. We start a dedicated wordpress or blogspot blog and give the facilitators posting powers.
I wouldn't at all mind posting to start discussion on some sections but I'm not the best person to be explaining the math if it gets confusing-- if that was part of your expectation of facilitation.
I was thinking a reading group for Jaynes would have a better chance of success than Pearl-- the issues are more general, the math looks easier and the entire thing is online. But it sounds like you've looked at them more than I have, what are your thoughts? I guess what really matters is what people are interested in.
For those interested the Jaynes book can be found here and much of Pearl's book can be found here.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don't want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
There's a risk that any amount of thinking about infrastructure could kill off what energy there is, and since there appears to be some energy at present, I would rather favor having the discussion about the book club in the book club thread. :)
IOW we can kick off the initiative locally and let it find a new venue if and when that becomes necessary. There also seems to be some sort of provisional consensus that it's not quite time yet to fragment the LW readership : the LW subreddit doesn't seem to have panned out.
It seems to me that Jaynes is definitely topical for LW, I wouldn't worry about discussions among people studying it becoming annoying to the rest of the community. There are many, many gems pertaining to rationality in each of the chapters I've read so far.
OpenPCR: DNA amplification for anyone
http://www.thinkgene.com/openpcr-dna-amplification-for-anyone/
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all babyeaters, including the living babies and the ones being digested, it would end up in a place that adult babyeaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
Thoughts?
CEV will be to maintain existing order.
Why? There must be very strong arguments for BEs to stop doing the Right Thing. And there's only one source of objections - children. And their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: OK, OK. CEV will be to make BEs' morals change and allow them not to eat children. So, FAI will undergo controlled shutdown. Objections, please?
EDIT: Here's yet another argument.
Guidelines of FAI as of may 2004.
BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the future of BEkind, and BE's nature."
BEs never considered that child eating might be bad. And it is good for them to kill anyone who thinks otherwise. There's no trend in their morality that can be encapsulated.
If they stop being BE they will mourn their wrong doings to the death.
Every single suggestion the FAI makes along the lines of "Let's suppose that you are a non-BE" will cause it to be destroyed.
Help BEs every time, except for the ceremony of BEing.
How will this take the FAI to the point where every conscious being must live?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: "What rules would you want to prevail if you didn't know in advance who you would turn out to be?"
Where CEV as I understand it adds more information - assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be - the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even - conceivably - how many of you there are.
To this bunch of undifferentiated people you'd put the question, "All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands."
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kin has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it's hard to check intuitions.
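The vote behind the veil reduces to an expected-utility comparison. A minimal sketch with hypothetical utilities (the lexical-precedence claim corresponds to the death term dwarfing the glory term):

```python
# Hypothetical utilities for one mind behind the veil.
U_DEATH = -1000.0   # dying horribly shortly after birth
U_GLORY = 10.0      # partaking in the babyeating cultural tradition
U_PLAIN = 0.0       # an ordinary life without the tradition

keep_tradition = 0.99 * U_DEATH + 0.01 * U_GLORY
drop_tradition = U_PLAIN

print(keep_tradition < drop_tradition)  # True
```

The tradition only wins the vote if U_GLORY exceeds 99 times the magnitude of U_DEATH, which lexical precedence of not-dying rules out by construction.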
Some clips by Robert M. Price on the dark-side epistemology of the history done by Christian apologists; Price describes himself as a Christian Atheist.
Not sure how worthwhile Price is to listen to in general though.
Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview at the commonsenseatheism podcast here, which also covers his path to becoming a Christian atheist.
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
1. Is it okay to cheat on your spouse as long as (s)he never knows?
2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
3. If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
5. If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying or not being lied to might well be a terminal value, why not? You, the one who lies or doesn't lie, are part of the world. A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by on the net making the outcome even worse for other reasons, it shouldn't be done (and some of your examples may qualify for that).
I can't believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
What does this mean? Consequentialist values are about the world, not about observations (but your words don't seem to fit to disagreement with this position, thus the 'what does this mean?'). Consequentialist notion of values allows a third party to act for your benefit, in which case you don't need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don't need to know about these options in order to benefit.
In my opinion, this is a lawyer's attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule "never lie" as a consequentialist "I assign an extremely high disutility to situations where I lie". In the same way you can put consequentialist preferences as a deontologist rule "in any case, do whatever maximises your utility". But in doing that, the point of the distinction between the two ethical systems is lost.
If so, maybe we want that.
Less directly, a person may value a world where beliefs were more accurate - in such a world, both lying and bullshit would be negatives.
I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.
It is a common failure of moral analysis (invented by deontologists, undoubtedly) that it assumes an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation 'similar' to yours. Then the spouses can 'put themselves in your place' and think "Gee, there's about a 10% chance that I'd now be cheating on myself. I wonder if this means my husband/wife is cheating on me?"
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn't correlate with the cheating, the net effect is to drive utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very "UDT-style" way of thinking -- but then UDT does remind me of Kant's categorical imperative, and of course Kant is the arch-deontologist.)
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner's Dilemma to avoid "driving net utility down". I'm pretty sure you made a mistake somewhere.
It seems like it would be more aptly defined as "the belief that making the world a better place constitutes doing the right thing". Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don't care whether it does.
It's okay to deceive people if they're not actually harmed and you're sure they'll never find out. In practice, it's often too risky.
1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child's welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let's assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you're using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It's more similar to the Prisoner's Dilemma, if you ask me.
1-3: It's an alief, not a belief, because I know that lying to my spouse doesn't really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a special case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Or what Nesov said below.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations - like other spherical cows, this causes a lot of problematic answers, like two-boxing.
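For concreteness, here is a minimal sketch of how a tally under assumptions (a)-(e) could be set up. All the numeric magnitudes and the tally structure are my own illustrative assumptions, not anything derived from data:

```python
# Naive consequentialist tally under assumptions (a)-(e).
# All numeric values are arbitrary placeholders chosen only to encode
# "small" vs. "moderate"; swap in your own weights as desired.
CHEAT_BENEFIT = 1.0         # (a) small benefit to a successful secret cheater
LIE_COST = -1.0             # (b) small cost to a successful secret liar
REVELATION_COST = -5.0      # (d) moderate cost of an undermining revelation, per party
TRANSMIT = 0.5              # (e) fraction of a partner's eudaemonic change you share

def net_utility(cheats, lies, discovered):
    """Net eudaemonic change for (cheater, partner) relative to doing nothing.

    Direct effects are tallied first; then, per assumption (e), each party
    absorbs a fraction of the other's change.
    """
    cheater = 0.0
    partner = 0.0
    if cheats:
        cheater += CHEAT_BENEFIT
    if lies:
        cheater += LIE_COST
    if discovered:
        cheater += REVELATION_COST
        partner += REVELATION_COST
    return (cheater + TRANSMIT * partner, partner + TRANSMIT * cheater)

# Compare: cheat and lie successfully vs. cheat and confess.
print(net_utility(cheats=True, lies=True, discovered=False))   # (0.0, 0.0)
print(net_utility(cheats=True, lies=False, discovered=True))   # (-6.5, -7.0)
```

Under these particular weights, successful secret cheating plus lying nets out to roughly zero for the cheater, while confessing is strictly worse for both parties; different weights for (a)-(d) could flip that comparison.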
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's heart, not for some material benefit. So if she knew the husband didn't love her, she'd tell the truth. The fact that you automatically parsed the situation differently is... disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don't understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can't wait till other people reply to the questionnaire.
Supposedly (actual study) milk reduces catechin levels in the bloodstream.
Other research says: "does not!"
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers' home country is/isn't primarily taken with milk? I'm always tempted to imagine most of the scientists having some ulterior motive or prior belief they're looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: "we did this experiment hoping to show X, but instead, we found not X". Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
Less Wrong Rationality Quotes since April 2009, sorted by points.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code. 80 lines of python. It produces raw text output; links and formatting are lost. It would be quite trivial to do nice and spiffy html output.
EDIT2: I can do html output now. It is nice and spiffy, but it has some CSS bug. After the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the html version already checked out the txt version. We will soon find out which explanation is the correct one.
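Since the actual 80-line script isn't reproduced here, the core step it describes (sort the scraped quote comments by points and emit simple HTML) might look roughly like this. The `quotes` data structure is a hypothetical stand-in for whatever the scraping stage produces:

```python
# Sketch of the sort-and-render step of such a script. The scraping itself
# (fetching LW quote threads and parsing comments) is omitted; `quotes`
# stands in for its output as (points, author, text) tuples.
import html

quotes = [  # hypothetical scraped data
    (42, "alice", "Quote one."),
    (97, "bob", "Quote two."),
    (15, "carol", "Quote three."),
]

def render(quotes):
    """Return an HTML ordered list of quotes, highest points first."""
    items = []
    for points, author, text in sorted(quotes, key=lambda q: q[0], reverse=True):
        items.append(
            "<li><blockquote>%s</blockquote>"
            "<p>%d points, posted by %s</p></li>"
            % (html.escape(text), points, html.escape(author))
        )
    return "<ol>\n%s\n</ol>" % "\n".join(items)

print(render(quotes))
```

As an aside on the CSS bug: a layout that "falls apart" after the Nth item is a classic symptom of an unclosed tag or an unbalanced float in one of the emitted items, so escaping the quote text (as `html.escape` does above) and keeping every `<li>` self-contained tends to prevent that class of problem.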
Not having to side scroll would be spiffy.
If you're using Firefox, there's an add-on for that.
Or, if you're lazy like me, you can select 'Page Source' under the View menu and then select the 'Wrap Long Lines' option.
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don't know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren't actually quotes. 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)
On the wiki, this text will be dead, because nobody will be adding new items there by hand.
I agreed with you; I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A "Rationality Quotes, Best of 2010" edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the year and so on. It'd also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.
Very cool idea.
It would be nice if links were preserved.
Question: what's your experience with stuff that seems new-agey at first look, like yoga, meditation, and so on? Anything worth trying?
Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany, in my case). I will try and hopefully enjoy that soon. Sadly, those places are run by new-age folks who offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory-empty space.
I've had great results from modest (2-3 hrs/wk) investments in hatha yoga, over and above what I get from standard Greco-Roman "calisthenics."
Besides the flexibility, breathing, and posture benefits, I find that the idea of 'chakras' is vaguely useful for focusing my conscious attention on involuntary muscle systems. I would be extremely surprised if chakras "cleaved reality at the joints" in any straightforward sense, but the idea of chakras helps me pay attention to my digestion, heart rate, bladder, etc. by making mentally uninteresting but nevertheless important bodily functions more interesting.
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word "chi" does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent's balance without physical contact.
Each of these skills fits more naturally under a different category, but if you want to learn them all, the most efficient way is to study a Chinese internal martial art or something similar.
Interesting. It seems that learning this art (1) gives you a power and (2) makes you vulnerable to it.
There may be a correlation between studying martial arts and vulnerability to techniques which can be modeled well by "chi." But I have tried the striking sequences successfully on capoeiristas and catch wrestlers, and the light but effective pushes on my non-martially-trained brother after showing him Wu-style pushing hands for a minute or two.
That suggests an experiment. Anyone see any flaws in the following?
The problem is that a positive result would only show that a specific sequence of attacks worked well. It wouldn't show that "chi" or other unusual models were required to explain it; there could be perfectly normal explanations for why a series of attacks was effective.
That's why I suggested writing down both techniques which should work according to the model and techniques which should not work according to the model.
I like the idea of scientifically testing internal arts; and your idea is certainly more rigorous than TV series attempting to approach martial arts "scientifically" like Mind, Body, and Kickass Moves. Unfortunately, the only one of those I can think of which is both (1) explainable in words and pictures to a precise enough degree that "chi"-type theories could constrain expectations, and (2) has an unambiguous result when done correctly which varies qualitatively from an incorrect attempt is the knockout series of hits, which raises both ethical and practical concerns.
I would classify the other two as tacit knowledge--they require a little bit of instruction on the counterintuitive parts; then a lot of practice which I can't think of a good way to fake.
Note that I would be completely astonished if there weren't a perfectly normal explanation for any of these feats; but deriving methods for them from first principles of biomechanics and cognitive science would take a lot longer than studying with a good teacher who works with the "chi" model.
I used to go to a Tai Chi class (I stopped only because I decided I'd taken it as far as I was going to), and the instructor, who never talked about "chi" as anything more than a metaphor or a useful visualisation, said this about the internal arts:
In the old days (that would be pre-revolutionary China) you wouldn't practice just Tai Chi, or begin with Tai Chi. Tai Chi was the equivalent of postgraduate study in the martial arts. You would start out by learning two or three "hard", "external" styles. Then, having reached black belt in those, and having developed your power, speed, strength, and fighting spirit, you would study the internal arts, which would teach you the proper alignments and structures, the meaning of the various movements and forms. In the class there were two students who did Gojuryu karate, a 3rd dan and a 5th dan, and they both said that their karate had improved no end since taking up Tai Chi.
Which is not to say that Tai Chi isn't useful on its own, it is, but there is that wider context for getting the maximum use out of it.
This sounds magical at first reading, but is actually not that tricky. It's just psychology and balance. If you set up a pattern of predictable attacks, then feint in the right direction while your opponent is jumping at you off-balance, you can surprise him enough to make him fall as he attempts to ward off your feint.
I've done yoga every week for the last month or two. It's pleasant. Other than paying attention to how I'm holding my body vs. the instruction, I mostly stop thinking for an hour (as we're encouraged to do), which is nice.
I can't say I notice any significant lasting effects yet. I'm slightly more flexible.
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don't believe any of that, do you? Neither do I, except for the first eight words and the last six. I've been doing these exercises since the beginning of 2009, since being turned on to them by Steven Barnes' blog and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it's these exercises specifically or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven't taken the trouble to determine by varying them.
More here and here. Nancy Lebovitz also mentioned them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I'd be interested to hear from anyone here who meditates and gets more from it than I do.
My spreadsheet about effects of the Tibetans
Hard to say - even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There's no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there's some for float tanks. In fact, I'm hard pressed to think of any New Agey stuff where there isn't enough background to judge it on its own merits.
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don't offer asymmetrical terms, or impose difficult requirements, such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that's at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don't give consent to be eaten.
What if I could predict, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? With 99.99% accuracy?
It is not the adults' preference that matters, but the adults' best model of the children's preferences. In this case there is an obvious reason for those preferences to differ - namely, the adult knows that he won't be one of those eaten.
In extrapolating a child's preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can't extrapolate from a child whose fate is undecided to an adult that believes it won't be eaten; that change alters its preferences.
Do you believe that all children's preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count - that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
And why do you think that such reflection would make the babies reverse the baby-eating policies?
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We've even evolved to convince ourselves that we actually care about morality and not self-interest. That's likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
I understand what you mean now.
Ok, so first of all, there's a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn't do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don't want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don't want the kids to be eaten, and we don't want the adults to eat. We don't want to balance any of these interests, because they go against our values. Just like you wouldn't balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is "Well, of course you don't want the punishments. That's the point. So cooperate, or you'll get punished. It's not fair to exempt yourself from the rules." And my reaction to position (2) is "We don't want any baby-eating, so we'll save you from being eaten, but we won't let you eat any other babies. It's not fair to exempt yourself from the rules." This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being like an adult did), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn't the baby-eaters' universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to "free ride" off the sacrifices that the system requires of everyone?
Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with "the abstract idea of punishment" into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of "eating children" are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless "intelligent" and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been "illegitimately modified" or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary "credentials" for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren't currently willing to provide such information, are there preconditions you could propose before you would do so?
I ... understood about a tenth of that.
Conversations with you are difficult because I don't know how much I can assume that you'll have (or pretend to have) a human-like motivational psychology... and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I'm not sure about really fundamental aspects of your "inner life" like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about "statistical regularities of observed English" than "compiling English into a data structure that supports generic inference". By the end of such posts I'm generally asking a lot of questions as I grope for common ground, but you generally don't answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we're both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
Oh, anyone can email me at clippy.paperclips@gmail.com.