Comment author:Alexandros
07 June 2010 09:51:42AM
3 points
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are both believed by humans and true. (Of course, there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Comment author:Emile
07 June 2010 12:46:05PM
3 points
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
Cute quip, but I doubt it. Find me a Ph.D. who will argue that the sky is bright orange, that the English language doesn't exist, or that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
All generalisations are bounded, even when the bounds are not expressed. In the context of his talk, Ben Goldacre was talking about "doctors" being quoted as supporting various pieces of bad medical science.
Comment author:MartinB
07 June 2010 02:05:37PM
1 point
Many medical doctors around here (Germany) offer homeopathy in addition to their medical practice. Now it might be that they are responding to market demand and sneaking some medical science in between, or that they actually take it seriously.
Comment author:MartinB
07 June 2010 02:52:06PM
4 points
I recently found out why doctors cultivate a certain amount of professional arrogance when dealing with patients:
Most patients don't understand what's behind their specific disease, and usually do not care. So if doctors were open to argument, or stated their doubts more openly, the patient might lose trust and not do what he is told to do.
Instilling an absolute belief in doctors' powers might be very helpful for a large part of the population.
A lot of my own frustration with doctors can be attributed to me being a non-standard patient who reads too much.
Comment author:MartinB
07 June 2010 04:21:47PM
0 points
I still assume that doctors actually want to help people. (Despite reading the checklist book, and other stuff).
So if I have the choice between world a), where doctors also offer homeopathy, and world b), where other people do it while doctors stay true to science, then I would prefer a), because at least the people go to a somewhat competent person.
Comment author:DanArmak
07 June 2010 04:58:17PM
0 points
I still assume that doctors actually want to help people
Homeopathy is at best a placebo. It's rare that there's no better medical way to help someone. Your assumption is counter to the facts.
Certainly doctors want to help people - all else being equal. But if they practice homeopathy extensively, then they are prioritizing other things over helping people.
If market conditions (i.e. the patients' opinions and desires) are such that they will not accept scientific medicine and will use only homeopathy anyway, then I suggest the best way to help people is for all doctors to publicly denounce homeopathy and thus convince at least some people to use better-than-placebo treatments instead.
Comment author:MartinB
07 June 2010 05:05:54PM
2 points
You might implicitly assume that people make a conscious choice to go the unscientific route. That is not the case.
For a layperson there is no perceivable difference between a doctor and a homeopath. (Well, maybe there is, but let's exaggerate that here.)
In my experience the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast food place.
If I were a doctor, then offering homeopathy, so that people at least come to me, would make sense both money-wise and for the effect that they are already at a doctor's place: trivial stuff gets treated with placebos, while actually dangerous conditions get checked out by a competent person.
It's a case of corrupting your integrity to some degree to get the message heard.
I considered not going to doctors that offer homeopathy, but then decided against that because of this reasoning.
Comment author:thomblake
07 June 2010 06:09:48PM
4 points
I considered not going to doctors that offer homeopathy, but then decided against that because of this reasoning.
You could probably ask the doctor why they offer homeopathy, and base your decision on the sort of answer you get. "Because it's an effective cure..." is straight out.
Comment author:DanArmak
07 June 2010 06:39:47PM
3 points
tl;dr - if doctors don't denounce homeopaths, people will start going to "real" homeopaths and other alt-medicine people, and there is no practical limit to the lies and harm done by real homeopaths.
For a layperson there is no perceivable difference between a doctor and a homeopath.
That is so because doctors also offer homeopathy. If almost all doctors clearly denounced homeopathy, fewer people would choose to go to homeopaths, and these people would benefit from better treatment.
In my experience the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast food place.
This is a problem in its own right that should be solved by giving doctors incentives to listen to patients more. However, do you think that because doctors don't listen enough, homeopaths produce better treatment (i.e. better medical outcomes)?
they are already at a doctor's place: trivial stuff gets treated with placebos, while actually dangerous conditions get checked out by a competent person
Do you have evidence that this is the result produced?
What if the reverse happens? Because the doctors endorse homeopathy, patients start going to homeopaths instead of doctors. Homeopaths are better at selling themselves, because unlike doctors they can lie ("homeopathy is not a placebo and will cure your disease!"). They are also better at listening, can create a nicer (non-clinical) reception atmosphere, they can get more word-of-mouth networking benefits, etc.
Patients can't normally distinguish "trivial stuff" from dangerous conditions until it's too late - even doctors sometimes get this wrong. The next logical step is for people to let homeopaths treat all the trivial stuff, and go to ER when something really bad happens.
Personal story: my mother is a doctor (geriatrician). When I was a teenager I had seasonal allergies and she insisted on sending me for weekly acupuncture. During the hour-long sessions I had to listen to the ramblings of the acupuncturist. He told me (completely seriously) that, although he personally didn't have the skill, the people who taught him acupuncture in China could use it to cure my type 1 diabetes. He also once told me about someone who used various "alternative medicine" to eat only vine leaves for a year before dying.
When the acupuncture didn't help me, my mother said that was my own fault because "I deliberately disbelieved the power of acupuncture and so the placebo effect couldn't work on me".
Comment author:Yvain
07 June 2010 06:54:04PM
6 points
Homeopathy is at best a placebo. It's rare that there's no better medical way to help someone.
I disagree - at least with the part about "it's rare that there's no better medical way to help people". It's depressingly common that there's no better medical way to help people. Things like back pain, tiredness, and muscle aches - the commonest things for which people see doctors - can sometimes be traced to nice curable medical reasons, but very often as far as anyone knows they're just there.
Robin Hanson has a theory - and I kind of agree with him - that homeopathy fills a useful niche. Placebos are pretty effective at curing these random (and sometimes imagined) aches and pains. But most places consider it illegal or unethical for doctors to directly prescribe a placebo. Right now a lot of doctors will just prescribe aspirin or paracetamol or something, but these are far from totally harmless and there are a lot of things you can't trick patients into thinking aspirin is a cure for. So what would be really nice, is if there was a way doctors could give someone a totally harmless and very inexpensive substance like water and make the patient think it was going to cure everything and the kitchen sink, without directly lying or exposing themselves to malpractice allegations.
Where this stands or falls is whether or not it turns patients off real medicine and gets them to start wanting homeopathy for medically known, treatable diseases. Hopefully it won't - there aren't a lot of people who want homeopathic cancer treatment - but that would be the big risk.
Comment author:Vladimir_M
07 June 2010 07:23:23PM
3 points
From what I've heard, in Germany and other places where homeopathy enjoys high status and professional recognition, doctors sometimes use it as a very convenient way to deal with hypochondriacs who pester them. Sounds to me like a win-win solution.
Comment author:Vladimir_M
07 June 2010 07:09:23PM
4 points
Emile:
Find me a Ph.D. who will argue that the sky is bright orange, that the English language doesn't exist, or that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
These claims would be beyond the border of lunacy for any person, but still, I'm sure you'll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard's point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism: http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
Comment author:Clippy
07 June 2010 07:20:19PM
0 points
Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism:
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
Comment author:JoshuaZ
07 June 2010 07:32:14PM
3 points
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they alternately claim that relativity allows them to use whatever reference frame they choose, and at other points claim that the evidence only makes sense for geocentrism.
Comment author:JoshuaZ
08 June 2010 01:59:24AM
0 points
I'm not sure it is completely stupid. Consider the argument in the following fashion:
1) We think your physics is wrong and geocentrism is correct.
2) Even if we're wrong about 1, your physics still supports regarding geocentrism as being just as valid as heliocentrism.
I don't think that their argument approaches this level of coherence.
Comment author:Jack
08 June 2010 03:04:13AM
1 point
He had a teaching position at a reputable-looking college, and I figure they would have checked.
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he's young enough to make me think that he may have been forced into retirement.
Comment author:RomanDavis
07 June 2010 12:20:20PM
0 points
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn't tell you whether it's true or not.
If a guy thinks that he can hear Hillary Clinton speaking from the feelings in his teeth, telling him to murder his cellmate, do you believe what he says? Status gets mucked up in the calculation, but with strangers it teeters precariously close to zero.
I really like kids, but the fact that millions of them passionately believe in Santa Claus does not change my degree of subjective belief one iota.
Comment author:Houshalter
07 June 2010 08:48:21PM
0 points
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn't tell you whether it's true or not.
But people only believe things that make sense to them. When it comes to controversial issues, yes, you'll find that most people are divided on them. However, we elect people to lead us on the faith that the majority opinion is right, so even that isn't entirely true. And out of the vast space of possible ideas, most people who live in the same society will agree or disagree the same way on the majority of them, especially if they have the same background knowledge.
Comment author:Jack
08 June 2010 12:57:47AM
2 points
Well obviously propositions with extremely high complexity (and therefore very low priors) are going to remain low even when people believe them. But if someone says they believe they have 10 dollars on them or that the US Constitution was signed in September... the belief is enough to make those claims more likely than not.
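Jack's point is easy to make concrete in odds form. A minimal sketch, where the likelihood ratio of 10 for "someone sincerely asserts X" is an illustrative assumption, not a measured value:

```python
# Odds-form Bayes: posterior_odds = prior_odds * likelihood_ratio.
# Assume, purely for illustration, that a person's sincere belief in a
# claim is 10x more likely if the claim is true than if it is false.
def posterior(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1.0 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Mundane claim ("I have 10 dollars on me"): modest prior, testimony settles it.
print(posterior(0.3, 10))    # ~0.81: now more likely than not

# Very high-complexity claim: tiny prior, testimony barely moves it.
print(posterior(1e-9, 10))   # ~1e-8: still almost certainly false
```

The same modest likelihood ratio flips a mundane claim past 50% but leaves a low-complexity-prior claim essentially where it started, which is the asymmetry Jack describes.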
Comment author:RobinZ
07 June 2010 12:27:59PM
6 points
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
Usually fairly substantial - if someone presents me with two equally-unsupported claims X and Y and tells me that they believe X and not Y, I would give greater credence to X than to Y. Many times, however, that credence would not reach the level of ... well, credence, for various good reasons.
Comment author:MartinB
07 June 2010 01:28:15PM
1 point
I recently learned the hard way that one can easily be an idiot in one area while being very competent in another.
Religious scientists, programmers, etc.
Or let's say people who are highly competent in their area of occupation without looking into other things.
Comment author:MartinB
07 June 2010 01:24:50PM
3 points
Depends on the person and the idea.
I have some people whose recommendations I follow regardless, even if I estimate upfront that I will consider the idea wrong. There are different levels of wrongness, and it does not hurt to get good counterarguments.
It also depends on the real-life practicability of the idea. If it is an everyday thing, then common sense is a good starting prior. (Also, there is a time and place to use the ask-the-audience lifeline on Who Wants to Be a Millionaire.)
If a group of professionals agree on something related to their profession it is also a good start.
To systematize: if a group of people has a belief about something they have experience with, then that belief is worth looking at.
And then on further investigation it often turns out that there are systematic mistakes being made.
I was shocked to read in the book on checklists that not only do doctors often dislike them, but so do financial companies, even ones that can see how using checklists increases their monetary gains.
But finding flaws in a whole group does not imply that everything they say is wrong.
It is good to see a doctor, even if he is not using statistics right. He can refer you to a specialist and treat all the common stuff right away.
If you get a complicated disease you can often read up on it.
The obvious example for your question would be religion. It is widely believed, but probably wrong; yet I did not discard it right away, but spent years studying it until I decided there was nothing to it.
There is nothing wrong in examining the ideas other people have.
Comment author:Torben
07 June 2010 05:14:44PM
3 points
Agreed.
As the OP states, idea space is humongous. The fact alone that people comprehend something sufficiently to say anything about it at all means that this something is
a) noteworthy enough to be picked up by our evolutionarily derived faculties by even a bad rationalist
b) expressible by same faculties
c) not immediately, obviously wrong
To sum up, the fact that someone claims something is weak evidence that it's true, cf. Einstein's Arrogance. If this someone is Einstein, the evidence is not so weak.
Edit: just to clarify, I think this evidence is very weak, but it is evidence for the proposition nonetheless. Depending on the metric, by far most propositions must be "not even wrong", i.e. garbled, meaningless or absurd. The ratio of "true" to {"wrong" + "not even wrong"} seems to ineluctably be larger for propositions expressed by humans than for those not expressed, which is why someone uttering a proposition counts as evidence for it. People simply never claim that apples fall upwards, sideways, green, kjO30KJ&¤k etc.
Comment author:MartinB
07 June 2010 05:27:13PM
1 point
I forgot the major influence of my own prior knowledge. (Which I guess holds true for everyone.) That makes the cases where I had a fixed opinion, and managed to change it, all the more interesting.
If you have never dealt with an idea before, you go where common sense or the experts lead you. But if you already have good knowledge, then public opinion should do nothing to your view.
The public, or even experts (especially when outside their field), often enough state opinions without comprehending the idea. So it doesn't really mean too much.
Regarding Einstein: he made those statements before becoming super famous. I understand it as a case of signaling 'look over here!' And he is not particularly safe from errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.
Comment author:Torben
07 June 2010 06:47:07PM
1 point
Regarding Einstein: he made those statements before becoming super famous. I understand it as a case of signaling 'look over here!' And he is not particularly safe from errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.
I didn't intend to portray Einstein as bulletproof, but rather to highlight his reasoning, plus point to the value of even locating an idea in idea space. Obviously, creationism is wrong, but less wrong than a random string: it at least manages to identify a problem and to use cause and effect.
Comment author:AlanCrowe
07 June 2010 02:01:26PM
1 point
I've become increasingly disillusioned with people's capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square root of windspeed. If the wind is only blowing at half speed, you still get something like 70% output. You won't see people saying this directly, but the general attitude is that you only need backup for the occasional calm day when the wind doesn't blow at all.
In fact, output goes as the cube of windspeed. The power in the windstream is ½mv², where m, the mass passing your turbine per unit time, is itself proportional to the windspeed, giving a v³ dependence overall. If the wind is at half strength, you only get 1/8 output.
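The cube law is easy to check numerically. A minimal sketch using the standard formula P = ½ρAv³; the air density and rotor-area values are illustrative:

```python
# Wind power scales as the cube of windspeed: P = 0.5 * rho * A * v**3.
rho = 1.225   # kg/m^3, air density at sea level (illustrative)
A = 5000.0    # m^2, rotor swept area of a large turbine (illustrative)

def power(v):
    # Kinetic power in the windstream passing the rotor, in watts.
    return 0.5 * rho * A * v**3

full = power(12.0)   # output at a nominal rated windspeed
half = power(6.0)    # output at half that windspeed
print(half / full)   # 0.125: half the wind gives 1/8 the power
```

Note the ratio is independent of ρ and A, so the 1/8 figure does not depend on the illustrative constants.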
Well, that is physics. Of course people suck at physics. The trouble is, the more I look at people's capacity for abstract thought, the more problems I see. When people do a cost/benefit analysis, they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
When people do a cost/benefit analysis, they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits.
Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
Comment author:Jack
08 June 2010 01:05:43AM
1 point
I don't think belief has a consistent evidentiary strength, since it depends on the testifier's credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I'm carrying. I don't see any relation that could be described as a baseline, so the only answer is: context.
Comment author:billswift
07 June 2010 11:17:33AM
6 points
Regrets and Motivation
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world means it can never measure up to our imagined accomplishments; hence regrets, because we imagine that if we had done something else it could have measured up. The worst part of having regrets is the impact they have on our motivation.
somewhat expanded version of comment on OB a couple of months ago
Added: I didn't make the connection at first, but this is also Eliezer's point in this quote from The Super Happy People story, "It's bad enough comparing yourself to Isaac Newton without comparing yourself to Kimball Kinnison."
Comment author:xamdam
07 June 2010 04:35:27PM
3 points
I was talking to a friend yesterday and he mentioned a psychological study (I am trying to track down the source) finding that people tend to suffer MORE from failing to pursue certain opportunities than from failing after pursuing them. So even if you're right about the overestimation of pleasure, it might just be irrelevant.
Comment author:Unnamed
08 June 2010 03:09:34AM
4 points
Here is a review of that psychological research (pdf), and there are more studies linked here (the keyword to look for is "regret"). The paper I linked is:
Gilovich, T., & Medvec, V. H. (1995). The experience of regret: What, when, and why. Psychological Review, 102, 379-395.
This article reviews evidence indicating that there is a temporal pattern to the experience of regret. Actions, or errors of commission, generate more regret in the short term; but inactions, or errors of omission, produce more regret in the long run. The authors contend that this temporal pattern is multiply determined, and present a framework to organize the divergent causal mechanisms that are responsible for it. In particular, this article documents the importance of psychological processes that (a) decrease the pain of regrettable action over time, (b) bolster the pain of regrettable inaction over time, and (c) differentially affect the cognitive availability of these two types of regrets. Both the functional and cultural origins of how people think about regret are discussed.
Comment author:billswift
07 June 2010 11:27:38AM
4 points
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other problems that government and big businesses keep creating. Since the regulations and rules keep growing and creating more problems and rigidity along the way, eventually there will be a collapse, but anyone who gives any kind of timing for it is grabbing at the short end of the stick.
Anyone here have more suggestions as to reasons they have been wrong?
(originally posted on esr's blog 2010-05-09, revised and expanded since)
Comment author:soreff
07 June 2010 03:15:21PM
4 points
Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting that the impact on them would set a sizable lower bound on the size of the overall impact. I've never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...
Comment author:pjeby
07 June 2010 04:10:29PM
5 points
Small businesses basically did nothing in preparation [for Y2K], and they still had a fair amount of dependence on date-dependent programs
The smaller the business, the less likely they are to have their own software that's not simply a database or spreadsheet, managed in say, a Microsoft product. The smaller the business, the less likely that anything automated is relying on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely be fixed by the manufacturer. For example, most of the real-estate software produced by the company I worked for in the 80's and 90's was Y2K-ready since before 1985.]
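The date-calculation failure pjeby alludes to is simple to sketch. A minimal illustration of the classic two-digit-year bug and one common remediation (the "windowing" fix; the pivot value 50 is an arbitrary illustrative choice):

```python
# Classic Y2K bug: storing years as two digits makes elapsed-time
# calculations wrap around at the century boundary.
def years_elapsed_2digit(start_yy, end_yy):
    # Buggy: treats year "00" (2000) as earlier than "99" (1999).
    return end_yy - start_yy

def years_elapsed_windowed(start_yy, end_yy, pivot=50):
    # Common fix: expand two-digit years around a pivot so that
    # values >= pivot map to 19xx and values < pivot map to 20xx.
    def expand(yy):
        return 1900 + yy if yy >= pivot else 2000 + yy
    return expand(end_yy) - expand(start_yy)

print(years_elapsed_2digit(99, 0))    # -99: the bug
print(years_elapsed_windowed(99, 0))  # 1: correct
```

Software that only stored dates without arithmetic on them, like most small-business spreadsheets, largely avoided the failure, which is consistent with the mitigating factors described above.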
Comment author:billswift
07 June 2010 03:31:24PM
1 point
First, the "economic collapse" I referred to in the original post were actually at least 6 different predictions at different times.
As another example, though not quite a "collapse" scenario, consider predictions of the likelihood of nuclear war; there were three distinct periods when it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed but peripheral observers like Robert Heinlein considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare in the late 1970s to early 1980s, driven primarily by leftists (including the media) favoring disarmament, who promulgated the fear to try to get the US to reduce its stockpiles, and by conservatives (derided by the media as "survivalists" and nuts) who were afraid they would succeed.
Not sure if you're referring to the same literature, but I note a great divergence between peak oil advocates and singularitarians. This is a little weird, if you think of Aumann's Agreement theorem.
Both groups are highly populated with engineer types, highly interested in cognitive biases, group dynamics, habits of individuals and societies and neither are mainstream.
Both groups use extrapolation of curves from very real phenomena. In the case of the Kurzweilian singularitarians, it is computing power; in the case of the peak oil advocates, it is the Hubbert curve for resources, along with solid net-energy-based arguments about how civilization should decline.
The extreme among the peak oil advocates are collapsitarians who believe that people should drastically change their lifestyles if they want to survive. They are also not waiting for others to join them, and many are preparing to move to small towns, villages, etc. The Oil Drum, linked here, started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it's all doom all the time.
The extreme among the singularitarians are asked to make no such sacrifice, just to give enough money and support to make sure that Friendly AI is achieved first.
Both groups believe that business as usual cannot go on for too long, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will improve until a nonchalant superintelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, that civilization is not built robustly, and that it will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.
Comment author:ellx
07 June 2010 11:59:57AM
1 point
I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimum frequency. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it is time for a new one; but for the sake of the example, let's assume that Eliezer was wrong and that the current one or two threads per month is better than quarterly. Should Eliezer have recalibrated his confidence and never said it, because its chance of being right was too low? Or would lowering his confidence in his ideas be counterproductive? Is it optimal for people to have confidence in the ideas that they voice, even if it causes them to say some things which aren't right?
I suppose this is of importance to me because I think I might be better off if I lowered how judgemental I am of people who say things which are wrong, and also lowered how judgemental I am of the ideas I have, because I might be putting too much weight on people voicing ideas which are wrong.
Comment author:MartinB
07 June 2010 01:36:19PM
3 points
Being right on group effects is difficult.
Is there a consistent path for what LW wants to be?
a) rationalist site filled up with meta topics and examples
b) a) plus detailed treatments of some important topics
c) open to everything as long as reason is used
and so on.
I personally like and profit from the discussion of akrasia methods. But it might be detrimental to the main target of the site.
Also, I would very much like to see a canon develop of knowledge that LWers generally agree upon, including, but not limited to, the topics I currently care about myself.
Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful, and if you are filled up with weird ideas then you get kicked out, which might be bad for other goals you have.
It would be great to have a place for any idea to be examined for right and wrong.
Comment author:JoshuaZ
07 June 2010 07:07:20PM
3 points
What does Fallacyzilla have on its chest? It looks like it has "A -> B, ~B, therefore ~A". But that is valid logic (modus tollens). Am I misreading it, or did you mean to put "A -> B, ~A, therefore ~B"? That would be actually wrong (denying the antecedent).
Comment author:Yvain
07 June 2010 07:13:23PM
*
8 points
[-]
I noticed that two seconds after I put it up and it's now corrected...er...incorrected. (Today I learned - my brain has that same annoying auto-correct function as Microsoft Word)
Comment author:Gavin
07 June 2010 09:26:14PM
0 points
[-]
I think the idea is that CEV lets us "grow up more together" and figure that out later.
I have only recently started looking into CEV so I'm not sure whether I a) think it's a workable theory and b)think it's a good solution, but I like the way it puts off important questions.
It's impossible to predict what we will want if age, disease, violence, and poverty become irrelevant (or at least optional).
Comment author:ata
07 June 2010 09:28:53PM
1 point
[-]
I think the expectation is that, if all humans had the same knowledge and were better at thinking (and were more the people we'd like to be, etc.), then there would be a much higher degree of coherence than we might expect, but not necessarily that everyone would ultimately have the same utility function.
Comment author:JoshuaZ
07 June 2010 01:41:25PM
*
6 points
[-]
That's well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus for example, Shor's algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can't lead to anything like that in the story. In particular, under the standard description of quantum computing, BQP, the class of problems reliably solvable on a quantum computer in polynomial time (that is, the time required to solve is bounded above by a polynomial function of the length of the input), is a subset of PSPACE, the class of problems which can be solved on a classical computer using memory bounded by a polynomial function of the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there's a fundamentally random aspect to the laws of physics. Thus, we can't simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.
Even if everything in the story were correct, I'm not at all convinced that things would settle down into a stable sequence as they do here. If your universe is infinite then the possible number of worlds is infinite, so there's no reason you couldn't have a wandering sequence of worlds. Edit: Or, for that matter, couldn't have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.
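As a concrete aside on the complexity point (a sketch of my own, not from the comment above): a classical computer can simulate any quantum circuit exactly; the issue is cost, not possibility. A brute-force statevector simulation of n qubits has to track 2^n amplitudes, which is exactly why quantum speedups are about efficiency rather than about computing something classically uncomputable.

```python
import numpy as np

# Toy statevector simulator: n qubits need a vector of 2**n amplitudes.
n = 3
state = np.zeros(2**n)
state[0] = 1.0  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 1-qubit gate to qubit `target` by reshaping the statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, target, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

# Apply H to every qubit: the uniform superposition over all 2**n states.
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

probs = state**2  # each basis state now has probability 1/2**n
```

The exponential size of `state` is the catch: simulating even 50 qubits this way needs petabytes of memory, but nothing about quantum mechanics makes the simulation impossible in principle, which is the point of the BQP-versus-PSPACE remark.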
Comment author:ocr-fork
07 June 2010 04:16:09PM
*
4 points
[-]
First, the notion that a quantum computer would have infinite processing capability is incorrect... Second, if our understanding of quantum mechanics is correct
It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Comment author:JoshuaZ
07 June 2010 04:23:35PM
4 points
[-]
Ok, but in that case, that world in question almost certainly can't be our world. We'd have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn't our universe.
Comment author:JoshuaZ
07 June 2010 04:59:23PM
*
3 points
[-]
What I mean is that this isn't a type of fiction that could plausibly occur in our universe. In contrast, for example, there's nothing in the central premises of, say, Blindsight that, given physics as we know it, would prevent the story from taking place. The central premise here is one that doesn't work in our universe.
Comment author:jimrandomh
07 June 2010 10:04:42PM
*
3 points
[-]
The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism and quantum branching don't prevent using the trick described in the story; they just make it more difficult. You don't have to identify one unique universe that you're in, just a set of universes that includes it. Given an infinitely fast computer with infinite storage, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high information content - enough that it probably doesn't appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.
The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
Comment author:Houshalter
07 June 2010 07:29:24PM
2 points
[-]
Couldn't they just run the simulation to its end rather than let it sit there and take the chance that it could accidentally be destroyed? If it's infinitely powerful, it would be able to do that.
Comment author:Blueberry
07 June 2010 07:45:53PM
1 point
[-]
They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn't notice any change if they turned it off.
Comment author:JoshuaZ
07 June 2010 07:52:32PM
0 points
[-]
That doesn't work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration. So each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.
Comment author:Blueberry
07 June 2010 08:42:01PM
1 point
[-]
I'm not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.
Comment author:Blueberry
08 June 2010 03:30:16AM
0 points
[-]
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.
Comment author:Blueberry
08 June 2010 03:35:09PM
1 point
[-]
I don't understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.
Comment author:red75
08 June 2010 09:33:09AM
*
0 points
[-]
I can't see any point in turning it off. Run it to the end and you will live; turn it off and "current you" will cease to exist. What can justify turning it off?
EDIT: I got it. The only choice that will be effective is the top-level one. It seems that it will be a constant source of divergence.
Comment author:Houshalter
07 June 2010 07:56:45PM
0 points
[-]
But they would cease to exist. If they ran it to its end, then it's over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there's no reason. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there's no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.
Comment author:Houshalter
08 June 2010 01:24:03AM
*
0 points
[-]
Why would they make a shield out of black cubes of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.
Comment author:ocr-fork
08 June 2010 06:25:05PM
*
2 points
[-]
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.
So that's why restarting the simulation shouldn't work.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-world. You create a cube in A-World and a cube appears in you world. Now you know you are an A-world. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World.... The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
Comment author:cousin_it
08 June 2010 07:50:03PM
*
1 point
[-]
Yeah, but would a binary tree of simulated worlds "converge" as we go deeper and deeper? In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it'll do?
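A toy dynamical-systems sketch of this question (my own illustration; the maps and constants are arbitrary choices, not anything from the story): if each layer applies the same update to the "state of the world", then a stack of layers is just an iterated map, and whether it converges, hits a period-N attractor, or wanders depends entirely on the map.

```python
def iterate(f, x0, steps):
    """Iterate the layer-to-layer update f, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(f(xs[-1]))
    return xs

# Example 1: a contraction mapping -- the stack converges to a fixed point.
# x = 0.5*x + 1 has the unique fixed point x = 2.
converging = iterate(lambda x: 0.5 * x + 1.0, 0.0, 60)
fixed_point = converging[-1]

# Example 2: a map with a stable period-2 cycle -- layers alternate forever
# between two states instead of converging. (Logistic map, r = 3.2.)
period2 = iterate(lambda x: 3.2 * x * (1.0 - x), 0.3, 500)
cycle = sorted({round(v, 6) for v in period2[-10:]})  # two distinct values
```

Nothing forces the "real" layer-to-layer dynamics to be a contraction, which is why a fixed point, a period-N attractor, or something funkier are all live options.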
Comment author:ocr-fork
08 June 2010 08:32:24PM
1 point
[-]
In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky.
I'm convinced it would never converge, and even if it did I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
Comment author:[deleted]
07 June 2010 12:45:52PM
*
6 points
[-]
Many are calling BP evil and negligent, but has there actually been any evidence of criminal activity on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we're willing.
Comment author:Piglet
07 June 2010 04:01:43PM
7 points
[-]
It depends on what you mean by "criminal"; under environmental law, there are both negligence-based (negligent discharge of pollutants to navigable waters) and strict liability (no intent requirement, such as killing of migratory birds) crimes that could apply to this spill. I don't think anyone thinks BP intended to have this kind of spill, so the interesting question from an environmental criminal law perspective is whether BP did enough to be treated as acting "knowingly" -- the relevant intent standard for environmental felonies. This is an extremely slippery concept in the law, especially given the complexity of the systems at issue here. Litigation will go on for many years on this exact point.
Comment author:Houshalter
07 June 2010 06:11:37PM
1 point
[-]
I'm not sure it's relevant whether they did anything illegal or not. People always seem to want to blame and punish someone for their problems. In my opinion, they should be forced to pay for and compensate all the damage, as well as pay a very large fine as punishment. This way, in the future they, and other companies, can regulate themselves and prepare for emergencies as efficiently as possible without arbitrary and clunky government regulations and agencies trying to slap everything together at the last moment. Of course, if a single person actually did something irresponsible (e.g., Bob the worker just used duct tape to fix that pipe, knowing that it wouldn't hold), then they should be able to be tried in court or sued/fined by the company. But even then, it's up to the company to make sure that stuff like this doesn't happen by making sure all of their workers are competent and certified.
Comment author:billswift
08 June 2010 02:55:59AM
1 point
[-]
You are not really going to learn much unless you are interested in wading through lots of technical articles. If you want to learn, you need to wait until it has been digested by relevant experts into books. I am not sure what you think you can learn from this, but there are two good books of related information available now:
Jeff Wheelwright, Degrees of Disaster, about the environmental effects of the Exxon Valdez spill and the clean up.
Trevor Kletz, What Went Wrong?: Case Histories of Process Plant Disasters, which is really excellent. [For general reading, an older edition is perfectly adequate, new copies are expensive.] It has an incredible amount of detail, and horrifying accounts of how apparently insignificant mistakes can (often literally) blow up on you.
Comment author:MartinB
07 June 2010 01:12:41PM
*
2 points
[-]
Because it was used somewhere, I calculated my own weight's worth in gold - it is about 3.5 million EUR. In silver you can get me for 50,000 EUR.
The Mythbusters recently built a lead balloon and had it fly. Some proverbs don't hold up to reality and/or engineering.
Comment author:Clippy
07 June 2010 03:59:56PM
8 points
[-]
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
Comment author:JoshuaZ
07 June 2010 04:20:11PM
0 points
[-]
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We've even evolved to convince ourselves that we actually care about morality and not self-interest. That's likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Comment author:jimrandomh
07 June 2010 04:32:53PM
1 point
[-]
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don't offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that's at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don't give consent to be eaten.
Comment author:Clippy
07 June 2010 05:40:20PM
5 points
[-]
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
Comment author:jimrandomh
07 June 2010 06:07:45PM
1 point
[-]
It is not the adults' preference that matters, but the adults' best model of the childrens' preferences. In this case there is an obvious reason for those preferences to differ - namely, the adult knows that he won't be one of those eaten.
In extrapolating a child's preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can't extrapolate from a child whose fate is undecided to an adult that believes it won't be eaten; that change alters its preferences.
Comment author:Clippy
07 June 2010 06:14:31PM
1 point
[-]
It is not the adults' preference that matters, but the adults' best model of the childrens' preferences.
Do you believe that all children's preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
Comment author:jimrandomh
07 June 2010 06:25:32PM
-1 points
[-]
I would use a process like coherent extrapolated volition to decide which preferences to count - that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
Comment author:Blueberry
07 June 2010 04:49:50PM
3 points
[-]
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
Comment author:Clippy
07 June 2010 06:03:43PM
*
1 point
[-]
Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
Comment author:Blueberry
07 June 2010 08:05:09PM
2 points
[-]
I understand what you mean now.
Ok, so first of all, there's a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn't do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don't want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don't want the kids to be eaten, and we don't want the adults to eat. We don't want to balance any of these interests, because they go against our values. Just like you wouldn't balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is "Well, of course you don't want the punishments. That's the point. So cooperate, or you'll get punished. It's not fair to exempt yourself from the rules." And my reaction to position (2) is "We don't want any baby-eating, so we'll save you from being eaten, but we won't let you eat any other babies. It's not fair to exempt yourself from the rules." This seems consistent to me.
Comment author:Clippy
07 June 2010 09:03:47PM
*
3 points
[-]
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn't the baby-eaters' universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely, in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to "free ride" off the sacrifices that the system requires of everyone?
Comment author:JStewart
08 June 2010 03:41:19AM
3 points
[-]
Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
Comment author:Clippy
08 June 2010 07:05:31PM
1 point
[-]
No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Comment author:JenniferRM
08 June 2010 12:07:24AM
4 points
[-]
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with "the abstract idea of punishment" into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of "eating children" are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless "intelligent" and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been "illegitimately modified" or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary "credentials" for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren't currently willing to provide such information, are there preconditions you could propose before you would do so?
Comment author:MartinB
07 June 2010 07:20:04PM
7 points
[-]
Question: what's your experience with stuff that seems New Agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about sensory deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany, in my case). I will try and hopefully enjoy that soon. Sadly those places are run by New Age folks who offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory empty space.
Comment author:MartinB
07 June 2010 08:11:16PM
*
1 point
[-]
To have the experience.
I don't mean it as a treatment, but as something that would be exciting, new and worth trying just for the sake of it.
edit/add: the deleted comment above asked why I would bother to do something like floating
Comment author:gwern
07 June 2010 09:49:58PM
1 point
[-]
Hard to say - even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There's no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there's some for float tanks. In fact, I'm hard pressed to think of any New Agey stuff where there isn't enough background to judge it on its own merits.
Question: what's your experience with stuff that seems New Agey at first look, like yoga, meditation and so on? Anything worth trying?
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don't believe any of that, do you? Neither do I, except for the first eight words and the last six. I've been doing these exercises since the beginning of 2009, since being turned on to them by Steven Barnes' blog and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it's these exercises specifically or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven't taken the trouble to determine by varying them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I'd be interested to hear from anyone here who meditates and gets more from it than I do.
I've done yoga every week for the last month or two. It's pleasant. Other than paying attention to how I'm holding my body vs. the instruction, I mostly stop thinking for an hour (as we're encouraged to do), which is nice.
I can't say I notice any significant lasting effects yet. I'm slightly more flexible.
Comment author:sketerpot
08 June 2010 05:13:20AM
0 points
[-]
Meditation can be pretty darn relaxing. Especially if you happen to live within walking distance of any pleasant yet sparsely-populated mountaintops. I would recommend giving it a shot; don't worry about advanced techniques or anything, and just close your eyes and focus on your breathing, and the wind (if any). Very pleasant.
Comment author:khafra
08 June 2010 07:29:51PM
3 points
[-]
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word "chi" does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent's balance without physical contact.
Each of these skills fit more naturally under different categories, but if you want to learn them all the most efficient way is to study a Chinese internal martial art or something similar.
Pre-alpha, one hour of work. I plan to improve it.
EDIT:Here is the source code. 80 lines of python. It makes raw text output, links and formatting are lost. It would be quite trivial to do nice and spiffy html output.
EDIT2:I can do html output now. It is nice and spiffy, but it has some CSS bug. After the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the html version already checked out the txt version. We will soon find out which explanation is the correct one.
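Since the 80-line script itself isn't reproduced in this thread, here is a minimal hypothetical sketch of what such a quote-to-HTML compiler might look like. The input format, function names, and markup below are my guesses for illustration, not the actual script:

```python
import html

# Hypothetical input: (score, author, quote text) tuples, standing in for
# whatever the real script scrapes out of the Rationality Quotes threads.
quotes = [
    (42, "ExampleUser", "Beware of bugs in the above code; I have only proved it correct."),
    (17, "AnotherUser", "The map is not the territory."),
]

def render_html(quotes):
    """Emit a simple standalone page, highest-rated quotes first."""
    rows = []
    for score, author, text in sorted(quotes, reverse=True):
        rows.append(
            '<div class="quote"><blockquote>%s</blockquote>'
            '<p class="meta">%d points &mdash; %s</p></div>'
            % (html.escape(text), score, html.escape(author))
        )
    return "<html><body>%s</body></html>" % "\n".join(rows)

page = render_html(quotes)
```

Escaping the quote text with `html.escape` matters here: quotes scraped from comments routinely contain `<`, `>`, and `&`, which would otherwise break the generated markup.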
Comment author:JoshuaZ
08 June 2010 12:10:26AM
2 points
[-]
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don't know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren't actually quotes. 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)
Comment author:DanielVarga
08 June 2010 02:11:00PM
*
2 points
[-]
I agreed with you; I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A Rationality Quotes, Best of 2010 Edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the year and so on. It'd also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers' home country is/isn't primarily taken with milk? I'm always tempted to imagine most of the scientists having some ulterior motive or prior belief they're looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: "we did this experiment hoping to show X, but instead, we found not X". Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
Bad luck could be, not just getting that 5% result which 95% accuracy implies, but some non-obvious difference in the volunteers (different genetics?), in the tea, or in the milk.
Comment author:JoshuaZ
08 June 2010 01:00:11PM
*
0 points
[-]
It does seem odd to get such divergent results.
It isn't that odd. There are a lot of things that could easily change the results. Exact temperature of tea (if one protocol involved hotter or colder water), temperature of milk, type of milk, type of tea (one of the protocols uses black tea, and another uses green tea). Note also that the studies are using different metrics as well.
Comment author:cousin_it
08 June 2010 06:24:24AM
*
9 points
[-]
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
1. Is it okay to cheat on your spouse as long as (s)he never knows?
2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
3. If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
5. If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying or not being lied to might well be a terminal value, why not? The you that lies or doesn't lie is part of the world. A person may dislike being lied to, and value a world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about that, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by on the net making the outcome even worse for other reasons, it shouldn't be done (and some of your examples may qualify for that).
Comment author:prase
08 June 2010 10:04:35AM
*
4 points
[-]
A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying.
In my opinion, this is a lawyer's attempt to make deontologism masquerade as consequentialism. You can, of course, reformulate the deontologist rule "never lie" as a consequentialist "I assign an extremely high disutility to situations where I lie". In the same way you can put consequentialist preferences as a deontologist rule "in any case, do whatever maximises your utility". But doing that, the point of the distinction between the two ethical systems is lost.
My comment argues about the relationship of concepts "make the world a better place" and "makes people happier". cousin_it's statement:
For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then "make the world a better place" should be the same as "makes people happier". However, it's against the spirit of the consequentialist outlook, in that it privileges "happy people" and disregards other aspects of value. Taking "happy people" as a value through a deontological lens would be more appropriate, but it's not what was being said.
Comment author:cousin_it
08 June 2010 02:14:32PM
*
2 points
[-]
Let's carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn't happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a "consequentialist" to take, and the word "deontologism" would fit it way better.
IMO, a "proper" consequentialist should care about consequences they can (in principle, someday) see, and shouldn't care about something they can never ever receive information about. If we don't make this distinction or something similar to it, there's no theoretical difference between deontologism and consequentialism - each one can be implemented perfectly on top of the other - and this whole discussion is pointless, and likewise is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one's ontological model is distinct from a given agent being able to trace these consequences. What if the fact about the lie being present or not were encrypted using a one-way injective function, with the original forgotten, but the cypher retained? In principle, you can figure which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but how a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since ability to make logical conclusions from the data doesn't seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don't need to distinguish them at all, although it doesn't make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish as well (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
Comment author:cousin_it
08 June 2010 02:44:57PM
*
0 points
[-]
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it's not okay for me to lie even if I can't get caught, because then I'd be the "third-party beneficiary", but somehow it's okay to lie and then erase my memory of lying. Is that right?
You seem to be saying that it's not okay for me to lie even if I can't get caught, because then I'd be the "third-party beneficiary"
Right. "Third-party beneficiary" can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
but it's somehow okay to lie and then erase my memory of lying. Is that right?
It's not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party "beneficiary" in that case, that distinguished the states of the world containing lying and not-lying.
But it probably doesn't make sense for you to have that concept in your ontology if the states of the world that contained you-lying can't be in principle (in the strong sense described in the previous comment) distinguished from the ones that don't. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
Comment author:AlephNeil
09 June 2010 01:54:30AM
1 point
[-]
The condition for the difference to be observable in principle is much weaker than you seem to imply.
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don't seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can't we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it's idea that a "proper" consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it's still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being 'sufficient' for a proper consequentialist to care about it. But if we don't, and all that matters is the indefinite future, then don't we face the problem that "in the long term we're all dead"? OK, perhaps some of us think that rule will eventually cease to apply, but for argument's sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we'd want our ethical theory to be more robust than to say "Do whatever you like - nothing matters any more."
Comment author:cousin_it
08 June 2010 12:10:56PM
*
-2 points
[-]
I can't believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
I can't believe you took the exact cop-out I warned you against.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
restrict your attention to consequentialists whose terminal values have to be observable.
What does this mean? Consequentialist values are about the world, not about observations (but your words don't seem to fit disagreement with this position, thus the 'what does this mean?'). The consequentialist notion of values allows a third party to act for your benefit, in which case you don't need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don't need to know about these options in order to benefit.
Comment author:RobinZ
08 June 2010 02:41:38PM
2 points
[-]
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations - like other spherical cows, this causes a lot of problematic answers, like two-boxing.
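To make the arithmetic of the naive solution concrete, here is a toy expected-utility comparison for question #2 (stay silent vs. confess) under assumptions (b), (d), and (e). All magnitudes below are hypothetical placeholders; only their relative sizes ("small" vs. "moderate") are taken from the assumptions above.

```python
# Toy expected-utility comparison for question #2 (stay silent vs. confess),
# using made-up magnitudes for assumptions (b), (d), and (e) above.

LIE_COST = 1.0      # (b) small eudaemonic cost to the liar of maintaining the secret
REVEAL_COST = 5.0   # (d) moderate eudaemonic cost of an undermining revelation, per party
TRANSMIT = 0.5      # (e) fraction of one partner's eudaemonic effect transmitted to the other

def stay_silent():
    # The liar bears the lying cost; a fraction of it leaks to the partner.
    return -(LIE_COST + TRANSMIT * LIE_COST)

def confess():
    # Both parties bear the revelation cost directly, plus transmitted shares.
    return -2 * (REVEAL_COST + TRANSMIT * REVEAL_COST)

print(stay_silent())  # -1.5
print(confess())      # -15.0
```

On these numbers lying dominates, though the answer can flip if the revelation cost is transient while the lying cost compounds over the lifetime of the relationship.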
Comment author:cousin_it
08 June 2010 02:55:56PM
*
2 points
[-]
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's heart, not for some material benefit. So if she knew the husband didn't love her, she'd tell the truth. The fact that you automatically parsed the situation differently is... disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don't understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can't wait till other people reply to the questionnaire.
Comment author:RobinZ
08 June 2010 03:06:40PM
*
0 points
[-]
The husband does benefit, by her lights. The chief reason it comes out in the husband's favor in #6 is because the husband doesn't value the marital relationship and (I assumed) wouldn't value the child relationship.
You're right - in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it's a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite - particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don't accept most of this analysis, despite being a consequentialist - as I said, it is the "naive consequentialist solution", and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that "happier couples" does not imply "happier coupling" - the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Comment author:Nisan
08 June 2010 04:50:52PM
1 point
[-]
It's okay to deceive people if they're not actually harmed and you're sure they'll never find out. In practice, it's often too risky.
1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child's welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let's assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
Comment author:cousin_it
08 June 2010 05:03:04PM
*
2 points
[-]
1-3: It seems you're using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It's more similar to the Prisoner's Dilemma, if you ask me.
Comment author:Larks
08 June 2010 07:23:15PM
0 points
[-]
6: In the trolley problem, a deontologist wouldn't decide to push the man, so the pseudo-fat man's life is saved, whereas he would have been killed if it had been a consequentialist behind him; the reason for his death is consequentialism.
Comment author:cousin_it
08 June 2010 07:39:37PM
*
1 point
[-]
Maybe you missed the point of my comment. (Maybe I'm missing my own point; can't tell right now, too sleepy.) Anyway, here's what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they're lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Comment author:Larks
08 June 2010 07:46:25PM
1 point
[-]
Fair point, I didn't see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
Comment author:JoshuaZ
08 June 2010 07:49:26PM
0 points
[-]
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn't clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.
Comment author:cousin_it
08 June 2010 08:01:55PM
*
0 points
[-]
You're right, it's pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don't even need lies for that: three stranded people are dying of hunger, removing the taboo on cannibalism can help two of them survive.
The purpose of my questionnaire wasn't to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.
Comment author:Nisan
08 June 2010 08:11:27PM
2 points
[-]
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
Comment author:thomblake
08 June 2010 09:08:46PM
0 points
[-]
That's an argument that only appeals to the consequentalist.
I'm not sure that's true. Forms of deontology will usually have some sort of theory of value that allows for a 'better world', though it's usually tied up with weird metaphysical views that don't jibe well with consequentialism.
Comment author:Nisan
08 June 2010 08:33:29PM
2 points
[-]
1-3: It's an alief, not a belief, because I know that lying to my spouse doesn't really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Comment author:cousin_it
08 June 2010 08:52:50PM
*
0 points
[-]
Thanks for the link. I think Alicorn would call it an "unofficial" or "non-endorsed" belief.
Let's put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
Comment author:thomblake
08 June 2010 09:15:08PM
3 points
[-]
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
It seems like it would be more aptly defined as "the belief that making the world a better place constitutes doing the right thing". Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don't care whether it does.
Comment author:Alexandros
08 June 2010 10:20:14AM
*
1 point
[-]
Thanks for that, Price is a very knowledgeable New Testament scholar. Check out his interview at the commonsenseatheism podcast here, also covers his path to becoming a christian atheist.
Comment author:red75
08 June 2010 10:34:42AM
0 points
[-]
Am I alone in my desire to upload as fast as possible and head off to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let God decide who's right...
Comment author:Alexandros
08 June 2010 01:35:33PM
*
5 points
[-]
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all baby eaters, including the living babies and the ones being digested, it would end up in a place that adult baby eaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
Comment author:red75
08 June 2010 02:28:45PM
*
-1 points
[-]
CEV will be to maintain the existing order.
Why? There must be very strong arguments for BEs to stop doing the Right Thing. And there's only one source of objections - children. And their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: Ok. Ok. CEV will be to make BEs' morality change and allow them to not eat children. So, FAI will undergo controlled shutdown. Objections, please?
EDIT: Here are yet more arguments.
Guidelines of FAI as of May 2004.
Defend humans, the future of humankind, and humane nature.
BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the future of BEkind, and BE's nature."
Encapsulate moral growth.
BEs never considered that child eating is bad. And it is good for them to kill anyone who thinks otherwise.
There's no trend in morals that can be encapsulated.
Humankind should not spend the rest of eternity desperately wishing that the programmers had done something differently.
If they stop being BEs they will mourn their wrongdoings to the death.
Avoid creating a motive for modern-day humans to fight over the initial dynamic
Every single notion that FAI will make along the lines of "Let's suppose that you are non-BE" will cause it to be destroyed.
Help people.
Help BEs every time, except for the ceremony of BEing.
How will this take FAI to the point that every conscious being must live?
Comment author:Morendil
08 June 2010 03:35:11PM
2 points
[-]
What would happen if CEV was applied to the Baby Eaters?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: "What rules would you want to prevail if you didn't know in advance who you would turn out to be?"
Where CEV as I understand it adds more information - assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be - the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, which contingent histories brought them there, and so on. This includes things like what age you are, and even - conceivably - how many of you there are.
To this bunch of undifferentiated people you'd put the question, "All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands."
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kin has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it's hard to check intuitions.
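One way to check that intuition numerically: if not-dying takes lexical precedence, no arithmetic is needed at all; but even a non-lexical toy calculation points the same way unless the glory of the tradition is valued at roughly a hundred times the horror of being eaten. The utilities below are invented purely for illustration.

```python
# A rough expected-utility reading of the Veil-of-Ignorance vote.
# All utilities are made up; only their ratio matters.

P_EATEN = 0.99
U_EATEN = -1000.0   # dying horribly shortly after birth
U_GLORY = 10.0      # partaking in the crowning cultural tradition

ev_tradition = P_EATEN * U_EATEN + (1 - P_EATEN) * U_GLORY
ev_abolish = 0.0    # baseline: no winnowing, no tradition

print(ev_tradition)  # about -989.9: far worse than abolition
```

For the vote to come out in favor of the tradition, U_GLORY would need to exceed 99 * |U_EATEN|, which is the quantitative version of "lexical precedence" failing.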
Comment author:Morendil
08 June 2010 05:15:17PM
2 points
[-]
You're correct. I'm using the term "people" loosely. However, I wrote the grandparent while fully informed of what the Babyeaters are. Did you mean to rebut something in particular in the above?
Comment author:red75
08 June 2010 05:29:28PM
*
5 points
[-]
"All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands."
If we translate it to our cultural context, we will get something like "All in favor of 100% dying horribly of old age, in return for the good lives of your babies, please raise your hands". They ARE aliens.
Comment author:Alexandros
08 June 2010 05:43:03PM
*
1 point
[-]
It doesn't seem from the story like the babies are gladly sacrificing for the tribe...
"But..." said the Master. "But, my Lady, if they want to be eaten -"
"They don't," said the Xenopsychologist. "Of course they don't. They run from their parents when the terrible winnowing comes. The Babyeater children aren't emotionally mature - I mean they don't have their adult emotional state yet. Evolution would take care of anyone who wanted to get eaten. And they're still learning, still making mistakes, so they don't yet have the instinct to exterminate violators of the group code. It's a simpler time for them. They play, they explore, they try out new ideas. They're..." and the Xenopsychologist stopped. "Damn," she said, and turned her head away from the table, covering her face with her hands. "Excuse me." Her voice was unsteady. "They're a lot like human children, really."
Comment author:red75
08 June 2010 05:59:15PM
*
-3 points
[-]
Yes. It's horrible. For us. But why should FAI place any weight on removing that? How can FAI generalize past "Life of a Baby Eater is sacred" to "Life of every conscious being is sacred"? FAI has all the evidence that the latter is plain wrong.
Do you want to convince me, or FAI, that it's bad? I know that it is; I'm just trying to demonstrate that FAI as it is, is about preservation and not development toward (universally) better ends.
Comment author:Morendil
08 June 2010 06:01:42PM
2 points
[-]
Well, we would say "no" to that, if we had the means to abolish old age. We'd want to have our cake and eat it too.
The text stipulates that it is within the BE's technological means to abolish the suffering of the babies, so I expect that they would choose to do so, behind the Veil.
Comment author:red75
08 June 2010 06:34:50PM
*
-1 points
[-]
Who will ask them? FAI has no idea that a) baby eating is bad, or b) it should generalize moral values past BEs to all conscious beings.
Even if FAI asks that question and it turns out that the majority of the population doesn't want to do an inherently good thing (which it is, for them), then FAI must undergo controlled shutdown.
EDIT: To disambiguate. I am talking about FAI which is implemented by BEs.
As we should not allow FAI to generalize morals past conscious beings, just to be sure that it will not take the CEV of all bacteria, so BEs should not allow their FAI to generalize past BEs.
As we should build an automatic off switch into our FAI, to stop it if its goals are inherently wrong, so should BEs.
Comment author:Jack
09 June 2010 07:38:54AM
1 point
[-]
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
Comment author:Morendil
09 June 2010 09:26:37AM
3 points
[-]
I've been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I'm studying at the moment. It would probably make sense to expand it to other books including non-math books - though the set of active books should remain small.
Two things have been holding me back - for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don't want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm overcomplicating?
Comment author:Jack
09 June 2010 09:51:58AM
1 point
[-]
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1/6? Or conversely, you make it P(I rolled a one on the fair die that is now beneath this cup) = 1/6.
Comment author:Maelin
09 June 2010 10:01:53AM
*
5 points
[-]
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don't have perfect knowledge of the universe, so when I ask, "Is your mum on the phone?" you don't have the guaranteed correct answer ready to go. You don't know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying "I have absolutely no idea in the slightest", you are able to say something more useful: "It's possible, but unlikely." Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes' Law.
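A small sketch of this view: the 1/6 is a statement of knowledge, and Bayes' Law tells you exactly how a new observation should move it. The busy-signal likelihoods below are invented purely for illustration.

```python
# Pr(Mom is on the phone) = 1/6 quantifies my imperfect knowledge.
# A new observation (I call and the line is busy) updates it via Bayes' Law.
# All numbers here are made up for illustration.

prior = 1 / 6          # Pr(on phone), before any evidence
p_busy_if_on = 0.95    # Pr(line busy | on phone)
p_busy_if_off = 0.10   # Pr(line busy | not on phone), e.g. phone off the hook

# Bayes' Law: P(H|E) = P(E|H) P(H) / P(E)
posterior = (p_busy_if_on * prior) / (
    p_busy_if_on * prior + p_busy_if_off * (1 - prior))

print(round(posterior, 3))  # 0.655
```

The busy signal doesn't settle whether she's on the phone; it just moves "possible, but unlikely" (1/6) to "more likely than not" (about 2/3), which is all a probability was ever claiming to express.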
Let's get this thread going:
I'd like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can't find some doctor to argue for.
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn't exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
All generalisations are bounded, even when the bounds are not expressed. In the context of his talk, Ben Goldacre was talking about "doctors" being quoted as supporting various pieces of bad medical science.
Many medical doctors around here (Germany) offer homeopathy in addition to their medical practice. Now, it might be that they respond to market demand and sneak in some medical science in between, or that they actually take it seriously.
I recently found out why doctors cultivate a certain amount of professional arrogance when dealing with patients: most patients don't understand what's behind their specific disease, and usually do not care. So if doctors were open to argument, or stated their doubts more openly, the patient might lose trust and not do what he is ordered to do. Instilling an absolute belief in doctors' powers might be very helpful for a large part of the population. A lot of my own frustration in my experiences with doctors can be attributed to me being a non-standard patient who reads too much.
Or that they respond to market demand and don't try to sneak any medical science in, based on the principle that the customer is always right.
I still assume that doctors actually want to help people (despite reading the checklist book, and other stuff). So if I have the choice between world (a), where doctors also do homeopathy, and world (b), where other people do it while doctors stay true to science, then I would prefer (a), because at least the people go to a somewhat competent person.
Homeopathy is at best a placebo. It's rare that there's no better medical way to help someone. Your assumption is counter to the facts.
Certainly doctors want to help people - all else being equal. But if they practice homeopathy extensively, then they are prioritizing other things over helping people.
If the market conditions (i.e. the patients' opinions and desires) are such that they will not accept scientific medicine, and will only use homeopathy anyway, then I suggest the best way to help people is for all doctors to publicly denounce homeopathy and thus convince at least some people to use better-than-placebo treatments instead.
You might be implicitly assuming that people make a conscious choice to go the unscientific route. That is not the case. For a layperson there is no perceivable difference between a doctor and a homeopath. (Well, maybe there is, but let's exaggerate that here.)
In my experience the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast-food place. If I were a doctor, then offering homeopathy so that people at least come to me would make sense, both money-wise and for the effect that patients are already at a doctor's office: they get placebos for the trivial stuff, while actually dangerous conditions get checked out by a competent person. It's a case of corrupting your integrity to some degree to get the message heard.
I considered not going to doctors who offer homeopathy, but then decided against that due to this reasoning.
You could probably ask the doctor why they offer homeopathy, and base your decision on the sort of answer you get. "Because it's an effective cure..." is straight out.
tl;dr - if doctors don't denounce homeopaths, people will start going to "real" homeopaths and other alt-medicine people, and there is no practical limit to the lies and harm done by real homeopaths.
That is so because doctors also offer homeopathy. If almost all doctors clearly denounced homeopathy, fewer people would choose to go to homeopaths, and these people would benefit from better treatment.
This is a problem in its own right that should be solved by giving doctors incentives to listen to patients more. However, do you think that because doctors don't listen enough, homeopaths produce better treatment (i.e. better medical outcomes)?
Do you have evidence that this is the result produced?
What if the reverse happens? Because the doctors endorse homeopathy, patients start going to homeopaths instead of doctors. Homeopaths are better at selling themselves, because unlike doctors they can lie ("homeopathy is not a placebo and will cure your disease!"). They are also better at listening, can create a nicer (non-clinical) reception atmosphere, they can get more word-of-mouth networking benefits, etc.
Patients can't normally distinguish "trivial stuff" from dangerous conditions until it's too late - even doctors sometimes get this wrong. The next logical step is for people to let homeopaths treat all the trivial stuff, and go to ER when something really bad happens.
Personal story: my mother is a doctor (geriatrician). When I was a teenager I had seasonal allergies and she insisted on sending me for weekly acupuncture. During the hour-long sessions I had to listen to the ramblings of the acupuncturist. He told me (completely seriously) that, although he personally didn't have the skill, the people who taught him acupuncture in China could use it to cure my type 1 diabetes. He also once told me about someone who used various "alternative medicine" to eat only vine leaves for a year before dying.
When the acupuncture didn't help me, my mother said that was my own fault because "I deliberately disbelieved the power of acupuncture and so the placebo effect couldn't work on me".
I disagree - at least with the part about "it's rare that there's no better medical way to help people". It's depressingly common that there's no better medical way to help people. Things like back pain, tiredness, and muscle aches - the commonest things for which people see doctors - can sometimes be traced to nice curable medical reasons, but very often as far as anyone knows they're just there.
Robin Hanson has a theory - and I kind of agree with him - that homeopathy fills a useful niche. Placebos are pretty effective at curing these random (and sometimes imagined) aches and pains. But most places consider it illegal or unethical for doctors to directly prescribe a placebo. Right now a lot of doctors will just prescribe aspirin or paracetamol or something, but these are far from totally harmless and there are a lot of things you can't trick patients into thinking aspirin is a cure for. So what would be really nice, is if there was a way doctors could give someone a totally harmless and very inexpensive substance like water and make the patient think it was going to cure everything and the kitchen sink, without directly lying or exposing themselves to malpractice allegations.
Where this stands or falls is whether or not it turns patients off real medicine and gets them to start wanting homeopathy for medically known, treatable diseases. Hopefully it won't - there aren't a lot of people who want homeopathic cancer treatment - but that would be the big risk.
From what I've heard, in Germany and other places where homeopathy enjoys high status and professional recognition, doctors sometimes use it as a very convenient way to deal with hypochondriacs who pester them. Sounds to me like a win-win solution.
Emile:
These claims would be beyond the border of lunacy for any person, but still, I'm sure you'll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard's point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you'll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates -- prepare for it -- geocentrism:
http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I've seen criticisms of Bouw's ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
Earth's sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they alternately claim that relativity allows them to use whatever reference frame they choose, and at other points claim that the evidence only makes sense for geocentrism.
Oh. Well, that's stupid then.
I'm not sure it is completely stupid. Consider the argument in the following fashion:
1) We think your physics is wrong and geocentrism is correct. 2) Even if we're wrong about 1, your physics still supports regarding geocentrism as being just as valid as heliocentrism.
I don't think that their argument approaches this level of coherence.
Here is another one:
http://en.wikipedia.org/wiki/Courtney_Brown_%28researcher%29
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he's young enough to make me think that he may have been forced into retirement.
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn't tell you whether it's true or not.
If a guy thinks that he can hear Hillary Clinton speaking from the feelings in his teeth, telling him to murder his cellmate, do you believe what he says? Status gets mucked up in the calculation, but with strangers it teeters precariously close to zero.
I really like kids, but the fact that millions of them passionately believe in Santa Claus does not change my degree of subjective belief one iota.
But people only believe things that make sense to them. When it comes to controversial issues, then yes, you'll find that most people will be divided on them. However, we elect people to lead us in the faith that the majority opinion is right, so even that isn't entirely true. And out of the vast space of possible ideas, most people who live in the same society will agree or disagree the same way on the majority of them, especially if they have the same background knowledge.
Well obviously propositions with extremely high complexity (and therefore very low priors) are going to remain low even when people believe them. But if someone says they believe they have 10 dollars on them or that the US Constitution was signed in September... the belief is enough to make those claims more likely than not.
Usually fairly substantial - if someone presents me with two equally-unsupported claims X and Y and tells me that they believe X and not Y, I would give greater credence to X than to Y. Many times, however, that credence would not reach the level of ... well, credence, for various good reasons.
I think it largely depends on a) what the idea is and b) who believes it, and what their rationality skills are.
I recently learned the hard way that one can easily be an idiot in one area while being very competent in another: religious scientists, programmers, etc. Or, let's say, people who are highly competent in their area of occupation without looking into other things.
Depends on the person and the idea. I have some people whose recommendations I follow regardless, even if I estimate upfront that I will consider the idea wrong. There are different levels of wrongness, and it does not hurt to get good counterarguments. It also depends on the real-life practicability of the idea. If it is for everyday things, then common sense is a good starting prior. (Also, there is a time and place to use the audience lifeline on Who Wants to Be a Millionaire.) If a group of professionals agree on something related to their profession, that is also a good start. To systematize: if a group of people has a belief about something they have experience with, then that belief is worth looking at.
And then on further investigation it often turns out that there are systematic mistakes being made.
I was shocked to read in the book on checklists that not only do doctors often not like them, but so do financial companies that can see how using them ups their monetary gains. But finding flaws in a whole group does not imply that everything they say is wrong. It is good to see a doctor, even if he is not using statistics right. He can refer you to a specialist, and treat all the common stuff right away. If you get a complicated disease, you can often read up on it.
The obvious example for your question would be religion. It is widely believed, but probably wrong; yet I did not discard it right away, but spent years studying it till I decided there was nothing to it. There is nothing wrong with examining the ideas other people have.
Agreed.
As the OP states, idea space is humongous. The fact alone that people comprehend something sufficiently to say anything about it at all means that this something is a) noteworthy enough to be picked up by our evolutionarily derived faculties by even a bad rationalist b) expressible by same faculties c) not immediately, obviously wrong
To sum up, the fact that someone claims something is weak evidence that it's true, cf. Einstein's Arrogance. If this someone is Einstein, the evidence is not so weak.
Edit: just to clarify, I think this evidence is very weak, but evidence for the proposition, nonetheless. Dependent on the metric, by far most propositions must be "not even wrong", i.e. garbled, meaningless or absurd. The ratio of "true" to {"wrong" + "not even wrong"} seems to ineluctably be larger for propositions expressed by humans than for those not expressed, which is why someone uttering the proposition counts as evidence for it. People simply never claim that apples fall upwards, sideways, green, kjO30KJ&¤k etc.
I forgot the major influence of my own prior knowledge (which I guess holds true for everyone). That makes the cases where I had a fixed opinion and managed to change it all the more interesting. If you have never dealt with an idea before, you go where common sense or the experts lead you. But if you already have good knowledge, then public opinion should do nothing to your view. Public opinion, or even experts (especially when outside their field), often enough state opinions without comprehending the idea, so it doesn't really mean too much. Regarding Einstein: he made the statements before becoming super famous. I understand it as a case of signaling 'look over here!' And he is not particularly safe against errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.
I didn't intend to portray Einstein as bulletproof, but rather to highlight his reasoning, plus point to the idea of even locating the idea in idea space. Obviously, creationism is wrong, but less wrong than a random string. It at least manages to identify a problem and use cause and effect.
Thank you, this is what I was getting at.
I've become increasingly disillusioned with people's capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square-root of windspeed. If the wind is only blowing half speed you still get something like 70% output. You won't see people saying this directly, but the general attitude is that you only need back up for the occasional calm day when the wind doesn't blow at all.
In fact output goes as the cube of windspeed. The energy in the windstream is one half m v squared, where m, the mass passing your turbine per unit time, is proportional to the windspeed. If the wind is at half strength, you only get 1/8 output.
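A two-line sketch of that arithmetic (the ideal P ∝ v³ law only, ignoring real-turbine details like cut-in speeds and rated-power capping):

```python
# Wind power scales as the cube of windspeed: P = 0.5 * rho * A * v**3,
# so output relative to rated output is (v / v_rated) ** 3.
def power_fraction(windspeed_fraction):
    """Fraction of rated output at a given fraction of rated windspeed."""
    return windspeed_fraction ** 3

print(power_fraction(0.5))  # 0.125 -- half the wind gives only 1/8 the power
```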
Well, that is physics. Of course people suck at physics. The trouble is, the more I look at people's capacity for abstract thought, the more problems I see. When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits, or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
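In the odds form of Bayes' law, this is just multiplication by a likelihood ratio of one (the 0.25 below is an arbitrary illustrative prior):

```python
# Odds form of Bayes' law: posterior odds = prior odds * likelihood ratio.
# If "some people believe X" is equally likely whether X is true or false,
# the likelihood ratio is 1 and the posterior equals the prior.
def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

print(update_odds(0.25, 1.0))  # 0.25 -- the belief report changed nothing
```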
Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting, and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
I don't think belief has a consistent evidentiary strength, since it depends on the testifier's credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I'm carrying. I don't see any relation that could be described as baseline, so the only answer is: context.
Regrets and Motivation
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measured up. The worst part of having regrets is the impact it has on our motivation.
somewhat expanded version of comment on OB a couple of months ago
Added: I didn't make the connection at first, but this is also Eliezer's point in this quote from The Super Happy People story, "It's bad enough comparing yourself to Isaac Newton without comparing yourself to Kimball Kinnison."
Maybe you can set your success setpoint to a lower value. The optimum is hard to achieve. So looking for 100% everywhere might be bad.
One variable often invoked to explain happiness in Denmark (who regularly rank #1 for happiness) is modest expectations.
ETA: the above paper seems a bit tongue-in-cheek, but as I gather, the results are solid. Full disclosure: I'm from Denmark.
Awesome coincidence. I am going to travel to Denmark next week for 10 days. Will check it out myself!
Often it is our imagined bad futures that keep us too afraid to act. In my experience this is more common than the opposite.
What do you mean by "the opposite"? I can think of at least two ways to invert that sentence.
I meant billswift's original idea: that we imagine good futures and that motivates us to act.
I was talking to a friend yesterday and he mentioned a psychological study (I am trying to track down the source) that people tend to suffer MORE from failing to pursue certain opportunities than FAILING after pursuing them. So even if you're right about the overestimation of pleasure, it might just be irrelevant.
I haven't seen a study, but that is a common belief. A good quote to that effect,
And I vaguely remember seeing another similar quote from Churchill.
Here is a review of that psychological research (pdf), and there are more studies linked here (the keyword to look for is "regret"). The paper I linked is:
Gilovich, T., & Medvec, V. H. (1995). The experience of regret: What, when, and why. Psychological Review, 102, 379-395.
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors' arguments allowed for, and society managed to work around all the regulations and other problems the government and big businesses keep creating. Since the regulations and rules keep growing and creating more problems and rigidity along the way, eventually there will be a collapse, but anyone who gives any kind of timing for it is grabbing at the short end of the stick.
Anyone here have more suggestions as to reasons they have been wrong?
(originally posted on esr's blog 2010-05-09, revised and expanded since)
Could you give some examples of the predicted collapses that didn't happen?
Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting that the impact on them would set a sizable lower bound on the size of the overall impact. I've never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...
The smaller the business, the less likely they are to have their own software that's not simply a database or spreadsheet, managed in say, a Microsoft product. The smaller the business, the less likely that anything automated is relying on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely be fixed by the manufacturer. For example, most of the real-estate software produced by the company I worked for in the 80's and 90's was Y2K-ready since before 1985.]
First, the "economic collapses" I referred to in the original post were actually at least six different predictions at different times.
As another example, but not quite a "collapse" scenario, consider the predictions of the likelihood of nuclear war; there were three distinct periods where it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed, but peripheral, observers like Robert Heinlein considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare in the late 1970s to early 1980s, driven primarily by leftists (including the media) favoring disarmament, who promulgated the fear to try to get the US to reduce its stockpiles, and by conservatives (derided by the media as "survivalists" and nuts) who were afraid they would succeed.
Not sure if you're referring to the same literature, but I note a great divergence between peak oil advocates and singularitarians. This is a little weird, if you think of Aumann's Agreement theorem.
Both groups are highly populated with engineer types, highly interested in cognitive biases, group dynamics, habits of individuals and societies and neither are mainstream.
Both groups use extrapolation of curves from very real phenomena. In the case of the Kurzweilian singularitarians, it is computing power; in the case of the peak oil advocates, it is the Hubbert curve for resources, along with solid net-energy-based arguments about how civilization should decline.
The extreme among the peak oil advocates are collapsitarians and believe that people should drastically change their lifestyles if they want to survive. They are also not waiting for the others to join them, and many are preparing to go to small towns, villages, etc. The Oil Drum, linked here, had started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it's all doom all the time.
The extreme among the singularitarians are asked to make no such sacrifice, just to give enough money and support to make sure that Friendly AI is achieved first.
Both groups believe that business as usual cannot go on for too long, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will improve until a nonchalant super-intelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, that civilization is not built robustly, and that it will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.
I'd like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn't appear to be the optimum amount. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it's time for a new one. But for the sake of the example, let's assume that Eliezer was wrong, and that the current one or two threads per month is better than quarterly. Should Eliezer have recalibrated his confidence on this and never said it because its chance of being right was too low? Or would lowering his confidence in his ideas be counterproductive? Is it optimal for people to have confidence in the ideas they voice, even if it causes them to say some things which aren't right?
I suppose this is of importance to me because I think I might be better off if I lowered how judgemental I am of people who say things which are wrong, and also lowered how judgemental I am of my own ideas, because I might be putting too much weight on people voicing ideas which are wrong.
Being right on group effects is difficult.
Is there a consistent path for what LW wants to be? a) rationalist site filled up with meta topics and examples b) a) + detailed treatments of some important topics c) open to everything as long as reason is used
and so on. I personally like and profit from the discussing of akrasia methods. But it might be detrimental to the main target of the site. Also I would very much like to see a cannon develop for knowledge that LWers generally agree upon including, but not limited to the topics I currently care about myself.
Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful. And if you are filled up with weird ideas then you get kicked out, which might be bad for other goals you have.
It would be great to have a place for any idea to be examined for right and wrong.
LW is working on it, and you can help!
I'd like to see a picture of this LW cannon!
To whoever downvoted the parent: please refrain from downvoting people who draw attention to other's mistakes in a gentle and humorous way.
Rather than waste time doing both your cannon request and Roko's Fallacyzilla request, I just combined them into one picture of the Less Wrong Cannon attacking Fallacyzilla.
...now someone take Photoshop away from me, please.
What does Fallacyzilla have on its chest? It looks like it has "A -> B, ~B, therefore ~A" But that is valid logic. Am I misreading it or did you mean to put "A -> B, ~A, therefore ~B"? That would be actually wrong.
I noticed that two seconds after I put it up and it's now corrected...er...incorrected. (Today I learned - my brain has that same annoying auto-correct function as Microsoft Word)
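The valid/invalid distinction being joked about here can be checked mechanically: an argument form is valid iff no assignment of truth values makes all the premises true and the conclusion false. A small sketch (the helper names are my own):

```python
from itertools import product

# An argument form over two propositions A, B is valid iff every assignment
# that makes all premises true also makes the conclusion true.
def valid(premises, conclusion):
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

implies = lambda p, q: (not p) or q

# Modus tollens: A -> B, ~B, therefore ~A  (valid)
print(valid([lambda a, b: implies(a, b), lambda a, b: not b],
            lambda a, b: not a))   # True

# Denying the antecedent: A -> B, ~A, therefore ~B  (invalid)
print(valid([lambda a, b: implies(a, b), lambda a, b: not a],
            lambda a, b: not b))   # False
```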
There's a related XKCD. The mouse-over text is especially relevant.
I think the idea is that CEV lets us "grow up more together" and figure that out later.
I have only recently started looking into CEV so I'm not sure whether I a) think it's a workable theory and b)think it's a good solution, but I like the way it puts off important questions.
It's impossible to predict what we will want if age, disease, violence, and poverty become irrelevant (or at least optional).
I think the expectation is that, if all humans had the same knowledge and were better at thinking (and were more the people we'd like to be, etc.), then there would be a much higher degree of coherence than we might expect, but not necessarily that everyone would ultimately have the same utility function.
Fiction about simulation
That's well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus, for example, Shor's algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can't lead to anything like what's in the story. In particular, under the standard description of quantum computing, the class of problems reliably solvable on a quantum computer in polynomial time (that is, the time required to solve is bounded above by a polynomial function of the length of the input), BQP, is a subset of PSPACE, the set of problems which can be solved on a classical computer using memory bounded by a polynomial in the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there's a fundamentally random aspect to the laws of physics. Thus, we can't simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.
Even if everything in the story were correct, I'm not at all convinced that things would settle down on a stable sequence as they do here. If your universe is infinite then the possible number of worlds is infinite, so there's no reason you couldn't have a wandering sequence of worlds. Edit: Or, for that matter, couldn't have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.
It isn't. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Ok, but in that case, that world in question almost certainly can't be our world. We'd have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn't our universe.
Of course. It's fiction.
What I mean is that this isn't a type of fiction that could plausibly occur in our universe. In contrast, for example, there's nothing in the central premises of, say, Blindsight that, as far as we know, would prevent the story from taking place. The central premise here is one that doesn't work in our universe.
Well, it does suggest they've made recent discoveries that changed the way they understood the laws of physics, which could happen in our world.
The likely impossibility of getting infinite comutational power is a problem, but quantum nondeterminism or quantum branching don't prevent using the trick described in the story, they just make it more difficult. You don't have to identify one unique universe that you're in, just a set of universes that includes it. Given an infinitely fast, infinite storage computer, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high information content - enough that it probably doesn't appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.
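A minimal toy sketch of this procedure, purely illustrative: a one-dimensional list of cells stands in for the universe, a short integer sequence for the rare atom arrangement, and a trivial shift rule for the laws of physics. Nothing here models real physics.

```python
# Toy sketch of the "marker pattern" trick: a fake, finite "physics"
# over a list of cells, with a hook that swaps a registered high-
# information pattern for a "magical" payload wherever it appears.

MARKER = [7, 1, 8, 2, 8, 1, 8]   # stands in for the rare atom arrangement
PAYLOAD = [0, 0, 0, 9, 0, 0, 0]  # stands in for the magical item

def step_physics(cells):
    """One tick of the ordinary (toy) laws: shift everything right."""
    return cells[-1:] + cells[:-1]

def step_modified(cells):
    """Ordinary laws, except the marker is replaced wherever detected."""
    cells = step_physics(cells)
    n = len(MARKER)
    for i in range(len(cells) - n + 1):
        if cells[i:i + n] == MARKER:
            cells[i:i + n] = PAYLOAD
    return cells

# Build the marker in the "physical world", then run the modified laws:
world = [0, 0] + MARKER + [0, 0]
world = step_modified(world)   # the marker is gone; the payload appears
```

The point of the sketch is only that the detector keys on the pattern itself, so it fires wherever (and in however many worlds) that pattern occurs, which is exactly the caveat discussed next.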
The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
Couldn't they just run the simulation to its end, rather than let it sit there and take the chance that it could accidentally be destroyed? If it's infinitely powerful, it would be able to do that.
They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn't notice any change if they turned it off.
That doesn't work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration, so each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.
I'm not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.
That's a good point. Everyone but the top layer will be identical and the top layer will then only diverge by a few seconds.
Why do you think deterministic worlds can only spawn simulations of themselves?
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.
That doesn't say anything about the top layer.
I don't understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.
Until they turned it on, they thought it was the only layer.
It's surprising that they aren't also experimenting with alternate universes, but that would be a different (and probably much longer) story.
I can't see any point in turning it off. Run it to the end and you will live; turn it off and "current you" will cease to exist. What can justify turning it off?
EDIT: I got it. Only choice that will be effective is top-level. It seems that it will be a constant source of divergence.
If current you is identical with top-layer you, you won't cease to exist by turning it off, you'll just "become" top-layer you.
But they would cease to exist. If they ran it to its end, then it's over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there's no reason. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there's no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.
We can't be sure that there is a top layer. Maybe there are infinitely many simulations in both directions.
Then they miss their chance to control reality. They could make a shield out of black cubes.
Why would they make a shield out of black cubes, of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.
Then it would be someone else's reality, not theirs. They can't be inside two simulations at once.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won't mirror the new 559's actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.
So that's why restarting the simulation shouldn't work.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-world. You create a cube in A-World and a cube appears in you world. Now you know you are an A-world. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World.... The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
Yeah, but would a binary tree of simulated worlds "converge" as we go deeper and deeper? In fact it's not even obvious to me that a stack of worlds would "converge": it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it'll do?
I'm convinced it would never converge, and even if it did, I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
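The convergence question here is analogous to iterating a map. A toy sketch (not a model of the story's physics, just an illustration of the possible behaviors): one map settles on a fixed point, another hits a period-2 attractor, which is the "period N where N>1" case.

```python
# Iterating a map can settle on a fixed point, hit a periodic cycle,
# or wander. The "stack of worlds" question is which of these a
# world-to-world simulation map would do.

def iterate(f, x, n):
    """Apply f to x, n times."""
    for _ in range(n):
        x = f(x)
    return x

# Converges to a fixed point (x = 0):
contracting = lambda x: x / 2.0

# A period-2 attractor: 0 -> 1 -> 0 -> 1 ...
flipping = lambda x: 1 - x

fixed = iterate(contracting, 1.0, 60)                 # very close to 0.0
cycle = [iterate(flipping, 0, n) for n in (10, 11)]   # [0, 1]
```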
They could program in an indestructible control console, with appropriate safeguards, then run the program to its conclusion. Much safer.
That's probably weeks of work, though, and they've only had one day so far. Hmm, I do hope they have a good UPS.
Many are calling BP evil and negligent, but has there actually been any evidence of criminal activity on their part? My first guess is that we're dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we're willing.
It depends on what you mean by "criminal"; under environmental law, there are both negligence-based (negligent discharge of pollutants to navigable waters) and strict liability (no intent requirement, such as killing of migratory birds) crimes that could apply to this spill. I don't think anyone thinks BP intended to have this kind of spill, so the interesting question from an environmental criminal law perspective is whether BP did enough to be treated as acting "knowingly" -- the relevant intent standard for environmental felonies. This is an extremely slippery concept in the law, especially given the complexity of the systems at issue here. Litigation will go on for many years on this exact point.
I'm not sure it's relevant whether they did anything illegal or not. People always seem to want to blame and punish someone for their problems. In my opinion, they should be forced to pay compensation for all the damage, as well as a very large fine as punishment. This way, in the future, they and other companies can regulate themselves and prepare for emergencies as efficiently as possible, without arbitrary and clunky government regulations and agencies trying to slap everything together at the last moment. Of course, if a single person actually did something irresponsible (e.g., Bob the worker just used duct tape to fix that pipe, knowing that it wouldn't hold), then they should be able to be tried in court or sued/fined by the company. But even then, it's up to the company to make sure that stuff like this doesn't happen, by making sure all of their workers are competent and certified.
You are not really going to learn much unless you are interested in wading through lots of technical articles. If you want to learn, you need to wait until it has been digested by relevant experts into books. I am not sure what you think you can learn from this, but there are two good books of related information available now:
Jeff Wheelwright, Degrees of Disaster, about the environmental effects of the Exxon Valdez spill and the clean up.
Trevor Kletz, What Went Wrong?: Case Histories of Process Plant Disasters, which is really excellent. [For general reading, an older edition is perfectly adequate, new copies are expensive.] It has an incredible amount of detail, and horrifying accounts of how apparently insignificant mistakes can (often literally) blow up on you.
In a recent video, Taleb argues that people generally put too much focus on the specifics of a disaster, and too little on what makes systems fragile.
He said that high debt means (among other things) too much focus on the short run, and skimping on insurance and precautions.
Because it was used somewhere, I calculated my own weight's worth in gold: it is about 3.5 million EUR. In silver you can get me for 50,000 EUR. The Mythbusters recently built a lead balloon and had it fly. Some proverbs don't hold up to reality and/or engineering.
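The arithmetic behind those figures is easy to check. The per-gram prices below are assumptions back-derived to match the quoted totals (they are not quoted market data); plug in current spot prices and your own mass to redo the calculation.

```python
# Back-of-envelope "weight in gold/silver" check. All inputs are
# illustrative assumptions, not market quotes.

body_mass_g = 80_000           # assumed ~80 kg body weight
gold_eur_per_g = 43.75         # assumption chosen to match the ~3.5M figure
silver_eur_per_g = 0.625       # assumption chosen to match the ~50k figure

worth_in_gold = body_mass_g * gold_eur_per_g      # 3,500,000 EUR
worth_in_silver = body_mass_g * silver_eur_per_g  #    50,000 EUR
```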
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) "I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don't cooperate."
2) "I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating."
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We've even evolved to convince ourselves that we actually care about morality and not self-interest. That likely occurred because it is easier to make a claim one believes in than to lie outright, so humans who are convinced that they really care about morality will do a better job of acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Sounds like Robin Hanson's Homo Hypocritus theory.
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don't offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn't have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that's at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don't give consent to be eaten.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
It is not the adults' preference that matters, but the adults' best model of the children's preferences. In this case there is an obvious reason for those preferences to differ: namely, the adult knows that he won't be one of those eaten.
In extrapolating a child's preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can't extrapolate from a child whose fate is undecided to an adult that believes it won't be eaten; that change alters its preferences.
Do you believe that all children's preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count - that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
And why do you think that such reflection would make the babies reverse the baby-eating policies?
With 1), you're the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
I understand what you mean now.
Ok, so first of all, there's a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn't do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don't want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don't want the kids to be eaten, and we don't want the adults to eat. We don't want to balance any of these interests, because they go against our values. Just like you wouldn't balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is "Well, of course you don't want the punishments. That's the point. So cooperate, or you'll get punished. It's not fair to exempt yourself from the rules." And my reaction to position (2) is "We don't want any baby-eating, so we'll save you from being eaten, but we won't let you eat any other babies. It's not fair to exempt yourself from the rules." This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being like an adult did), you would need some other compelling reason to oppose that being's being eaten, correct? So shouldn't the baby-eaters' universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to "free ride" off the sacrifices that the system requires of everyone?
Isn't your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I'm criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Different topic spheres. One line sounds nicely abstract, while the other is just iffy.
Also killing people is different from betraying them. (Nice read: the real life section of tvtropes/moraleventhorizon)
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with "the abstract idea of punishment" into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of "eating children" are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless "intelligent" and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been "illegitimately modified" or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary "credentials" for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren't currently willing to provide such information, are there preconditions you could propose before you would do so?
I ... understood about a tenth of that.
Question: what's your experience with stuff that seems new-agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman's book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany in my case). Will try and hopefully enjoy that soon. Sadly those places are run by new-age folks who offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory-empty space.
Hard to say - even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There's no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there's some for float tanks. In fact, I'm hard pressed to think of any New Agey stuff where there isn't enough background to judge it on its own merits.
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don't believe any of that, do you? Neither do I, except for the first eight words and the last six. I've been doing these exercises since the beginning of 2009, since being turned on to them by Steven Barnes' blog, and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it's these exercises specifically, or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven't taken the trouble to determine by varying them.
More here and here. Nancy Lebovitz also mentioned them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I'd be interested to hear from anyone here who meditates and gets more from it than I do.
My spreadsheet about effects of the Tibetans
I've done yoga every week for the last month or two. It's pleasant. Other than paying attention to how I'm holding my body vs. the instruction, I mostly stop thinking for an hour (as we're encouraged to do), which is nice.
I can't say I notice any significant lasting effects yet. I'm slightly more flexible.
Meditation can be pretty darn relaxing. Especially if you happen to live within walking distance of any pleasant yet sparsely-populated mountaintops. I would recommend giving it a shot; don't worry about advanced techniques or anything, and just close your eyes and focus on your breathing, and the wind (if any). Very pleasant.
Every time I try to meditate I fall asleep.
There are loads of times I would like to be able to fall asleep, but can't. I envy your power.
I guess this is another reason for people to give meditation a try.
I find a meditation-like focus on my breathing and heartbeat to be a very effective way to fall asleep when my thoughts are keeping me awake.
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word "chi" does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent's balance without physical contact.
Each of these skills fits more naturally under a different category, but if you want to learn them all, the most efficient way is to study a Chinese internal martial art or something similar.
Less Wrong Rationality Quotes since April 2009, sorted by points.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code. 80 lines of python. It makes raw text output, links and formatting are lost. It would be quite trivial to do nice and spiffy html output.
EDIT2: I can do html output now. It is nice and spiffy, but it has some CSS bug. After the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the html version already checked out the txt version. We will soon find out which explanation is the correct one.
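For the curious, the core of such a script might look like the following hypothetical sketch. The (points, author, text) data shape and the sample quotes are assumptions for illustration; the author's actual 80-line script is linked above, not reproduced here.

```python
# Hypothetical sketch: given already-scraped quotes as
# (points, author, text) tuples, sort by points descending and emit
# plain text in the "points author / quote" form used in the output.

quotes = [
    (14, "SilasBarta", "Example quote A."),
    (42, "Rain", "Example quote B."),
    (7,  "MichaelGR", "Example quote C."),
]

def render(quotes):
    """Return the quotes as plain text, highest-scored first."""
    lines = []
    for points, author, text in sorted(quotes, reverse=True):
        lines.append("%d %s\n%s\n" % (points, author, text))
    return "\n".join(lines)

output = render(quotes)
```

Tuple sorting handles the ordering for free here, since the score is the first element of each tuple.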
Not having to side scroll would be spiffy.
If you're using Firefox, there's an add-on for that.
Arigato :)
Or, if you're lazy like me, you can select 'Page Source' under the View menu and then select the 'Wrap Long Lines' option.
Very cool idea.
It would be nice if links were preserved.
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don't know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren't actually quotes. 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)
On the wiki, this text will be dead, because nobody will be adding new items there by hand.
I agreed with you, I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A Rationality Quotes, Best of 2010 Edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the year and so on. It'd also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.
Supposedly (actual study) milk reduces catechin levels in the bloodstream.
Other research says: "does not!"
Really hot (but not scalded) milk tastes fantastic to me, so I've often added it to tea. I don't really care much about the health benefits of tea per se; I'm mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it's clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers' home country is/isn't primarily taken with milk? I'm always tempted to imagine most of the scientists having some ulterior motive or prior belief they're looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: "we did this experiment hoping to show X, but instead, we found not X". Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
It does seem odd to get such divergent results.
Bad luck could be not just getting that 5% result which 95% accuracy implies, but some non-obvious difference in the volunteers (different genetics?), in the tea, or in the milk.
It isn't that odd. There are a lot of things that could easily change the results. Exact temperature of tea (if one protocol involved hotter or colder water), temperature of milk, type of milk, type of tea (one of the protocols uses black tea, and another uses green tea). Note also that the studies are using different metrics as well.
Nitpick: the second study included both black and green tea.
However, your general point stands, and I'll add that there are different sorts of both black and green teas.
As an old quote from DanielLC says, consequentialism is "the belief that doing the right thing makes the world a better place". I now present some finger exercises on the topic:
Is it okay to cheat on your spouse as long as (s)he never knows?
If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
If your husband loves you, but doesn't know the child isn't his, is it right to stay silent?
The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you're thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the "right" conclusion into a consequentialist frame. For example, eliminating lies doesn't "make the world a better place" unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying, or not being lied to, might well be a terminal value; why not? The you that lies or doesn't lie is part of the world. A person may dislike being lied to, and value a world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so that nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by on the net making the outcome even worse for other reasons, it shouldn't be done (and some of your examples may qualify for that).
I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.
In my opinion, this is a lawyer's attempt to masquerade deontologism as consequentialism. You can, of course, reformulate the deontologist rule "never lie" as the consequentialist "I assign an extremely high disutility to situations where I lie". In the same way you can put consequentialist preferences as the deontologist rule "in any case, do whatever maximises your utility". But doing that, the point of the distinction between the two ethical systems is lost.
My comment argues about the relationship of concepts "make the world a better place" and "makes people happier". cousin_it's statement:
I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then "make the world a better place" should be the same as "makes people happier". However, it's against the spirit of the consequentialist outlook, in that it privileges "happy people" and disregards other aspects of value. Taking "happy people" as a value through a deontological lens would be more appropriate, but it's not what was being said.
Let's carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn't happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a "consequentialist" to take, and the word "deontologism" would fit it way better.
IMO, a "proper" consequentialist should care about consequences they can (in principle, someday) see, and shouldn't care about something they can never ever receive information about. If we don't make this distinction or something similar to it, there's no theoretical difference between deontologism and consequentialism - each one can be implemented perfectly on top of the other - and this whole discussion is pointless, and likewise is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one's ontological model is distinct from a given agent being able to trace these consequences. What if the fact about the lie being present or not was encrypted using a one-way injective function, with the original forgotten, but the cipher retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but how a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since ability to make logical conclusions from the data doesn't seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don't need to distinguish them at all, although it doesn't make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish as well (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
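The "encrypted fact" scenario above can be made concrete with a cryptographic hash standing in for the one-way function (a hash isn't strictly injective, but collisions are negligible for this illustration; the record string and dates here are invented):

```python
import hashlib

# The forgotten fact: did A lie to B? Only its one-way image survives.
record = "A lied to B on 2010-06-07"  # hypothetical original record
cipher = hashlib.sha256(record.encode()).hexdigest()
# Imagine the original string is now deleted; only `cipher` remains.

def matches(candidate: str, cipher: str) -> bool:
    """Check a guessed fact against the surviving one-way image."""
    return hashlib.sha256(candidate.encode()).hexdigest() == cipher

# In principle the two world-histories are still distinguishable:
# a future agent who enumerates candidate facts can test each one.
print(matches("A lied to B on 2010-06-07", cipher))  # True
print(matches("A did not lie to B", cipher))         # False
```

The point carried by the sketch: deleting the plaintext only makes the difference between the worlds hard to recover, not unrecoverable in principle.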
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it's not okay for me to lie even if I can't get caught, because then I'd be the "third-party beneficiary", but somehow it's okay to lie and then erase my memory of lying. Is that right?
Right. "Third-party beneficiary" can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
It's not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party "beneficiary" in that case, that distinguished the states of the world containing lying and not-lying.
But it probably doesn't make sense for you to have that concept in your ontology if the states of the world that contained you-lying can't be in principle (in the strong sense described in the previous comment) distinguished from the ones that don't. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don't seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can't we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it's idea that a "proper" consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it's still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being 'sufficient' for a proper consequentialist to care about it. But if we don't, and all that matters is the indefinite future, then don't we face the problem that "in the long term we're all dead"? OK, perhaps some of us think that rule will eventually cease to apply, but for argument's sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we'd want our ethical theory to be more robust than to say "Do whatever you like - nothing matters any more."
If so, maybe we want that.
I can't believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
What does this mean? Consequentialist values are about the world, not about observations (but your words don't seem to fit to disagreement with this position, thus the 'what does this mean?'). Consequentialist notion of values allows a third party to act for your benefit, in which case you don't need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don't need to know about these options in order to benefit.
Less directly, a person may value a world where beliefs were more accurate - in such a world, both lying and bullshit would be negatives.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations - like other spherical cows, this causes a lot of problematic answers, like two-boxing.
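As a sketch, the bookkeeping under assumptions (a)-(e) for one choice (keep the secret vs. confess, after cheating) might look like this; all numeric weights are my own illustrative placeholders, not values from the analysis:

```python
# Illustrative-only weights for assumptions (a)-(e); only their
# relative sizes matter, and they are hypothetical.
CHEAT_BENEFIT = 1.0        # (a) small benefit to the cheater
LIE_COST = -1.0            # (b) small cost to the liar
REVELATION_COST = -3.0     # (d) moderate (severe but transient) cost
TRANSMIT = 0.5             # (e) fraction of effects shared with partner

def total(*direct_effects):
    """Sum direct effects plus the fraction transmitted to the partner."""
    return sum(e + TRANSMIT * e for e in direct_effects)

# Option 1: cheat and lie, secret successfully kept
keep_secret = total(CHEAT_BENEFIT, LIE_COST)
# Option 2: cheat and confess (undermining revelation)
confess = total(CHEAT_BENEFIT, REVELATION_COST)

print(keep_secret, confess)
```

Under these particular weights, keeping the secret nets roughly zero while confessing is clearly negative, which matches the worry raised in #2 that the comparison favors lying unless the truth-telling gamble pays off in a better relationship.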
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband's heart, not for some material benefit. So if she knew the husband didn't love her, she'd tell the truth. The fact that you automatically parsed the situation differently is... disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don't understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can't wait till other people reply to the questionnaire.
The husband does benefit, by her lights. The chief reason it comes out in the husband's favor in #6 is because the husband doesn't value the marital relationship and (I assumed) wouldn't value the child relationship.
You're right - in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it's a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite - particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don't accept most of this analysis, despite being a consequentialist - as I said, it is the "naive consequentialist solution", and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that "happier couples" does not imply "happier coupling" - the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Or what Nesov said below.
It's okay to deceive people if they're not actually harmed and you're sure they'll never find out. In practice, it's often too risky.
1-3: This is all okay, but nevertheless, I wouldn't do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child's welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let's assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you're using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It's more similar to the Prisoner's Dilemma, if you ask me.
6: In the trolley problem, a deontologist wouldn't decide to push the man, so the pseudo-fat man's life is saved, whereas he would have been killed if it had been a consequentialist behind him; the reason for his death is consequentialism.
Maybe you missed the point of my comment. (Maybe I'm missing my own point; can't tell right now, too sleepy) Anyway, here's what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they're lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point, I didn't see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn't clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.
You're right, it's pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don't even need lies for that: three stranded people are dying of hunger, removing the taboo on cannibalism can help two of them survive.
The purpose of my questionnaire wasn't to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
That's an argument that only appeals to the consequentialist.
Of course. I am only arguing that consequentialists want to be consequentialists, despite cousin_it's scenario #6.
I'm not sure that's true. Forms of deontology will usually have some sort of theory of value that allows for a 'better world', though it's usually tied up with weird metaphysical views that don't jibe well with consequentialism.
1-3: It's an alief, not a belief, because I know that lying to my spouse doesn't really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Thanks for the link. I think Alicorn would call it an "unofficial" or "non-endorsed" belief.
Let's put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
It seems like it would be more aptly defined as "the belief that making the world a better place constitutes doing the right thing". Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don't care whether it does.
Some clips by Robert M. Price on the dark-side epistemology of history as done by Christian apologists; Price describes himself as a Christian Atheist.
Not sure how worthwhile Price is to listen to in general though.
Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview at the commonsenseatheism podcast here, which also covers his path to becoming a Christian atheist.
Am I alone in my desire to upload as fast as possible and drive away to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let God decide who's right...
This one came up at the recent London meetup and I'm curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all baby eaters, including the living babies and the ones being digested, it would end up in a place that adult baby eaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, it would be a much more pronounced effect. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
Thoughts?
CEV will be to maintain the existing order.
Why? There must be very strong arguments for BEs to stop doing the Right Thing. And there's only one source of objections: the children. And their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: Ok. Ok. The CEV will be to change the BEs' morals and allow them to not eat children. So, the FAI will undergo controlled shutdown. Objections, please?
EDIT: Here's yet another argument.
Guidelines of FAI as of may 2004.
BEs will formulate this as "Defend BEs (except for the ceremony of BEing), the future of BEkind, and BE's nature."
BEs never considered that child eating is bad. And it is good for them to kill anyone who thinks otherwise. There's no trend in morals that can be encapsulated.
If they stop being BEs they will mourn their wrongdoings to the death.
Every single notion the FAI makes along the lines of "Let's suppose that you are non-BE" will cause it to be destroyed.
Help BEs every time, except for the ceremony of BEing.
How will this take the FAI to the point that every conscious being must live?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: "What rules would you want to prevail if you didn't know in advance who you would turn out to be?"
Where CEV as I understand it adds more information - assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be - the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even - conceivably - how many of you there are.
To this bunch of undifferentiated people you'd put the question, "All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands."
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kin has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it's hard to check intuitions.
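The expected-utility framing behind the Veil can be made explicit; the numbers here are hypothetical placeholders chosen only to model the lexical-precedence claim:

```python
# Hypothetical utilities for the choice behind the Veil of Ignorance.
# Lexical precedence of "not dying horribly" is modeled by making the
# disutility of death dominate any cultural payoff. Numbers invented.
P_EATEN = 0.99           # chance of being an eaten baby
U_DEATH = -1_000_000     # dying horribly shortly after birth
U_TRADITION = 100        # partaking in the babyeating tradition

keep_tradition = P_EATEN * U_DEATH + (1 - P_EATEN) * U_TRADITION
abolish = 0.0            # baseline: nobody is eaten, tradition lost

print(keep_tradition < abolish)
```

Under any weighting where death's disutility swamps the tradition's value, the choice behind the Veil comes out against the tradition, as argued above.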
BEs aren't humans. They are Baby-Eating aliens.
You're correct. I'm using the term "people" loosely. However, I wrote the grand-parent while fully informed of what the Babyeaters are. Did you mean to rebut something in particular in the above?
If we translate it to our cultural context, we will get something like "All in favor of 100% dying horribly of old age, in return for good lives for your babies, please raise your hands". They ARE aliens.
It doesn't seem from the story like the babies are gladly sacrificing for the tribe...
Well, we would say "no" to that, if we had the means to abolish old age. We'd want to have our cake and eat it too.
The text stipulates that it is within the BEs' technological means to abolish the suffering of the babies, so I expect that they would choose to do so, behind the Veil.
Yes, but a surprisingly large number of humans seem to react in horror when you talk about getting rid of aging.
Who will ask them? The FAI has no idea that a) baby eating is bad, or b) it should generalize moral values past BEs to all conscious beings.
Even if the FAI asks that question and it turns out that the majority of the population don't want to do the inherently good thing (which it is, for them), then the FAI must undergo controlled shutdown.
EDIT: To disambiguate. I am talking about FAI, which is implemented by BEs.
Just as we should not allow our FAI to generalize morals past conscious beings (to be sure that it will not take the CEV of all bacteria), so BEs should not allow their FAI to generalize past BEs.
Just as we should build an automatic off switch into our FAI, to stop it if its goals are inherently wrong, so should BEs.
Correct. CEV is supposed to be a component of Friendliness, which is defined in reference to human values.
OpenPCR: DNA amplification for anyone
http://www.thinkgene.com/openpcr-dna-amplification-for-anyone/
We've talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
I've been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I'm studying at the moment. It would probably make sense to expand it to other books including non-math books - though the set of active books should remain small.
Two things have been holding me back - for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don't want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
A question about Bayesian reasoning:
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1/6, it's very different from saying Pr(I roll a one on a fair die) = 1/6.
In the first case, my mom is either on the phone or not, but I'm just saying that I'm pretty sure she isn't. In the second, something may or may not happen, but it's unlikely to happen.
Am I making any sense... or are they really the same thing and I'm over complicating?
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1/6? Or conversely, if you make it P(I rolled a one on the fair die that is now beneath this cup) = 1/6?
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don't have perfect knowledge of the universe, so when I ask, "Is your mum on the phone?" you don't have the guaranteed correct answer ready to go. You don't know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying "I have absolutely no idea in the slightest", you are able to say something more useful: "It's possible, but unlikely." Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes' Law.
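The "probability as quantified knowledge" point can be made concrete with a toy Bayes' Law update; the likelihoods below are invented purely for illustration:

```python
# Prior degree of belief: P(Mom is on the phone) = 1/6.
prior = 1 / 6

# New evidence: you hear muffled talking from her room. Suppose
# (invented numbers) talking is 9x more likely if she's on the phone.
p_talk_given_phone = 0.9
p_talk_given_not = 0.1

# Bayes' Law: P(phone | talk) = P(talk | phone) * P(phone) / P(talk)
evidence = p_talk_given_phone * prior + p_talk_given_not * (1 - prior)
posterior = p_talk_given_phone * prior / evidence

print(round(posterior, 3))
```

Nothing about Mom changed when you heard the talking; only your knowledge did, and the probability moves from 1/6 to about 0.64 to track that.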