I notice overconfidence bias and risk aversion seem to operate in opposite directions. Like: something has a 90% chance of being true, you say it's 99% likely, and then you only bet at 9-to-1 odds.
Do they tend to cancel? How well?
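A quick sketch of what "canceling" would mean here, with illustrative numbers (the probabilities and odds are the ones from the example above, not data):

```python
def lay_ev(p, stake, win):
    """Expected value of risking `stake` to win `win` if an event with
    probability `p` occurs."""
    return p * win - (1 - p) * stake

p_true = 0.90    # actual probability of the claim
p_stated = 0.99  # the overconfident stated probability

# Laying 9-to-1 odds (risk $9 to win $1) looks great to the overconfident
# bettor, but is exactly fair at the true probability.
subjective_ev = lay_ev(p_stated, stake=9, win=1)  # +0.90 by their own lights
actual_ev = lay_ev(p_true, stake=9, win=1)        # 0.00: a fair bet

print(subjective_ev, actual_ev)
```

In this contrived case the two biases cancel perfectly, because risk aversion pulled the bet back to exactly the fair odds for the true probability; nothing guarantees the two effects are equal in general, which is presumably the point of the question.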
A proposed law to require psychologists who testify in court to dress like wizards:
When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong…
I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,
C preferred being single to being with A.
C preferred being with W to being single.
C preferred being with A to being with W.
I think all three of us could have been spared some heartache if we had figured out that was what was going on.
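The three preferences above form a cycle, which is why no single ranking could have satisfied everyone. A minimal sketch (the state names are just placeholders for the situations described):

```python
# C's stated preferences, encoded as "preferred -> less preferred" edges.
prefers = {
    "single": ["with A"],   # C preferred being single to being with A
    "with W": ["single"],   # C preferred being with W to being single
    "with A": ["with W"],   # C preferred being with A to being with W
}

def find_cycle(graph, start):
    """Follow preference edges from `start`; return the path if it loops,
    None if it terminates (i.e. the preferences are consistent)."""
    path = [start]
    node = start
    while True:
        nxt = graph.get(node, [])
        if not nxt:
            return None
        node = nxt[0]
        if node in path:
            return path + [node]
        path.append(node)

print(find_cycle(prefers, "single"))
# -> ['single', 'with A', 'with W', 'single']
```

A cycle like this means the preferences can't be represented by any utility function, so whichever state C was in, some other state looked better.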
Currently listening to the Grace-Hanson podcasts. Topics:
I'm increasingly noticing that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I wake up feeling energetic, productive, and physically good. If I get up even a few hours later, or if I go to bed late but still get up at 8 in the morning, there's a very good chance I will accomplish basically nothing that day. It's weird how getting the timing precisely right seems to be the biggest determining factor in how my day will ...
Summary: Years of life are in finite supply. It is morally better that these be spread among relatively many people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the child has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacr...
Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the child has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
When working on a primarily mental task (e.g. web browsing, studying, programming), I sometimes come up with an idea, forget the idea itself, but remember that I came up with it. Backtracking through my mental steps may help me recall it, but often I can't recall it at all, which is frustrating. Is there a technical term for this I can google, or does anyone have an idea what this is?
I've just seen the Wikipedia article for the ‘overwhelming gain paradox’:
...Harford illustrates the paradox by the comparison of three potential job offers:
- In Job 1, you will be paid $100, and if you work hard you will be paid $200.
- In Job 2, you will be paid $100, and if you work hard you will have a 1% chance of being paid $200.
- In Job 3, you will be paid $100, and if you work hard you will have a 1% chance of being paid $1 billion.
Most people will state that they will choose to work hard in jobs 1 and 3, but not job 2 [2]. In Job 1, working hard is ob
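For comparison, the expected pay from working hard in each job can be sketched as follows (this assumes you still get the base $100 when the bonus doesn't trigger, which the quoted description leaves implicit):

```python
def expected_pay(base, hard_pay, p_bonus):
    """Expected pay from working hard: `hard_pay` with probability
    `p_bonus`, otherwise the base salary."""
    return (1 - p_bonus) * base + p_bonus * hard_pay

job1 = expected_pay(100, 200, 1.0)    # 200.0: hard work pays off for sure
job2 = expected_pay(100, 200, 0.01)   # 101.0: only $1 of expected gain
job3 = expected_pay(100, 1e9, 0.01)   # ~10,000,099: an enormous expected gain

print(job1, job2, job3)
```

On pure expected value, working hard in Jobs 1 and 3 but not Job 2 is consistent with any effort cost between $1 and $100, so the common answer isn't obviously irrational on that axis alone.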
Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that penalizes exploitable decision theories in proportion to how exploitable they are?
Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.
I've recently figured out an all-too-obvious workaround for the vanishing-spaces bug. Considering that links, italics, and bold cover about 95% of all formatting needs, I think some people may find it useful (it has cured my distaste for writing articles on LW).
1) Write a comment or PM in Markdown syntax. Post the thing.
2) Select the text and copy it straight into the WYSIWYG editor.
3) Delete the original post or PM.
It is such an obvious solution, yet I didn't think of it for months.
I'm trying to keep a dream journal, but when I wake up I keep having a cognitive block preventing me from writing my dreams down. It will do anything necessary to stop me, and I regret this later every single time. Does anyone know how to prevent this? I don't think I can do anything about it at the time, so it probably has to be something done beforehand, as I go to bed.
Do con-artistry and the Dark Arts share similar strategies? If so, any in particular?
Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.
Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.
Agree or disagree?
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.
This whole conversation was such a cliché.
Woman: Yay I want to get married with the man I love! Does anyone have any advice?
Man: Marriage is a bad idea. I can't see why anyone would want that.
Woman: I'm allowed to want things! You are being mean.
Man: Don't try and chain the poor guy with whom I suddenly identify!
Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!
Man: I'm sorry.
Woman: Apology accepted.
It could be a cultural or language barrier: the phrase "why in the world would you X" has a literal Slovenian equivalent that, I now realize, carries very different connotations, with much more surprise and much less disapproval than in English.
This phrase might have started the conversation off on the wrong foot, since the seemingly unprovoked hostility and evasiveness later on may have caused me to harden up and even escalate.
It is also possible that, since I have recently had in-person discussions about marriage, I threw some arguments at Alicorn that were originally crafted for someone else. If that was the case, then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/
Why would anyone make a lifetime commitment?
The high cost of divorce can make a lifetime commitment more robust.
Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmail material can also produce high costs. I don't know, this seems like a fake reason: if you set out to optimize for the robustness of a long-range commitment, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem, as currently practised, to be very good at its supposed function.
In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This seems plain unfair and bad signalling, though I admit asymmetric arrangements can be a feature, not a bug.
Also, I seem to be able to maintain long-term relationships with friends and family members without state-enforced contracts. Why should a particular kind of relationship between two people require one? And further, why a contract that can't be much customized...
You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.
That having been said:
Yeah, I know about Valentine's day. That's why this was on my mind.
I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.
My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.
Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.
It seems cultural and perhaps even value differences are at the root of how this conversation proceeded. OK, I think I understand now. I should have suspected this earlier; I was too stuck in my local cultural context, where among the young basically only the religious still marry and it is generally seen as an "old-fashioned" thing to do.
I was told this would be a more appropriate place than the discussion board for this post:
I'm taking a class on heuristics and biases. In this class we have the option to read one of two "applied" books on the subject. The books are "The Panic Virus: A True Story of Medicine, Science, and Fear" by Seth Mnookin and "Sold on Language: How Advertisers Talk to You and What This Says About You" by Julie Sedivy and Greg Carlson.
I'd like to know if anyone has read one or both of these books, and how well or poorly they mesh with Less Wrong rationality.
Thanks, Jeremy
I want to read the paper "Three theorems on recursive enumeration" by Friedberg. It doesn't seem to be available on the open web. Can someone with journal access help me out?
In this comment I pegged a web site as being nothing but a link farm, filled with ads and worthless "content". A couple of ideas occurred to me.
The web site looks to me as if it was actually written by human beings, but computer-generated prose of this sort might not be far off. The better the programmers get at simulating humans (and the spammers are certainly trying), the better humans will have to become at not being mistaken for computers. If you sound like a spambot, it doesn't matter if you really aren't, you'll get tuned out.
And I wonder h...
It seems a suspicious coincidence that our puny human ideas of justice would automatically a) be physically possible and b) have reasonable cost, but this is a very popular belief.
Having read a lot of philosophers talking of morality here, and having read a lot of economists talking of utility, I think I will concentrate on the economists.
I was going to say I think my utility is maximized by spending no more time on the philosophers and using that on economists instead. But of course someone who chose the philosophers might say she believes the moral thing to do is to study the morality instead of the utility.
In physics sometimes you get to a point where your calculation involves subtracting an infinite quantity from another in...
Presumably, the problems of friendly or unfriendly AI are just like the problems of friendly or unfriendly NI (Natural Intelligence). Intelligence seems more an agency, a tool, and friendliness or unfriendliness a largely orthogonal consideration. In the case of humans, I would imagine our values are largely dictated by "what worked." That is, societies and even subspecies with different values would undergo natural selection pressures proportional to how effective the values were at adding to survival and thrivance of the group possessing the...
If you were given the option (somehow) of changing the past such that Alice was not replaced by Bob, thereby causing Bob not to have existed, would you take it? (I'm genuinely unsure what you'll say here)
You're not the only one who is unsure. I've occasionally pondered the ethics of time travel and they make my head hurt. I'm not entirely sure that time travel where it is possible to change the past is a coherent concept (after all, if I change the past so Alice never died, then what motivated present-me to go save her?). If it isn't, then any attempt to inject time travel into ethical reasoning would result in nonsense. So it's possible that the crude attempts at answers I am about to give are all nonsensical.
If time travel where you can change the past is a coherent concept then my gut feeling is that maybe it's wrong to go back and change it. This is partly because Bob does exist prior to me making the decision to go back in time, so it might be "killing him" to go back and change history. If he was still alive at the time I was making the decision I'm sure he'd beg me to stop. The larger and more important part is that, due to the butterfly effect, if I went back and changed the past I'd essentially be killing everybody who existed in the present and a ton of people who existed in the past.
This is a large problem with the idea of using time travel to right past wrongs. If you tried to use time travel to stop World War Two, for instance, you would be erasing from existence everyone who had been born between World War Two and the point where you activated your time machine (because WWII affected the birth and conception circumstances of everyone born after it).
So maybe a better way to do this is to imagine one of those time machines that creates a whole new timeline, while allowing the original one to continue existing as a parallel universe. If that is the case then yes, I'd save Alice. But I don't think this is an effective thought experiment either, since in this case we'd get to "have our cake and eat it too," by being able to save Alice without erasing Bob.
So yeah, time travel is something I'm really not sure about the ethics of.
If you knew that the consequence of doing so would be that everyone in the world right now is a little bit worse off, because Alice will have produced less value than Bob in the same amount of time, would that affect your choice? (I expect you to say no, it wouldn't.)
My main argument hasn't been that it's wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it's wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
The original argument I was replying to basically argued that it was okay to kill older people and replace them with new people because the older people might have done everything fun already and have a smaller amount of fun to look forward to in the future than a new person. I personally find the factual premise of that argument to be highly questionable (there's plenty of fun if you know where to look), but I believe that it would still be wrong to kill older people even if it were true, for the same reasons that it is wrong to replace Alice with Bob.
If Bob produces a sufficiently greater amount of value for others than Alice then it might be acceptable to replace her with him. For instance, if Bob invents a vaccine for HIV twenty years before anyone would have in a timeline where he didn't exist it would probably be acceptable to kill Alice, if there was no other possible way to create Bob.
That being said, I can still imagine a world where Alice exists being slightly worse for everyone else, even if she produces the same amount of value for others as Bob. For instance, maybe everyone felt sorry for her because of her disabilities and gave her some of their money to make her feel better, money they would have kept if Bob existed. In that case you are right, I would still choose to save Alice and not create Bob.
But if Alice inflicted a sufficiently huge disutility on others, or Bob was sufficiently better at creating utility for others than Alice, I might consider it acceptable to kill her and make Bob. Again, my argument is it's wrong to kill and replace people because they are bad at producing utility for themselves, not that it is wrong to kill and replace people because they are bad at producing utility for others.
My main argument hasn't been that it's wrong to kill Alice and replace her with Bob, even if Bob is better at producing value for others. It has been that it's wrong to kill Alice and replace her with Bob, even though Bob is better at producing value for himself than Alice is at producing value for herself.
Huh. I think I'm even more deeply confused about your position than I thought I was, and that's saying something.
But, OK, if we can agree that replacing Alice with Bob is sometimes worth doing because Bob is more valuable than Alice (or valuable-to-ot...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.