I notice overconfidence bias and risk aversion seem to operate in opposite directions. Like, there's a 90% chance of something being true, you say it's 99% likely, and then you bet at 9 to 1 odds.
Do they tend to cancel? How well?
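To put numbers on the cancellation (these are just the figures from the example above, not a general result): betting at 9 to 1 on an event whose true probability is 90% is exactly fair, whereas betting at the odds your stated 99% would justify loses heavily. A quick sketch:

```python
def bet_ev(true_p, stake, payout):
    """Expected profit of risking `stake` to win `payout` on an event
    with true probability `true_p`."""
    return true_p * payout - (1 - true_p) * stake

# A stated belief of 99% would justify staking 99 to win 1,
# but risk aversion keeps the bet at 9 to win 1:
print(bet_ev(0.90, stake=9, payout=1))   # ~0: fair, matches the true 90%
print(bet_ev(0.90, stake=99, payout=1))  # ~-9: the stated odds lose badly
```

So in this one case the two biases cancel exactly; how well they cancel in general would depend on how each bias scales with the underlying probability.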
A proposed law to require psychologists who testify in court to dress like wizards:
When a psychologist or psychiatrist testifies during a defendant’s competency hearing, the psychologist or psychiatrist shall wear a cone-shaped hat that is not less than two feet tall. The surface of the hat shall be imprinted with stars and lightning bolts. Additionally, a psychologist or psychiatrist shall be required to don a white beard that is not less than 18 inches in length, and shall punctuate crucial elements of his testimony by stabbing the air with a wand. Whenever a psychologist or psychiatrist provides expert testimony regarding a defendant’s competency, the bailiff shall contemporaneously dim the courtroom lights and administer two strikes to a Chinese gong…
I had a somewhat chaotic phase in my romantic life a few years ago, and I just had the thought that a lot of it could be modeled as a result of non-transitive preferences. Specifically,
C preferred being single to being with A.
C preferred being with W to being single.
C preferred being with A to being with W.
I think all three of us could have been spared some heartache if we had figured out that was what was going on.
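The three preferences above form a cycle, which is easy to check mechanically. A toy sketch (the names and the preference table are just the ones from this comment):

```python
# C's pairwise preferences, written as (preferred, less_preferred):
prefers = {("single", "A"), ("W", "single"), ("A", "W")}

def has_cycle(prefs):
    """Detect a cycle in a strict preference relation (i.e. detect
    non-transitivity) by depth-first search over worse -> better edges."""
    graph = {}
    for better, worse in prefs:
        graph.setdefault(worse, []).append(better)
    visited, stack = set(), set()

    def visit(node):
        if node in stack:      # back-edge: we have looped around
            return True
        if node in visited:
            return False
        visited.add(node)
        stack.add(node)
        if any(visit(nxt) for nxt in graph.get(node, [])):
            return True
        stack.remove(node)
        return False

    return any(visit(n) for n in list(graph))

print(has_cycle(prefers))  # True: single > A, A > W, W > single
```

A transitive preference set, e.g. `{("A", "B"), ("B", "C")}`, comes back `False`.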
Currently listening to the Grace-Hanson podcasts. Topics:
I'm increasingly noticing that maintaining a specific, regular sleep pattern is worth making sacrifices for. Specifically, if I go to bed around 10:30 PM and get up around 8 AM, I wake up feeling energetic, productive and physically good. If I get up even a few hours later, or if I go to bed late but still get up at 8 in the morning, there's a very good chance that I will accomplish basically nothing that day. It's weird how getting the timing so precisely correct seems to be the biggest determining factor in how my day will ...
Summary: Years of life are in finite supply. It is morally better that these be spread among relatively more people rather than concentrated in the hands of a relative few. Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
The argument would be limited to certain age ranges; an unborn fetus or newborn infant might justly be sacr...
Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
When working on a primarily mental task (example: web browsing, studying, programming), I sometimes find myself coming up with an idea, forgetting the idea itself, but remembering that I have come up with it. Backtracking through the mental steps sometimes helps, but often I can't recall the idea at all, which is frustrating. Is there a technical term for this I can google, or does anyone have an idea what this is?
I've just seen the Wikipedia article for the ‘overwhelming gain paradox’:
...Harford illustrates the paradox by the comparison of three potential job offers:
- In Job 1, you will be paid $100, and if you work hard you will be paid $200.
- In Job 2, you will be paid $100, and if you work hard you will have a 1% chance of being paid $200.
- In Job 3, you will be paid $100, and if you work hard you will have a 1% chance of being paid $1 billion.
Most people will state that they will choose to work hard in jobs 1 and 3, but not job 2 [2]. In Job 1, working hard is ob
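For what it's worth, the expected values sharpen the puzzle (assuming, for illustration, that hard work is equally costly in all three jobs):

```python
def expected_pay(base, p_bonus, bonus):
    """Expected pay from working hard: with probability `p_bonus` you are
    paid `bonus`, otherwise you are paid `base`."""
    return (1 - p_bonus) * base + p_bonus * bonus

print(expected_pay(100, 1.00, 200))  # Job 1: 200.0
print(expected_pay(100, 0.01, 200))  # Job 2: ~101
print(expected_pay(100, 0.01, 1e9))  # Job 3: ~10,000,099
```

On expected value alone, hard work gains $100 in Job 1, about $1 in Job 2, and about $10 million in Job 3, so the puzzle is why Jobs 1 and 3 feel worth the effort while Job 2 does not.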
Can't an AI escape the dangers of Pascal's Mugging by having a decision theory that weighs against having exploitable decision theories according to the measure of their exploitability?
Scumbag brain is a newish meme of the generic image macro variety. Some are pretty entertaining and relevant to the LW ideaspace, but most are lowest common denominator-style "broke up with girlfriend, makes you feel sad about it for weeks".
Since there seem to be quite a few lesswrongers involved in making games, or interested in doing it as a hobby, I just created a little mailing-list for general chat - talk about your projects, rant about design theory, ask for advice, talk about how to apply lesswrong ideas to game development, talk about how to apply game development ideas to lesswrong's goals, etc.
I've recently figured out an all too obvious workaround for the vanishing spaces bug. Considering that links, italics and bold cover basically 95% of all formatting needs, I think some people may find use for it (it has cured my distaste for writing articles on LW).
1) Write a comment or PM in Markdown syntax. Post the thing.
2) Select the text and copy it straight into the WYSIWYG editor.
3) Delete the original post or PM.
It is such an obvious solution, yet I didn't think of it for months.
I'm trying to keep a dream journal, but when I wake up I keep having this cognitive block preventing me from writing my dreams down. It will do anything necessary to stop me. I regret this later every single time. Does anyone know how to prevent this? I don't think I can do anything about it in the moment, so it probably has to be something done beforehand, as I go to bed.
Do con-artistry and the Dark Arts share similar strategies? If so, any in particular?
Are there any guidelines, or does anyone have any significant thoughts, about mentioning Less Wrong in text in fanfiction (or any other type of fiction)? I know a lot of people came here by way of HP:MoR, myself included, but I'm interested if anyone has reasons that they believe it would be a bad idea, or an especially good one.
Caring about conscious minds where you can't observe them existing carries basically the same philosophical problems as caring about pretty statues (and other otherwise desirable or undesirable arrangements of matter) where you can't observe them.
Agree or disagree?
What does the outside view say about when during the course of a relationship it is wisest to get engaged (in terms of subsequent marital longevity/quality)? Data that doesn't just turn up obvious correlations with religious groups who forbid divorce is especially useful.
"Why in the world would anyone [X]?" comes off as starting with a strong opinion that [X] is a bad idea, rather than actually asking for information about motives.
This whole conversation was such a cliché.
Woman: Yay I want to get married with the man I love! Does anyone have any advice?
Man: Marriage is a bad idea. I can't see why anyone would want that.
Woman: I'm allowed to want things! You are being mean.
Man: Don't try and chain the poor guy with whom I suddenly identify!
Woman: I hate you and my fear of instability and falling out of love that you now represent! I want to wear a wedding dress and a pretty ring on my hand!
Man: I'm sorry.
Woman: Apology accepted.
It could be a cultural or language barrier: the phrase "why in the world would you X" has a literal Slovenian equivalent that, I now realize, carries very different connotations. Much more surprise and much less disapproval than in English.
This phrase might have set the conversation off on the wrong foot, since the seemingly unprovoked hostility and evasiveness later on may have caused me to respond by hardening up and even escalating.
It is also possible that, since I have recently had in-person discussions about marriage, I just threw some arguments at Alicorn that were originally crafted for someone else. If so, then we both became pretty emotional in the discussion because of its relevance to our personal lives. :/
Why would anyone make a lifetime commitment?
The high cost of divorce can make a lifetime commitment more robust.
Committing a crime together and vowing to remain silent produces high costs. Exchanging embarrassing pictures or other blackmail material can also produce high costs. I don't know, this seems like a fake reason; if you set out to optimize for the robustness of a long-range commitment, would you really end up with anything like marriage? Especially since more than 50% of all marriages end in divorce, it doesn't seem, as currently practised, very good at its supposed function.
In addition, unlike other imaginable mechanisms, this one isn't symmetric unless it is a same-sex marriage. The penalties are on average significantly higher for the male participant. This seems plain unfair and bad signalling, though I admit asymmetric arrangements can be a feature rather than a bug.
Also, I seem to be able to maintain long-term relationships with friends and family members without state-enforced contracts. Why should a particular kind of relationship between two people require one? And further, why a contract that can't be much customized...
You're being kind of a jerk. Your questions aren't relevant to the information I wanted; you're just picking on me because I brought up something vaguely related.
That having been said:
Yeah, I know about Valentine's day. That's why this was on my mind.
I don't think singlehood will kill my partner or cause him to shun me. (Although if I didn't poke him about cryo, he might cryocrastinate himself to room-temperatureness.) I'm not hoping that anyone will "enforce" anything about my prospective marriage.
My culture encourages permanent and public-facing relationships to be solidified with a party and thereafter called by a different name. In particular, it has caused me to assign value to producing children in this context rather than outside of it. I believe that getting married will affect my primate brain and the primate brains of my and my partner's families and friends in various ways, mostly positive. It will entitle me to use different words, which I want, and entitle me to wear certain jewelry, which I want, and allow me to summarize my inextricability from my partner very concisely to people in general, which I want. It will also allow me to get on my partner's health insurance.
Edit in response to edit: I'm poly, but my style of poly involves a primary relationship (this one). It doesn't seem at all unreasonable to go ahead and promote it to a new set of terms.
It seems cultural and perhaps even value differences are at the root of how this conversation proceeded. OK, I think I understand now. I should have suspected this earlier; I was too stuck in my local cultural context, where among the young basically only the religious still marry and it is generally seen as an "old-fashioned" thing to do.
I was told this would be a more appropriate place than the discussion board for this post:
I'm taking a class on heuristics and biases. In this class we have the option to read one of two "applied" books on the subject. The books are "The Panic Virus: A True Story of Medicine, Science, and Fear" by Seth Mnookin and "Sold on Language: How Advertisers Talk to You and What This Says About You" by Julie Sedivy and Greg Carlson.
I'd like to know if anyone has read one or both of these books, and how well or poorly they mesh with Less Wrong rationality.
Thanks, Jeremy
I want to read the paper "Three theorems on recursive enumeration" by Friedberg. It doesn't seem to be available on the open web. Can someone with journal access help me out?
In this comment I pegged a web site as being nothing but a link farm, filled with ads and worthless "content". A couple of ideas occurred to me.
The web site looks to me as if it was actually written by human beings, but computer-generated prose of this sort might not be far off. The better the programmers get at simulating humans (and the spammers are certainly trying), the better humans will have to become at not being mistaken for computers. If you sound like a spambot, it doesn't matter if you really aren't, you'll get tuned out.
And I wonder h...
It seems a suspicious coincidence that our puny human ideas of justice would automatically be a) physically possible and b) achievable at reasonable cost, but this is a very popular belief.
Having read a lot of philosophers talking of morality here, and having read a lot of economists talking of utility, I think I will concentrate on the economists.
I was going to say I think my utility is maximized by spending no more time on the philosophers and using that on economists instead. But of course someone who chose the philosophers might say she believes the moral thing to do is to study the morality instead of the utility.
In physics sometimes you get to a point where your calculation involves subtracting an infinite quantity from another in...
Presumably, the problems of friendly or unfriendly AI are just like the problems of friendly or unfriendly NI (Natural Intelligence). Intelligence seems more an agency, a tool, and friendliness or unfriendliness a largely orthogonal consideration. In the case of humans, I would imagine our values are largely dictated by "what worked." That is, societies and even subspecies with different values would undergo natural selection pressures proportional to how effective the values were at adding to survival and thrivance of the group possessing them.
Suppose, as this group generally does, that a self-modifying AI will have the ability to modify itself by design, and that one of the values it designs toward is higher intelligence. Is such an evolution constrained by evolution-like pressures or is it not?
The argument that it is not is that it is changing so fast, and so far ahead of any conceivable competition, that from the point of view of the evolution of its values, it is running "open loop." That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and only follow either some sort of internal logic of value-change as intelligence increases, or else follow no logic at all, go in some sense on a "random walk" through possible values. That is, with the quickly increasing intelligence, the values of the FOOMing AI are nearly irrelevant to its overall effectiveness, and therefore totally irrelevant to determining whether it will survive and thrive going up against humans. Its intelligence is sufficient to guarantee its survival; its values get a free ride.
But is this right? Does a FOOMing AI really look like a single intelligence ramping up its own ability? This is certainly NOT the way evolution has gone about improving the intelligence of our species. Evolution tries many small modifications and then does natural experiments to see which ones do better and which do worse. By attrition it keeps the ones that did better and uses these as a base for further experiments.
My own sense of how I create using my intelligence is that I try many different things. Many are tried purely in the sandbox of my own brain, run as simulations there, and only the more promising kept for further testing and development. It seems to me that my pool of ideas is an almost random noise of "what ifs" and that my creative intelligence is the discrimination function filtering which of these ideas are given more resources and which are killed in the crib.
So intelligent creation seems to me to be very much like evolution, with competition.
Might we expect an AI to do something like this? To essentially hypothesize various modifications to itself, and then to test the more promising ones by running them as simulations, with increasing exactitude of the sims as the various ideas are winnowed down to the best ones?
Might an AI determine that the most efficient way to do this is to actually have many competing versions of itself constantly running, essentially, against each other? Might the FOOMing of an AI look a lot like the FOOMing of NI, which is what is going on on our planet right now?
I really don't know what the implications of this point of view are for FAI. I don't know whether this point of view is even at odds in any real way with SIAI's biggest worries.
I do wonder whether humanity is meant to survive when, in some sense, whatever comes next arrives. In one picture, the dinosaurs did not survive their design of mammals. (They designed mammals by putting a lot of selection pressure on mammals). In another picture, the dinosaurs did survive their design of mammals, but they survived by "slightly modifying" themselves into birds and lizards and stuff.
The next step is electronic-based intelligence, kick-started on its evolution by us, just as we were kick-started by plants (there are NO animals until you have plants), and plants were kick-started by simpler life that exploited less abundant but more available energy in chemical mixes. Or the next step might be something that arrives through some natural path we are not considering carefully: either aliens invading, or a strong psi arising among the whales so that their intelligence grows enough to overcome their lack of digits.
Whatever the next step, if its presence has the human race survive and thrive by doing the equivalent of what turned dinosaurs into birds, or turned wolves into domesticated dogs, does that count as Friendly or Unfriendly?
And is there really any point at all to fighting against it?
That is, the first AI to go FOOM is so far superior in ability to anything else in the world that its subsequent steps of evolution are unconstrained by any outside pressures, and only follow either some sort of internal logic of value-change as intelligence increases, or else follow no logic at all, go in some sense on a "random walk" through possible values.
The AI is not supposed to change its values, regardless of whether it is powerful enough to realize them. Values are not up for grabs. Once the AI has some values it either wins and reshap...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.