This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.

Yet more "stupid" questions

I occasionally have dreams in which I am playing an RTS videogame like Starcraft. In these, I am a disembodied entity seeing the world only as it might be displayed in such a game. During those dreams, this feels natural and unsurprising and I don't give the matter a second thought. In fact, I've been having these dreams for a while now and only just recently noticed the odd fact that it's not me sitting at a computer playing the game; the game is simply the only thing in the world at all.

Do other people have dreams in which they are not human-shaped, or in which they otherwise experience things from a perspective very different from real life?

3knb
I used to have Age of Empires dreams. I've even had Tetris dreams.
5tgb
Tetris dreams are a well-known phenomenon, but the reports of them I've read are unclear about the nature of the dreams themselves. Do you just see falling blocks? Or is it as if you are in a Tetris universe with nothing else? Can anyone comment or elaborate on the sensation?
5[anonymous]
I had numerous Tetris dreams during my peak of playing and for many months afterwards. My own experience was mostly going about my business in ordinary cityscapes, office spaces, and rooms in my house, but with Tetris pieces constantly falling into gaps between objects. Rotate/drop was under my control but not always dependably so, sometimes creating an experience of panic, as there was often some unknown but disastrous consequence of failure. During this period the incidence of such dreams increased with more Tetris-playing, but they also occurred more often when I was stressed at work, in which cases the Tetris shapes were also somehow related to the complex statistical/simulation programming I was doing in my day job. I gave up Tetris cold-turkey when I began to see imaginary shapes falling between real objects during waking hours. Other games since then have had similar but far smaller effects on my dream states.
2knb
I'm trying to recall; I haven't played Tetris in a few years. IIRC, it was like playing Tetris on my computer, but without anything in my peripheral vision.
2[anonymous]
I get something similar, in that I frequently lose my perspective as a humanoid actor in my dreams. It appears that in my own dreams I am more or less incapable of simulating another living being without subjectively experiencing that being's thoughts and emotions at the same time. Perhaps for that reason my dreams are usually of empty wilderness or stars flying around. However, a few times per month I wake up very confused at just being one person, because while asleep I experienced the thoughts of multiple individuals simultaneously, including whatever emotions they felt and their relative lack of information about each other's perspectives. The maximum number of people I've been at once was seven, where three beings were fighting another two beings to save the other three from being tortured.
2CronoDAS
I've seen top-down perspectives in dreams, such as those in 2D RPGs. I feel like I'm playing a video game, but I don't have an awareness of a controller or anything; the characters just do what I tell them, and the "screen" is my entire visual field. (The actual experience of playing a video game tends to be similar: I almost never think about the controller or my hand; I just make stuff happen.) I also tend not to have much of a kinesthetic sense in dreams I remember, either. Another weird thing: Everything I try to type in dreams is invariably misspelled. Once, in a dream, I was trying to Google something, but the text I was "typing" in the search bar kept changing pretty much at random. Only the letters that I'm "looking at" during any given moment stay what they are.
3Viliam_Bur
Happens to me too, except instead of googling it is usually me trying to write something down, e.g. someone's phone number, and failing to make the text legible, or realising I wrote some nonsense instead of what I tried to write.

Actually, this is one of the techniques for lucid dreaming -- how to realize that you are in a dream. You need a test that will reliably give different results in reality and in dreams. Different things work for different people, but reading and writing are among the frequent examples. Other examples: counting, or trying to levitate. (With levitation it is the other way round: it works only in dreams.)

Strange. I just now realized I have probably never used a computer in my dreams, although I spend most of my days at the computer. How is that possible? An ad-hoc explanation is that precisely because my life is so much connected with computers, I don't perceive the computer as a "computer", but merely as an extension of myself, as another input/output channel. Most of my dreams are about being with people or walking in nature; and I actually do very little of that.
4Gunnar_Zarncke
Failing to achieve any kind of goals is a very common topic of dreams.
0Ishaan
Yes to both. It's very common for me to be human but different (child, woman, or very different looking man). Most common non-human forms are non-human-ape, wolf, or bird. Sometimes it's an imaginary monster of some sort. But dreaming in non-human forms is generally fairly rare. Most common non-embodied perspectives are either eye-level-view, television view-style, or looking down from a distance. In these cases, I'll either self-identify with one of the bodies, or simply be an observer. This frequently switches mid-storyline.
0FiftyTwo
I've had similar dreams. In general I don't think I'm aware of my self/body in dreams. Occasionally I'm different people but don't notice.
0RowanE
I've had some dreams like that - a few dreams were specifically of the game Supreme Commander, and I'm also occasionally in third person in dreams, as if I were watching from the screen of a third-person game. I don't think it's really "very different from real life"; it's close to the experience of being immersed in a videogame. It's just that rather than overlooking details you're not paying attention to, those details simply don't exist, because it's a dream.

Is LSD like a thing?

Most of my views on drugs and substances were formed, unfortunately, by history and by invalid perceptions of their users and of those who appear to support their legality most visibly. I was surprised to find the truth about acid at least a little further toward "safe and useful" than my longtime estimation. This opens up the possibility of an attempt at recreational and introspectively therapeutic use, if only as an experiment.

My greatest concern would be that I would find the results of a trip irreducibly spiritual, or some other nonsense. That I would end up sacrificing a lot of epistemic rationality for some of the instrumental variety, or perhaps a loss of both in favor of living off of some big, new, and imaginary life-changing experience.

In short, I'm comfortable with recent life changes and recent introspection, and I wonder whether I should expect a trip to reinforce and categorize those positive experiences, or else replace them with something farcical.

Also I should ask about any other health dangers, or even other non-obvious benefits.

One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.

When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".

I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.

7AndyWood
I won't be able to do it justice in words, but I like to try. If you value your current makeup as a "rationalist" - LSD will not necessarily help with that. Whatever your current worldview, it is not "the truth", it is constructed, and it will not be the same after you come down. You can't expect a trip to do anything in particular, except maybe blow your mind.

A trip is like finding out you were adopted. It's discovering a secret hidden in plain sight. It's waking up to realize you've never been awake before - you were only dreaming you were awake. It's finding out that everything familiar, everything you took for granted, was something else all along, and you had no idea. No matter how much you've invested in the identity of "rationalist", no matter how much science you've read... Even if you know how many stars there are in the visible universe, and how many atoms. Even if you've cultivated a sense for numbers like that, real reality is so much bigger than whatever your perception of it is. I don't know how acid works, but it seems to open you in a way that lets more of everything in. More light. More information. Reality is not what you think it is. Reality is reality. Acid may not be able to show you reality, but it can viscerally drive home that difference. It can show you that you've been living in your mind all your life, and mistaking it for reality.

It will also change your sense of self. You may find that your self-concept is like a mirage. You may experience ego-loss, which is like becoming nobody and nothing in particular, only immediate sensory awareness and thought, unconnected to what you think of as you, the person.

I don't know about health dangers. I never experienced any. Tripping does permanently change the way you view the world. It's a special case of seeing something you can't un-see. Whether it's a "benefit" ... depends a lot on what you want.

(Created an alternative username for replying to this because I don't want to associate my LSD use with my real name.)

I'd just like to add a contrary datapoint - I had one pretty intense trip that you might describe as "fucking weird", which was certainly mind-blowing in a sense. My sense of time transformed: it stopped being linear and started feeling like a labyrinth that I could walk in. I alternately perceived the other people in the room as being real separate people or as parts of my own subconscious, and at one point it felt like my unity of consciousness shattered into a thousand different strands of thought, which I could perceive as complex geometric visualizations...

But afterwards, it didn't particularly feel like I'd learned anything. It was a weird and cool experience, but that was it. You say that one's worldview won't be the same after coming down, but I don't feel like the trip changed anything. At most it might've given me some mildly interesting hypotheses about how the brain might work.

I'm guessing that the main reason for this might be that I already thought of my reality as being essentially constructed by my brain. Tripping did confirm that a bit, but then I never had serious doubts about it in the first place.

0RowanE
I don't think describing the experience itself is very helpful in answering the question. The comment seems as close to an answer of "yes, it's likely you would find the results of a trip irreducibly spiritual or some other nonsense" as someone would actually give, but because of the vagueness that seems to be intrinsic to descriptions of the experience of a trip, I'm not even sure whether you're espousing such things or not.
6AndyWood
In my experience, it is possible to bring parts of the experience back and subject it to analytical and critical thinking, but it is very challenging. The trip does tend to defy comprehension by the normal mode of consciousness, which is why descriptions have the quality you call "vagueness". In short, distilling more than "irreducibly spiritual nonsense" from the trip takes work, not unlike the work of organizing thoughts into a term paper. It can be done, and the more analytical your habits of thought to begin with, the more success I think you could expect.
2gattsuru
I don't imbibe (nor, for that matter, take pretty much anything stronger than caffeine), so I can't offer any information about the experience of its effects on rationality. From the literature, it has a relatively high ratio of lethal dose to activity threshold (even assuming the lowest supported toxic doses), but that usually doesn't include behavioral toxicity. Supervision is strongly recommended. There's some evidence that psychoactive drugs (even weakly psychoactive drugs like marijuana) can aggravate preexisting conditions or even trigger latent conditions like depression, schizophrenia, and schizoid personality disorder.
0hyporational
Another data point here. I've done LSD a couple of times, and didn't find the experience "spiritual" at all. The experience was mostly visual: an illusion of movement in static objects with eyes open, and intense visualization with eyes closed. It's hard to describe these images, but it felt like my visual cortex was overstimulated, and randomly generated geometric patterns intertwined with visual memories and newly generated constructs and sceneries. This all happened while travelling through a fractal-like pattern, so I felt the word "trip" was quite fitting. The trip didn't seem to affect my thinking much during or after. I can see why a susceptible (irrational) mind could find this chemical alteration of consciousness a godly revelation, but I can't imagine taking the stuff for anything other than entertainment purposes. A couple of friends of mine had similar experiences.

LSD is known to cause persistent psychosis, apparently in people who already have latent or diagnosed mental health problems. This is what they teach in my med school, but the epidemiology of the phenomenon was left vague.
2NancyLebovitz
I find that LSD does have emotional effects-- for me, it's a stimulant and it tends to cheer me up.
0hyporational
Now that I think about it, I felt quite elated too. Could have been just the novel experience though, hard to say. Some other emotions perhaps intensified too, but I wasn't interested in exploring that avenue.
0FiftyTwo
Datapoint: another hallucinogen, ketamine, has been shown to effectively treat depression. Not sure if the mechanisms of LSD are similar.
2kalium
The visual system is very complicated, and many different classes of drugs can cause hallucinations in different ways without the overall experience being similar. Ketamine and LSD do not have similar mechanisms in the brain, nor (from what I've read) are their effects qualitatively similar. LSD is a psychedelic acting as an agonist at 5-HT_2A receptors (among other things, but that's what it shares with the other classic psychedelics). Ketamine is a dissociative anesthetic acting as an antagonist at NMDA receptors. LSD is, however, effective against migraines at sub-hallucinogenic doses.

How does stage hypnotism "work?"

6sixes_and_sevens
Based on the descriptions of thoughtful, educated people who practise hypnosis, it seems useful to think of it as a "suite" of psychological effects such as suggestion, group conformity, hype, etc., rather than a single coherent phenomenon.
3knb
Not sure exactly what you want to know here, but here are a few basic points:

1. Hypnotized people are not unconscious; rather, they are fully awake and focused.
2. Brain scans don't show any signs of abnormal brain activity during hypnosis.
3. Some psychologists argue hypnotized people are just fulfilling the socially expected role for a hypnotized person.
2CronoDAS
That depends on what you consider "abnormal". The states appear to be the same kind of states that occur in "normal" functioning, but they appear out of the context that they normally appear in. For example, according to one study a person exposed to a painful stimulus and one acting out a hypnotic suggestion to feel pain show similar patterns of brain activation, but a person told to "imagine" feeling pain shows a different one. In general, brain scans do tend to show a difference between hypnotized subjects and subjects asked to pretend to be hypnotized. My interpretation of these results is that hypnosis consists of the conscious mind telling the perceptual systems to shut up and do what they're told.
2Omid
Do you know how normal hypnotism works?
2Scott Garrabrant
No

The subject basically pretends that everything the hypnotist says is true. Have you ever played a video game and got so wrapped up in the virtual world that you just stopped noticing the real world? That's called immersion, and it's achieved by keeping your attention focused on the game. When your attention drifts away from the game, you start noticing that it's 2 am or that you've been playing for four hours, and you remember that you are not in the video game, you're just playing a video game. But as long as your attention remains on the game, you feel like you are actually living in the video game's world. Gamers love the feeling of immersion, so developers put a lot of work into figuring out how to keep gamers' attention, which maintains the immersion.

Hypnosis works on the same principle. The hypnotist uses the patient's full attention to create an imaginary world that feels real to the patient. The difference between video games and hypnosis is that hypnosis patients actively give their attention to the hypnotist, while gamers passively expect the game to take their attention. When a hypnotic induction starts, the subject is asked to imagine something in great detail, effectively putt... (read more)

9NancyLebovitz
From Derren Brown's Tricks of the Mind:
3Viliam_Bur
A professional hypnotist once told me that it is very difficult to hypnotize "mathematicians" (by which he meant math, physics, and computer science students), because (this was his interpretation) they are too well connected with reality and will not accept nonsense. But he also said that given enough time and trying different hypnotists, probably everyone can be hypnotized.

This happened at a hypnosis training camp, where this guy had an interesting idea: to teach hypnosis more efficiently, he would hypnotize all the participants and give them hypnotic commands to remember the lessons better. And then he would teach the theory and let us do the exercises, as usual. Also, he said that in order to learn hypnosis it is better to be hypnotized first, because then you know what it feels like to be hypnotized, and that knowledge is very useful when hypnotizing others (you have better intuition about what can and cannot work). -- This strategy seemed to work for many participants, most of whom were psychology students. Only two people in the group couldn't be hypnotized: me and one girl, both students of computer science. The only time in my life when I regretted I wasn't more susceptible to hypnosis. So in the end, all I learned was some theory.
3ChristianKl
"Connected to reality" is in this context a nice way of saying that someone can't let go and relax. Computer Science/Physics/Math people especially have a problem with forgetting numbers because numbers are way more important for them then the usual person. That's not about having an intuition about what works. Part of hypnotising somebody else effectively involves going into a trance state yourself.

I'm mid-twenties. Does it make sense to take a low-dose aspirin?

No, it does not. Aspirin reduces the risk of heart attacks and strokes, but it also causes adverse outcomes - most importantly by raising the risk of gastrointestinal bleeds. For the typical person in their mid-twenties, the risk of a heart attack or stroke is so low that the benefit of aspirin will be almost nil: the absolute value of intervening will be vanishingly small even though the proportional decrease in risk stays the same.

There are many possible effects of taking low-dose aspirin other than those described so far - it may reduce the risk of colon cancer, for instance - but there are many possible adverse outcomes too. Cyclooxygenase - the enzyme targeted by aspirin - is involved in many housekeeping functions throughout the body, in particular in the kidneys, stomach, and possibly erectile tissue.

Studies examining risk versus benefit for low-dose aspirin treatment have found that a cardiovascular risk of about 1.5%/year is necessary for the benefits of aspirin to outweigh the ill effects. Whilst no studies have been conducted on healthy young individuals, I don't think such studies should be conducted, given that studies in those at a much higher cardiovascular risk than someone ... (read more)
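(To make the scale of that threshold concrete, here is a toy comparison. The 1.5%/year break-even figure is quoted from the comment above; the young-adult risk figure is an assumed round number for illustration, not a clinical estimate.)

```python
# Toy arithmetic only. The 1.5%/yr break-even threshold is quoted above;
# the 0.05%/yr figure for a healthy mid-twenties adult is an assumed
# round number for illustration, not clinical data.
breakeven_risk = 0.015        # annual cardiovascular risk where benefit ~ harm
assumed_young_risk = 0.0005   # illustrative annual risk for a healthy 25-year-old

print(breakeven_risk / assumed_young_risk)  # ~30: roughly 30x short of break-even
```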

How do you cure "something is wrong on the Internet" syndrome? It bugs me when people have political opinions that are simplistic and self-congratulating, but I've found that arguing with them wastes time and energy and rarely persuades them.

Cultivate a sense of warm satisfaction every time you avoid a pointless online debate.

Really think about how very much is wrong on the internet compared to your capacity to try to correct it. I think this might be a case of cultivating scope sensitivity.

Or (which is what I think I do) combine that with a sense that giving a little shove towards correctness is a public service, but it isn't a strong obligation. This tones the compulsion down to a very moderate hobby.

9Viliam_Bur
For me, debating with people on LessWrong somehow cured the syndrome. Now when I see a political debate among non-LessWrongians, the participants seem like retarded people -- I no longer expect them to be reasonable; I don't even expect them to be able to understand logical arguments and process them correctly; I don't feel any hope of conveying anything meaningful to any of them. (At best we could have an illusion of understanding.) Speaking with them would be like speaking with a rock; certainly not tempting. I am not saying this is a correct model of the world. It is probably exaggerated a bit. I am just explaining that this is how I feel, and this is what cured the syndrome. These days the syndrome manifests mostly when I am speaking with someone who I think could be rational -- if they feel like a potential LW candidate. It usually ends with me revising my opinion about the candidate, and silently stopping. So, for me the cure was feeling that the inferential distance between a typical internet discussion and a rational discussion is so huge that I don't have a chance to overcome it in one debate.
7Shmi
Realize that it's not their fault, they are just automatons with faulty programming.
4FiftyTwo
I just became unwilling to devote the effort to replying.
8[anonymous]

I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person's quality of life can be quantified--i.e., that person's "utility"--and these utilities can be aggregated. Under preference utilitarianism, a person's utility is determined based on whether their values are being fulfilled. Under all of the classical formulations of utilitarianism, everyone's utility function has the same weight when th... (read more)

So are people conflating utility maximization and utilitarianism?

Often, yes.

0Douglas_Knight
It's true that people often conflate utilitarianism with consequentialism, but I don't think that's what's going on here. I think it is quite reasonable to include under utilitarianism moral theories that are pretty close, like weighting people when aggregating. If people think that raw utilitarianism doesn't describe human morality, isn't it more useful for the term to describe people departing from the outpost, rather than the single theory? Abstract values that are not per-person are more problematic to include in the umbrella, but searching for "free" in that post doesn't turn up an example. If your definition is so narrow that you reject Nozick's utility monster as having to do with utilitarianism, then your definition is too narrow. Also, the lack of a normalization means that giving everyone "the same weight" does not clearly pin it down.
0blacktrance
This confused me for a long time too. I ultimately came to the conclusion that "utilitarianism" as that word is usually used by LessWrongers doesn't have the standard meaning of "an ethical theory that holds some kind of maximization of utils in the world to be the good", and instead uses it as something largely synonymous with "consequentialism".

"Consequentialism" is too broad, "utilitarianism" is too narrow, and "VNM rationality" is too clumsy and not generally thought of as a school of ethical thought.

1blacktrance
It sounds like certain forms of egoism.
0[anonymous]
Egoism, perhaps?

What fiction should I read first?

I have read pretty much nothing but MoR and books I didn't like for school, so I don't really know what my preferences are. I am a mathematician and a Bayesianist with an emphasis on the more theoretical side of rationality. I like smart characters that win. I looked at some recommendations on other topics, but there are too many options. If you suggest more than one, please describe a decision procedure that uses information that I have and you don't to narrow it down.

Update: I decided on Permutation City, and was unable to put it down until it was done. I am very happy with the book. I am a lot more convinced now that I will eventually read almost all of these, so the order doesn't matter as much.

0lukstafi
I liked "Diaspora" more.

Terry Pratchett's Discworld series. I recommend starting with Mort (the fourth book published). The first two books are straight-up parodies of fantasy cliches that are significantly different from what comes afterward, and the third book, Equal Rites, I didn't care for very much. Pratchett said that Mort was when he discovered plot, and it's the book that I recommend to everyone.

3taelor
I can second Discworld.
2kgalias
I particularly enjoyed the City Watch series. It also seems to be the most "non-ridiculous" and down to earth, which can help at the start.
0CronoDAS
It actually took me a while to warm up to the Watch books; when I read Guards! Guards!, I was expecting more laugh-out-loud moments of the kind that there were in the sillier early books. /me read Discworld in publication order
7Armok_GoB
Well, if you liked MoR, there are the two other Big Rationalist Fanfics:

* http://www.fimfiction.net/story/62074/friendship-is-optimal (don't judge it because it's about ponies - it's really good even if you don't like them) (if you like it, don't forget to check out the meta-fanfics)
* http://luminous.elcenia.com/story.shtml (as good as the thing it's parodying is terrible)

Also in a similar style: http://www.sagaofsoul.com/

Then there are the sci-fi classics, if you're willing to shell out some money. No links for these. Here are a few good ones to get you started:

* Permutation City
* Accelerando
* Diaspora
* A Fire Upon the Deep

This should be enough to get you started. I can give you MUCH more if you want, and maybe tell me some other things you like. Finding stuff like this to specification is basically what I do.
6feanor1600
"smart characters that win" Miles Vorkosigan saga, Ender's Game, anything by Neal Stephenson.
0kgalias
I started reading Ender's and the world didn't seem to make enough sense to keep me immersed.
3Risto_Saarelma
Give Neal Stephenson a go. Snow Crash and Cryptonomicon are good starting points.
3NancyLebovitz
First is probably Bujold, specifically her Miles Vorkosigan series. I think of Vinge more in terms of awesome author than awesome characters, but he does have some pretty impressive characters. Lee Child has an intelligent good guy and intelligent associates vs. intelligent bad guys. (Not sf.)
0randallsquared
You may, however, come to strongly dislike the protagonist later in the series.
2drethelin
Miles? He does some douchebaggy things but then he grows up. It's one of my favorite character arcs.
2randallsquared
Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.
2[anonymous]
The First Law trilogy by Joe Abercrombie. No promises on the characters, most of them are not so rational, but you'll see why I said it by the end. There are more books in the same setting with some of the same characters if you like them. The first book is mostly setup but it is great after that.
1polymathwannabe
Re "smart characters that win," I recommend these from my random reading history: The Pillars of the Earth and A World without Endby Ken Follett River God by Wilbur Smith Singularity Sky and Iron Sunrise by Charles Stross And Then There Were None by Agatha Christie
0polymathwannabe
And as for specifically rationalist stories, you might want to check out the His Dark Materials trilogy by Philip Pullman.
0drethelin
What? No! His Dark Materials is specifically anti-Christian, but the characters are not AT ALL rationalists. They often do stupid things and everything gets saved by random deus ex machina rather than cunning plots. It's an inverse Narnia, which is not rationality.
1Izeinwinter
SF: go to the Amazon Kindle store and read the first chapters (free samples) of:

* Vernor Vinge, A Fire Upon the Deep. The finest example of classical (i.e., space ships, politics, and aliens!) SF there is.
* Lois McMaster Bujold: a large sample of the first book in the Vor saga. http://www.baen.com/chapters/W200307/0743436164.htm?blurb If you like Harry for being a high-competence chaos magnet, this should scratch that itch in just the right spot.
0Turgurth
It's not specifically rationalist, but Dune is what first comes to mind for "smart characters that win", at least in the first book.
0D_Alex
I recommend pretty much anything by Jack Vance. If you like fantasy settings, read "Lyonesse", "Cugel's Saga" and "Rhialto the Marvellous". If you like sci-fi settings, try "Araminta Station" , "Night Lamp" and "Alastor". For a quaint mix of the two, try "Emphyrio" or "Languages of Pao". Vance wrote a bunch of great stuff, so if you like his first book, you have heaps more to look forward to. Also "Name of the Wind" and "Wise Man's Fear" by Patrick Rothfuss are pretty good. I also second "Ender's Game".

Hi, I'm new here and have some questions regarding editing and posting. I read through http://wiki.lesswrong.com/wiki/Help:User_Guide and http://wiki.lesswrong.com/wiki/FAQ but couldn't find the answers there, so I decided to ask here. Probably I overlooked something obvious and a link will suffice.

  • How do I add follow-up links to a post? Most main and sequences posts have them, but I'm unable to add them to my post. Note: I posted in Discussion as recommended because these were my first posts. I didn't get any feedback to change that, but I'd nonetheless cross-lin

... (read more)
1NancyLebovitz
The "show help" box at the lower right of the comment field gives you information on the markdown methods for emphasis and links and such. I'm pretty sure that you just use links to your other posts for follow up links, unless I'm missing something about your question.
0Gunnar_Zarncke
What about profiles? How can I create one? I see that many users have profiles: http://lesswrong.com/search/results?cx=015839050583929870010%3A-802ptn4igi&cof=FORID%3A11&ie=UTF-8&q=profile&sa=Search&siteurl=lesswrong.com%2F&ref=lesswrong.com%2Fsearch%2Fresults%3Fq%3Dprofile%26sa%3DSearch%26siteurl%3Dlesswrong.com%26ref%3Dlesswrong.com%26ss%3D703j91859j7&ss=760j108736j7 (by the way: is there a way to create shorter URLs for simple searches? I tried http://lesswrong.com/search/results?q=profile but that comes up empty)
0linkhyrule5
Use a url shortener. Adf.ly will even pay you for it.
0Gunnar_Zarncke
Generally a good idea, but that's not what I meant. I mean a generally short search URL for LessWrong where I can just add the query term. I can shorten the above via adf.ly, but I can't modify that to also search for q=parenting, q=ai, q=tags...
0Gunnar_Zarncke
What about the profile page? How do I create one?
3Sniffnoy
Set up an account on the Wiki, with the same name as your LessWrong account. Then make a user page for it. After a day, LW will automatically use that to make your profile page. (Thanks to gwern for informing me about this.)
0Gunnar_Zarncke
Thank you. I'm just creating http://wiki.lesswrong.com/mediawiki/index.php?title=User:Gunnar_Zarncke and hope that it will get linked to http://lesswrong.com/user/Gunnar_Zarncke/ Hold on, I have a problem here: saving doesn't seem to work. The page stays empty and I can't leave the edit area. Same for my talk page. The wiki appears to be slow overall.
3gwern
Sounds like you've been hit by the edit filter: I've been trying out disabling page creation for users younger than 3 or 4 days. It's supposed to be giving you a warning explaining that, though.
0Gunnar_Zarncke
Indeed. Now it works. There definitely was no warning or anything related.
0Sniffnoy
Try again, maybe? I haven't had a problem with the wiki before...
1Gunnar_Zarncke
I still can't save. The page stays empty. A few more notes:

* The start page appears to have been defaced (a link to a gambling casino). I can't revert it either. Maybe you should protect the page.
* The wiki loads quite slowly.
* LW in general seems to have some issue with loading - the page never loads completely. It displays, but the browser continues to wait for the URL http://b.scorecardresearch.com/b?c1=8&c2=2101&c3=1234567891234567891&ns__t=1378360611089&ns_c=ISO-8859-1&c8=&c7=http%3A%2F%2Fdg.specificclick.net%2F%3Fy%3D3%26t%3Dh%26u%3Dhttp%253A%252F%252Fwiki.lesswrong.com%252Fmediawiki%252Findex.php%253Ftitle%253DLessWrong_Wiki%2526action%253Dsubmit%26r%3Dhttp%253A%252F%252Fwiki.lesswrong.com%252Fmediawiki%252Findex.php%253Ftitle%253DLessWrong_Wiki%2526action%253Dedit&c9=http%3A%2F%2Fwiki.lesswrong.com%2Fmediawiki%2Findex.php%3Ftitle%3DLessWrong_Wiki%26action%3Dsubmit
0Gunnar_Zarncke
No problem with markdown. As for the follow-up links I checked again and these are normal links. I'm somewhat surprised that they are used that consistently. Can you also provide a tip on tags?
0NancyLebovitz
I didn't answer about tags because I don't know of a guide. I just found that if you search on tag [word that you think might be a good tag], you'll get lw articles with that tag, but that would be a process of exploration rather than knowing about common tags.
2Gunnar_Zarncke
Then I assume that tags are used by intuition. I just invented a tag and will use it consistently.
0Gunnar_Zarncke
I found that it is possible to list all posts with a tag via a short URL, e.g. on parenting it is http://lesswrong.com/tag/parenting/ But this doesn't show my postings with that tag. Can it be that only posts in Main are found by that? If so is there a different shortcut that will (also) lists hits in Comments?
1Douglas_Knight
discussion/parenting. Also, I think tagged articles are sorted oldest-first, opposite to most things.

Can someone explain the payoff of a many worlds theory? What it's supposed to buy you?

People talk like it somehow avoids the issue of wave function collapse, but I just see many different collapsed functions in different timelines.

3Luke_A_Somers
MWI or non-ontological collapse gets you to a place where you can even entertain the notion that the framework of Quantum Mechanics is correct and complete, so that:

* you can stop worrying about bogus unphysical theories that you'd only invent in an attempt to make things look normal again, and
* you're more comfortable working with larger superpositions.
2kalium
How is this preferable to the "shut up and calculate" interpretation of QM?
0IlyaShpitser
Is 'unphysical' anything at all like 'unchristian'? In other words, is 'un' modifying 'physics' or 'physicists'?
2Luke_A_Somers
It's modifying physics: a theory that doesn't act like physics, a theory that produces no new predictions but invents details to shuffle our ignorance into different, more palatable forms. I'm thinking of, on the one hand, objective collapse, and on the other hand, global hidden variables about imagined real states -- variables which, in order to be anything like compatible with QM, must mysteriously shuffle around so that each time you measure one, that is the end of its domain of applicability, and you'll never be able to use that information for anything.
-1IlyaShpitser
I think you are confusing "theory" and "interpretation." There is consensus on the vast majority (all?) of QM, the physical theory.

"Interpretations" are stories we tell ourselves. Arguing about interpretation is like arguing about taste.
-2Luke_A_Somers
Fine, but something needs explanation when you've got this energy-conserving theory which results in the energy content of the universe changing.
1IlyaShpitser
The quantum theory has the same predictions for all interpretations, and does not violate energy conservation. I don't know what "energy content of the universe changing" means if energy is conserved. You are arguing about taste.

Why aren't people preserved cryogenically before they die?

8Lumifer
Because under most current legal systems this is called "murder".
-2Gurkenglas
Surely, getting to overseas facilities would solve that problem?
4Lumifer
First, let me clarify: this is called "murder" under the current legal systems of most countries in the world. Second, if you kill e.g. an American citizen anywhere in the world, the American justice system still gets jurisdiction and can prosecute you for murder.
2drethelin
Moving existing facilities or setting up new ones, in addition to what Lumifer said, is also extremely expensive.

Is the Fun Theory Sequence literally meant to answer "How much fun is there in the universe?", or is it more intended to set a lower bound on that figure? Personally I'm hoping that once I become a superintelligence, I'll have access to currently unimaginable forms of fun, ones that are vastly more efficient (i.e., much more fun per unit of resource consumed) than what the Fun Theory Sequence suggests. Do other people think this is implausible?

4JoachimSchipper
Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.

Suppose that energy were not conserved. Can we, in that case, construct a physics so that knowledge of initial conditions plus dynamics is not sufficient to predict future states? (Here 'future states' should be understood as including the full decoherent wave-function; I don't care about the "probabilistic uncertainty" in collapse interpretations of QM.) If so, is libertarian free will possible in such a universe? Are there any conservation laws that could be "knocked out" without giving rise to such a physics; or conversely, if conservation of energy is not enough, what is the minimum necessary set?

Conservation of energy can be derived in Lagrangian mechanics from the assumption that the Lagrangian has no explicit time dependence. That is equivalent to saying that the dynamics of the system do not change over time. If the mechanics are changing over time, it would certainly be more difficult to predict future states, and one could imagine the mechanics changing unpredictably over time, in which case future states could be unpredictable as well. But now we don't just have physics that changes in time, we have physics that changes randomly.
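(A minimal sketch of that standard derivation, spelled out for concreteness: for a Lagrangian $L(q, \dot q, t)$, define the energy function $h$; along any solution of the Euler-Lagrange equations, its drift is exactly the explicit time dependence of $L$.)

```latex
\[
  h \;=\; \dot q\,\frac{\partial L}{\partial \dot q} \;-\; L ,
  \qquad
  \frac{dh}{dt} \;=\; -\,\frac{\partial L}{\partial t} .
\]
```

So $h$ is conserved exactly when $L$ has no explicit time dependence, and a randomly time-varying $L$ makes the energy drift just as unpredictably.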

I think I find that thought more troubling than the lack of free will.

(I know of no reason why any further conservation laws would break in a universe such as that, so long as you maintain symmetry under translations, rotations, CPT, etc. Time-dependent Lagrangians are not exotic. For example, a physicist might construct a Lagrangian of a system and include a time-changing component that is determined by something outside of the system, like say a harmonic oscillator being driven by an external power source.)
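(To spell out that last example as an illustration - the standard textbook case of a harmonic oscillator driven by an external force $F(t)$:)

```latex
\[
  L \;=\; \tfrac{1}{2} m \dot x^{2} \;-\; \tfrac{1}{2} k x^{2} \;+\; x\,F(t),
  \qquad
  \frac{dh}{dt} \;=\; -\,\frac{\partial L}{\partial t} \;=\; -\,x\,\dot F(t) \;\neq\; 0
  \;\text{ in general},
\]
```

so the energy bookkept by $h$ changes as the external drive pumps energy in and out of the system.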

4kilobug
I don't see any direct link between determinism and conservation of energy. You can have one or the other or both or neither. You could have laws of physics like "when two protons collide, they become three protons" - deterministic but without conservation of energy. As for "libertarian free will", I'm not sure what you mean by that, but free will is a concept that must be dissolved, not answered with "it exists" or "it doesn't exist"; and anyway I don't see the link between that and the rest.
0RolfAndreassen
You can have determinism without conservation of energy, but I opine that you cannot have conservation of energy (plus the other things that are conserved in our physics) without determinism.
2Luke_A_Somers
JUST conservation of energy, sure... consider a universe composed of a ball moving at constant speed in random directions. But conserving everything our physics conserves means you're using our physics. It's not even a hypothetical if you do that.
0RolfAndreassen
Suppose you changed electromagnetism to be one over r-cubed instead of r-squared. What conservation law breaks? Or just fiddle with the constants.
0Luke_A_Somers
Hrm. Well, I suppose that if you change the constants then you're not conserving the same exact things, but they would devolve to words in the same ways. All right, second statement withdrawn. I'll take the first further, though - you can have an energy which is purely kinetic, momentum and angular momentum as usual, etc... and the coupling constants fluctuate randomly, thereby rendering the world highly nondeterministic.
2RolfAndreassen
Ok, but it's not clear to me that your "energy" is now describing the same thing that is meant in our universe. Suppose everything in Randomverse stood still for a moment, and then the electric coupling constant changed; clearly the potential energy changes. So it does not seem to me that Randomverse has conservation of energy.
2Luke_A_Somers
Hmmm... yes, totally freely randomly won't work. All right. If I can go classical I can do it. You have some ensemble of particles and each pair maintains a stack recording a partial history of their interactions, kept in terms of distance of separation (with the bottom of the stack being at infinite separation). Whenever two particles approach each other, they push the force they experienced as they approached onto the pair's stack; the derivative of this force is subject to random fluctuations. When two particles recede, they pop the force off the stack. In this way, you have potential energy (the integral from infinity to the current separation over the stack between two particles) as well as kinetic, and it is conserved. The only parts that change are the parts of the potential that aren't involved in interactions at the moment.

Of course, that won't work in a quantum world since everything's overlapping all the time. But you didn't specify that.

EDITED TO ADD: there's no such thing as potential energy if the forces can only act to deflect (cannot produce changes in speed), so I could have done it that way too. In that case we can keep quantum mechanics but we lose relativity.
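(A toy 1-D rendering of that push/pop bookkeeping may make it concrete. This is a hypothetical sketch under assumed parameters - a single particle pair, a purely repulsive force, discrete timesteps - not a full implementation of the pairwise-stack model.)

```python
import random

# Toy 1-D sketch of the "force stack" scheme described above (assumed
# parameters, single particle pair). The force random-walks while the
# particles approach and each step's value is pushed onto a stack; on
# recession the same forces are popped back, so the work done going out
# cancels the work done coming in.
dt = 0.01
m = 1.0
x, v = 10.0, -1.0   # separation and its rate of change (negative = approaching)
force = 1.0         # repulsive force, random-walked only during approach
stack = []          # per-step forces recorded on the way in

for _ in range(5000):
    if v < 0:                                 # approaching: fluctuate and push
        force = max(0.0, force + random.gauss(0.0, 0.1))
        stack.append(force)
    elif stack:                               # receding: replay recorded force
        force = stack.pop()
    else:                                     # past the recorded history
        force = 0.0
    v += (force / m) * dt                     # repulsion decelerates the approach
    x += v * dt

# The stack empties and the outgoing speed matches the incoming speed (up to
# one timestep of discretization error), so kinetic energy returns to ~0.5.
print(len(stack), 0.5 * m * v * v)
```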
0RolfAndreassen
I still don't think you're conserving energy. Start with two particles far apart and approaching each other at some speed; define this state as the zero energy. Let them approach each other, slowing down all the while, and eventually heading back out. When they reach their initial separation, they have kinetic energy from two sources: One source is popping back the forces they experienced during their approach, the other is the forces they experienced as they separated. Since they are at their initial separation again, the stack is empty, so there is zero potential energy; and there's no reason the kinetic energy should be what it was initially. So energy has been added or subtracted. The idea of having only "magnetic" forces seems to work, yes. But, as you say, we then lose special relativity, and that imposes a preferred frame of reference, which in turn means that the laws are no longer invariant under translation. So then you lose conservation of momentum, if I remember my Noether correctly.
0Luke_A_Somers
You got it backwards. The stack reads in from infinity, not from 0 separation. As they approach, they're pushing, not popping. Plus, the contents of the stack are included in the potential energy, so either way you cut it, it adds up. If the randomness is on the side you don't integrate from, you won't have changes. ~~~ As for the magnetic forces thing, having a preferred frame of reference is quite different from laws no longer being invariant under translation. What you mean is that the laws are no longer invariant under boosts. Noether's theorem applied to that symmetry yields something to do with the center of mass which I don't quite understand, but seems to amount to the notion that the center of mass doesn't deviate from its nominal trajectory. This seems to me to be awfully similar to the conservation of momentum, but must be technically distinct.
0RolfAndreassen
Yes. Then as they separate, they pop those forces back out again. When they reach separation X, which can be infinity if you like (or we can just define potential energy relative to that point) they have zero potential energy and a kinetic energy which cannot in general be equal to what they started with. The simplest way of seeing this is to have the coupling be constant A on the way in, then change to B at the point of closest approach. Then their total energy on again reaching the starting point is A integrated to the point of closest approach (which is equal to their starting kinetic energy) plus B integrated back out again; and the potential energy is zero since it has been fully popped. Yes, you are correct. Still, the point stands that you are abandoning some symmetry or other, and therefore some conservation law or other.
0Luke_A_Somers
Your example completely breaks the restrictions I gave it. The whole idea of pushing and popping is that going out is exactly the same as the last time they went in. Do you know what 'stack' means? GOing back out, you perfectly reproduce what you had going in. As I already said, if you constrain it that tightly, then you end up with our physics, period. Conservation of charge? That's a symmetry. Etc. if you hold those to be the same, you completely reconstruct our physics and of course there's no room for randomness.
0RolfAndreassen
Oh, I see. The coefficients are only allowed to change randomly if the particles are approaching. I misunderstood your scenario. I do note that this is some immensely complex physics, with a different set of constants for every pair of particles!

Edit to add: Also, since whether two particles are going towards each other or away from each other can depend on the frame of reference, you again lose whatever conservation it is that is associated with invariance under boosts.

Right. The original question was, are there any conservation laws you can knock out without losing determinism? It seems conservation of whatever-goes-with-boosts is one of them.
2kilobug
Not necessarily. Consider the time-turner in HPMOR. You could have physics which allow such stable time loop, with no determinism on which loop among the possible ones will actually occur, and yet have conservation of energy.
0Shmi
As I mentioned a few times, HPMoR time turners violate general relativity, as they result in objects appearing and disappearing without any energy being extracted from or dissipated into the environment. E.g. before the loop: 1 Harry, during the loop: 2 Harries, after the loop: 1 Harry.
0kilobug
Yes, but you could very well think of something equivalent to the time-turner that exchanges matter between the past and the present, instead of just sending matter to the past, in a way that keeps energy conservation. It would be harder to use practically, but it wouldn't change anything about the "energy conservation" vs "determinism" issues.
4RolfAndreassen
Don't forget that to fully conserve energy, you have to maintain not only the total mass, but also the internal chemical potentials of whatever thing you're shifting into the past and its gravitational potential energy with respect to the rest of the universe. I think you'll have a hard time doing this without just making an exact copy of the object. "Conservation of energy" is a much harder constraint than is obvious from the three words of the English phrase.
0kilobug
I don't see that as a theoretical problem for a plausible universe having such a mechanism. We could very well create a simulation in which, when you time-travel, the total energy (internal, from mass and chemical bonds; external, from gravity and chemical interaction with the exterior) is measured and exchanged for exactly that amount from the source universe. If we can implement it on a computer, it's possible to imagine a universe that would have those laws. The hard part in time-turner physics is the "stable time loop" (because it's not computable), not the "energy conservation" part (which is computable).
0Shmi
Yep.
2pragmatist
Liouville's theorem is more general than conservation of energy, I think, or at least it can hold even if conservation of energy fails. You can have a system with a time-dependent Hamiltonian, for instance, and thus no energy conservation, but with phase space volume still preserved by the dynamics. So this would be a deterministic system (one where phase space trajectories don't merge) without energy conservation. As for the minimum necessary set of conservation laws that must be knocked out to guarantee non-determinism, I'm not sure. I can't think of any a priori reason to suppose that determinism would crucially rely on any particular set of conservation laws, although this might be true if certain further constraints on the form of the law are specified.
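(A one-line sketch of why the theorem survives explicit time dependence: the phase-space velocity field given by Hamilton's equations is divergence-free for any Hamiltonian, including $H = H(q, p, t)$.)

```latex
\[
  \dot q = \frac{\partial H}{\partial p}, \quad
  \dot p = -\frac{\partial H}{\partial q}
  \;\;\Longrightarrow\;\;
  \frac{\partial \dot q}{\partial q} + \frac{\partial \dot p}{\partial p}
  = \frac{\partial^{2} H}{\partial q\,\partial p}
  - \frac{\partial^{2} H}{\partial p\,\partial q}
  = 0 ,
\]
```

so the flow is incompressible and phase-space volume is preserved even when energy is not conserved.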
2RolfAndreassen
If I understood the Wiki article correctly, the assumption needed to derive Liouville's theorem is time-translation invariance; but this is the same symmetry that gives us energy conservation through Noether's theorem. So, it is not clear to me that you can have one without the other.
4Shmi
Liouville's theorem follows from the continuity of transport of some conserved quantity. If this quantity is not energy, then you don't need time-translation invariance. For example, forced oscillations (with explicitly time-dependent force, like first pushing a child on a swing harder and harder and then letting the swing relax to a stop) still obey the theorem.

Every now and then, there are discussions or comments on LW where people talk about finding a "correct" morality, or where they argue that some particular morality is "mistaken". (Two recent examples: [1] [2]) Now I would understand that in an FAI context, where we want to find such a specification for an AI that it won't do something that all humans would find terrible, but that's generally not the context of those discussions. Outside such a context, it sounds like people were presuming the existence of an objective morality, but I th... (read more)

6RolfAndreassen
Objective morality in one (admittedly rather long) sentence: for any moral dilemma, there is some particular decision you would make after a thousand years of collecting information, thinking, upgrading your intelligence, and reaching reflective equilibrium with all other possible moral dilemmas; this decision is the same for all humans, and is what we refer to when we say that an action is 'correct'.
8Kaj_Sotala
I find that claim to be very implausible: to name just one objection to it, it seems to assume that morality is essentially "logical" and based on rational thought, whereas in practice moral beliefs seem to be much more strongly derived from what the people around us believe in. And in general, the hypothesis that all moral beliefs will eventually converge seems to be picking out a very narrow region in the space of possible outcomes, whereas "beliefs will diverge" contains a much broader space. Do you personally believe in that claim?
1niceguyanon
I'm not sure what I was expecting, but I was a little surprised after seeing you say you object to objective morality. I probably don't understand CEV well enough and I am pretty sure this is not the case, but it seems like there is so much similarity between CEV and some form of objective morality as described above. In other words, if you don't think moral beliefs will eventually converge, given enough intelligence, reflection, and gathering data, etc, then how do you convince someone that FAI will make the "correct" decisions based on the extrapolated volition?
4Kaj_Sotala
CEV in its current form is quite under-specified. I expect that there would exist many, many different ways of specifying it, each of which would produce a different CEV that would converge at a different solution. For example, Tarleton (2010) notes that CEV is really a family of algorithms which share the following features:

* Meta-algorithm: Most of the AGI’s goals will be obtained at run-time from human minds, rather than explicitly programmed in before run-time.
* Factually correct beliefs: The AGI will attempt to obtain correct answers to various factual questions, in order to modify preferences or desires that are based upon false factual beliefs.
* Singleton: Only one superintelligent AGI is to be constructed, and it is to take control of the world with whatever goal function is decided upon.
* Reflection: Individual or group preferences are reflected upon and revised.
* Preference aggregation: The set of preferences of a whole group are to be combined somehow.

He comments: Although one of Eliezer's desired characteristics for CEV was to ”avoid creating a motive for modern-day humans to fight over the initial dynamic”, a more rigorous definition of CEV will probably require making many design choices for which there will not be any objective answer, and which may be influenced by the designer's values. The notion that our values should be extrapolated according to some specific criteria is by itself a value-laden proposal: it might be argued that it was enough to start off from our current-day values just as they are, and then incorporate additional extrapolation only if our current values said that we should do so. But doing so would not be a value-neutral decision either, but rather one supporting the values of those who think that there should be no extrapolation, rather than of those who think there should be.

I don't find any of these issues to be problems, though: as long as CEV found any of the solutions in the set-of-final-values-that-I-woul
0RolfAndreassen
It seems there are two claims: one, that each human will be reflectively self-consistent given enough time; two, that the self-consistent solution will be the same for all humans. I'm highly confident of the first; for the second, let me qualify slightly:

* Not all human-like things are actually humans, e.g. psychopaths. Some of these may be fixable.
* Some finite tolerance is implied when I say "the same" solution will be arrived at.

With those qualifications, yes, I believe the second claim with, say, 85% confidence.
3Kaj_Sotala
I find the first claim plausible though not certain, but I would expect that if such individual convergence happens, it will lead to collective divergence, not convergence.

When we are young, our moral intuitions and beliefs are a hodge-podge of different things, derived from a wide variety of sources, probably reflecting something like a "consensus morality" that is the average of different moral positions in society. If/when we begin to reflect on these intuitions and beliefs, we will find that they are mutually contradictory. But one person's modus ponens is another's modus tollens: faced with the fact that a utilitarian intuition and a deontological intuition contradict each other, say, we might end up rejecting the utilitarian conclusion, rejecting the deontological conclusion, or trying to somehow reconcile them. Since logic by itself does not tell us which alternative we should choose, it becomes determined by extra-logical factors.

Given that different people seem to arrive at different conclusions when presented with such contradictory cases, and given that their judgement seems to be at least weakly predicted by their existing overall leanings, I would guess that the choice of which intuition to embrace would depend on their current balance of other intuitions. Thus, if you are already leaning utilitarian, the intuitions which are making you lean that way may combine together and cause you to reject the deontological intuition, and vice versa if you're leaning deontologist. This would mean that a person who initially started with an even mix of both intuitions would, by random drift, eventually end up in a position where one set of intuitions was dominant, after which there would be a self-reinforcing trajectory towards an area increasingly dominated by intuitions compatible with the ones currently dominant. (Though of course the process that determines which intuitions get accepted and which ones get rejected is nowhere as simple as just taking a "majori
2cousin_it
Seconding Kaj_Sotala's question. Is there a good argument why self-improvement doesn't have diverging paths due to small differences in starting conditions?
0hairyfigment
Dunno. CEV actually contains the phrase, "and had grown up farther together," which the above leaves out. But I feel a little puzzled about the exact phrasing, which does not make "were more the people we wished we were" conditional on this other part - I thought the main point was that people "alone in a padded cell," as Eliezer puts it there, can "wish they were" all sorts of Unfriendly entities.
-1Eugine_Nier
That argument seems like it would apply equally well to non-moral beliefs.
0Armok_GoB
I assume the same, but with "all humans" replaced by the weaker "the people participating in this conversation".
0buybuydandavis
I don't think even that's a sufficient definition. It's that all observers (except psychos), no matter their own particular circumstances and characteristics, would assign approval/disapproval in exactly the same way.
0drethelin
Psychopaths are quite capable of perceiving objective truths. In fact, if there were an objective morality, I expect it would work better for psychopaths than for anyone else.
1buybuydandavis
I believe Rolf has excommunicated psychopaths (and Clippy) from the set of agents from whom "human morality" is calculated. First they purged the psychopaths... Me, I don't think everyone else converges to the same conclusions. Non-psychopaths just aren't all made from the same moral cookie cutter. It's not that we have to "figure out" what is right; it's that we have different values. If casual observation doesn't convince you of this, Haidt's quantified approach should.
0Lumifer
That's one possible definition of objective morality, but not the only one.
4Shmi
At least some of the prominent regulars seem to believe in objective morality outside of any FAI context, I think (Alicorn? palladias?).
3Vladimir_Nesov
The connotations of "objective" (also discussed in the other replies in this thread) don't seem relevant to the question about the meaning of "correct" morality. Suppose we are considering a process of producing an idealized preference that gives different results for different people, and also nondeterministically gives one of many possible results for each person. Even in this case, the question of expected ranking of consequences of alternative actions according to this idealization process applied to someone can be asked. Should this complicated question be asked? If the idealization process is such that you expect it to produce a better ranking of outcomes than you can when given only a little time, then it's better to base actions on what the idealization process could tell you than on your own guess (e.g. desires). To the extent your own guess deviates from your expectation of the idealization process, basing your actions on your guess (desires) is an incorrect decision. A standard example of an idealization dynamic is what you would yourself decide given much more time and resources. If you anticipate that the results of this dynamic can nondeterministically produce widely contradictory answers, this too will be taken into account by the dynamic itself, as the abstract you-with-more-time starts to contemplate the question. The resulting meta-question of whether taking the diverging future decisions into account produces worse decisions can be attacked in the same manner, etc. If done right, such process can reliably give a better result than you-with-little-time can, because any problem with it that you could anticipate will be taken into account. A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the "territory" that moral reasoning should explore, a criterion of correctness. It is a hard-to-use criterion of correctness, you might need to build a FA
0Wei Dai
I think that such a specification probably isn't the correct specification of the territory that moral reasoning should explore. By analogy, it's like specifying the territory for mathematical reasoning based on idealizing human mathematical reasoning, or specifying the territory for scientific reasoning based on idealizing human scientific reasoning. (As opposed to figuring out how to directly refer to some external reality.) It seems like a step that's generally tempting to take when you're able to informally reason (to some extent) about something but you don't know how to specify the territory, but I would prefer to just say that we don't know how to specify the territory yet. But... Maybe I'm underestimating the utility of having a specification that's "at least meaningful" even if it's not necessarily correct. (I don't mind "hard-to-use" so much.) Can you give some examples of how it illustrates the way in which many ways of thinking about morality are confused?
1[anonymous]
I came to the metaethics sequence an ethical subjectivist and walked away an ethical naturalist. I've mostly stopped using the words "objective" and "subjective", because I've talked with subjectivists with whom I have few to no substantive disagreements. But I think you and I do have a disagreement! How exciting. I accept that there's something like an ordering over universe configurations which is "ideal" in a sense I will expand on later, and that human desirability judgements are evidence about the structure of that ordering, and that arguments between humans (especially about the desirability of outcomes or the praiseworthiness of actions) are often an investigation into the structure of that ordering, much as an epistemic argument between agents (especially about true states of physical systems or the truth value of mathematical propositions) investigates the structure of a common reality which influences the agents' beliefs. A certain ordering over universe configurations also influences human preferences. It is not a causal influence, but a logical one. The connection between human minds and morality, the ideal ordering over universe configurations, is in the design of our brains. Our brains instantiate algorithms, especially emotional responses, that are logically correlated with the computation that compresses the ideal ordering over universe configurations. Actually, our brains are logically correlated with the computations that compress multiple different orderings over universe configurations, which is part of the reason we have moral disagreements. We're not sure which valuation - which configuration-ordering that determines how our consequential behaviors change in response to different evidence - which valuation is our logical antecedent and which are merely correlates. This is also why constructed agents similar to humans, like the ones in Three Worlds Collide, could seem to have moral disagreements with humans. They, as roughly consequentiali
0Ishaan
People are often wrong about what their preferences are + most humans have roughly similar moral hardware. Not identical, but close enough to behave as if we all share a common moral instinct. When you present someone with an argument and they change their mind on a moral issue, you haven't changed their underlying preferences... you've simply given them insight as to what their true preferences are. For example, if a neurotypical human said that belief in God was the reason they don't go around looting and stealing, they'd be wrong about themselves as a matter of simple fact - as per the definition of preference that I think makes the most sense. Alternatively, you might actually be re-programming their preferences... I think it's fair to say that at least some preferences commonly called "moral" are largely culturally programmed.
0Armok_GoB
I just assumed it meant "My extrapolated volition" and also "your extrapolated volition", and also the implication that those are identical.
0Wei Dai
I wrote a post to try to answer this question. I talk about "should" in the post, but it applies to "correct" as well.
-2Eugine_Nier
Here is a decent discussion of objective morality.
-4Lumifer
The usual Typical Mind Fallacy which is really REALLY pervasive.

How does muscle effort convert into force/Joules applied? What are the specs of muscles? An example of "specs" would be:

class Muscle:
    battery = 100  # 0 <= battery <= 100
    def tick(self):   # each second: increase battery by one if possible
        self.battery = min(self.battery + 1, 100)
    def exert(self):  # at will: spend one unit to apply 1 N for 1 s
        if self.battery > 0:
            self.battery -= 1

I am wondering because I was trying to optimize things like my morning bike ride across the park, questions like whether I should try to maximize my speed for the times when I'm going uphill, so gravity doesn't pull me backward for so long; or whether it is an inefficient move to wa... (read more)

0Lumifer
There are books and papers on the physiology of exercise, in particular on how muscles use energy in different regimes. For a basic intro check Crossfit, for more details you can look at e.g. Body By Science. What are you trying to optimize for?
2Gurkenglas
Those links seem to describe how to maximize fitness, not what you are able to do with a given amount of it. Isn't there at least a basic rule of thumb, like which of applying 100 N over 10 m and 50 N over 30 m exerts a muscle more? I'm trying to optimize for a certain combination of time saved and not having exerted myself too much during a trip.
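Not an answer to the physiology, but the purely mechanical part of the rule of thumb is well-defined: work is force times distance, W = F·d. A minimal sketch of the comparison, with the caveat that physiological effort is not linear in mechanical work, so this only bounds what the muscle "spends":

def work_joules(force_newtons, distance_meters):
    # Mechanical work W = F * d; a lower bound on metabolic cost,
    # not a model of muscular effort.
    return force_newtons * distance_meters

print(work_joules(100, 10))  # 1000 J
print(work_joules(50, 30))   # 1500 J: more mechanical work

By mechanical work alone, 50 N over 30 m delivers more energy; whether it exerts the muscle more depends on the physiology the other answers point to.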
0Gurkenglas
Similarly: What are a qubit's specs? I would like to be able to think about what class of problem would be trivial with a quantum computer.
2pengvado
Then what you should be asking is "which problems are in BQP?" (if you just want a summary of the high level capabilities that have been proved so far), or "how do quantum circuits work?" (if you want to know what role individual qubits play). I don't think there's any meaningful answer to "a qubit's specs" short of a tutorial in the aforementioned topics. Here is one such tutorial I recommend.
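As a minimal sketch of what a single qubit "is" in the standard linear-algebra picture (a unit vector in C^2, acted on by 2x2 unitary matrices; this is the textbook formalism, not a summary of BQP):

import numpy as np

zero = np.array([1, 0], dtype=complex)        # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate (unitary)

state = H @ zero                # (|0> + |1>) / sqrt(2): a superposition
probs = np.abs(state) ** 2      # Born rule: measurement probabilities
print(probs)                    # [0.5 0.5]

n qubits live in a 2^n-dimensional state space, which is where the hard-to-summarize capabilities come from.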

Is reading fiction ever instrumentally useful (for a non-writer) compared to reading more informative literature? How has it been useful to you?

6knb
I read fiction about 1/3 of the time and nonfiction about 2/3 of the time. When reading non-fiction I often spend idle moments in my day lost in abstract thought about concepts related to the non-fiction book I'm reading. I've noticed that when reading novels I'm far more observant about people in my personal life and their thoughts and motivations. This is especially true when reading works with thoughtful and observant POV characters (especially detective fiction and mystery novels). I think fiction, like music, can serve to frame your mind-state in a certain way.
4kalium
* It has been useful for manipulating my mood and general mindset. ("My life would be more amusing right now if I felt a bit nervous and unsure of reality. Better go read something by Philip K. Dick." Or "I would be more productive if I were feeling inspired by the grandness of the scientific endeavor. Better go read some Golden Age SF.")
* It is useful for understanding what certain situations feel like without going through them yourself, and therefore can help you empathize with people in those situations whose behavior otherwise does not make sense to you. Memoirs and other nonfiction can also do this, but it's easier to find well-written fiction than well-written nonfiction, and for this purpose the writing must be very good.
1Ishaan
The notion that fiction increases empathy has been making the rounds. It's not an area I've researched heavily; I'm intrigued but skeptical.
0TheOtherDave
I haven't read the article, but I read the abstract, and am startled that it seems like a correlational study. Do they do anything to differentiate "reading fiction increases empathy" from "empathic people read more fiction"?
0Ishaan
I haven't really read it in detail, but in the abstract, see the sentence: Which means that fiction was still predictive after accounting for various self-reported personality traits. So they did try to differentiate the two. For detail, the corresponding section: "Association between print-exposure and empathy: Ruling out the role of individual differences" experimental stuff
0hyporational
Thanks. Some of my med school professors have this opinion, but I'm not sure if they've got any data to back it up.
0Lumifer
I suspect there is a correlation but I'm entirely unsure of the direction of causality.
[-][anonymous]20

How long does it take others to write a typical LW post or comment?

I perceive myself as a very slow writer, but I might just have unrealistic expectations.

5drethelin
Most of my comments take less than a minute to write.
2satt
Depends on the comment. Mine are almost all rewrites, so anything that's not a short answer to a simple factual question takes me at least a couple of minutes. The upper bound is probably ≈2 hours. If I remember rightly, this one took almost that long, and would've taken longer if I'd tried to polish it and not have it end in disjointed bullet points. There are quite a few reasons why that comment was so time-consuming: it was lengthy; it was a response to criticism, so I wanted to make it very obviously correct; I wanted to refer to lots of examples and sources, which means deciding which bits of which sources to quote, and hitting up Google & Wikipedia; I wanted to cover quite a lot of ground so I had to spend more time than usual squeezing out verbiage; and I had to stop & think periodically to check everything I was saying came together coherently. Sometimes I write a few paragraphs, realize I've written myself into a corner, then decide to tear everything down and start over from a different angle. (Or I decide it's not worth the effort and refrain from saying anything.) That happened with this comment, so it wound up taking something like an hour. This comment, by contrast, has only needed about half an hour to write because it's mostly based on introspection, isn't that long, isn't communicating anything complex, won't be controversial, isn't optimized for transparency, and turns out not to have needed any full-scale rewrites. I also think I'm a slow writer by LW standards. (Unsurprisingly?)

Is there any reason we don't include a risk aversion factor in expected utility calculations?

If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?

2somervta
Because doing so will lead to worse outcomes on average. Over a long series of events, someone who just follows the math will do better than someone who is risk-averse with respect to 'utility'. Of course, often our utility functions are risk-averse with respect to real-world things, because of non-linear valuation - e.g., your first $100,000 is more valuable than your second, and your first million is not 10x as valuable as your first $100,000.
0TRManderson
Thanks. Just going to clarify my thoughts below. In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you're constantly settling for less-than-optimal outcomes because they're less risky, it'll average out to less-than-optimal utility. The terminology "non-linear valuation" seemed to me to imply some exponential valuation, or logarithmic or something; I think "subjective valuation" or "subjective utility" might be better here.
0Ishaan
You just incorporate that straight into the utility function. You have $100 to your name. Start with 100 utility. Hey! Betcha $50 this coin comes up heads! $150 and therefore 110 utility if you win. $50 and therefore 60 utility if you lose. So you don't take the bet. It's a fair bet dollar-wise, but an unfair bet utility-wise.
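To make the same point with an explicitly concave utility function, here is a minimal sketch (sqrt is an arbitrary illustrative choice, not a claim about anyone's real preferences):

import math

def utility(wealth):
    # Any concave function of wealth produces risk-averse choices.
    return math.sqrt(wealth)

keep = utility(100.0)                                # decline the bet
accept = 0.5 * utility(150.0) + 0.5 * utility(50.0)  # fair coin flip for $50
print(keep, accept)  # 10.0 vs ~9.66: decline, despite fair dollar odds

The bet is fair in dollars (expected value $100 either way) but unfair in utility, exactly as in the example above.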
0somervta
Yes, non-linear valuation means that your subjective value for X does not increase linearly with linear increases in X. It might increase logarithmically, or exponentially, or polynomially (with degree > 1), or whatever.
[-][anonymous]20

A significant amount of discussion on Less Wrong appears to be of the following form:

1: How do we make a superintelligent AI perform more as we want it to, without reducing it to a paperweight?

Note: reducing it to a paperweight is the periodically referenced "Put the superintelligence in a box and then delete it if it sends any output outside the box." school of AI Safety.

Something really obvious occurred to me, and it seems so basic that there has to be an answer somewhere, but I don't know what to look under. What if we try flipping the questio... (read more)

5hairyfigment
It does seem like an interesting question. But the most obvious flaw is that we still don't have the starting point - software does what we tell it to do, not what we want, which is usually different - and I don't immediately see any way to get there without super-intelligence. Holden Karnofsky proposed starting with an Oracle AI that tells us what it would do if we gave it different goal systems. But if we avoided giving it any utility function of its own, the programmers would need to not only think of every question (regarding every aspect of "what it would do"), but also create an interface for each sufficiently new answer. I'll go out on a limb and say this will never happen (much less happen correctly) if someone in the world can just create an 'Agent AI'.
0Viliam_Bur
Depends on what you mean by "smarter". Is it merely good at finding more efficient ways to fulfill your wish... or is it also able to realize that some literal interpretations of your wish are not what you actually want to happen (but perhaps you aren't smart enough to realize it)? In the latter case, will it efficiently follow the literal interpretation?

Does the unpredictability of quantum events produce a butterfly effect on the macro level? i.e., since we can't predict the result of a quantum process, and our brains are composed of eleventy zillion quantum processes, does that make our brains' output inherently unpredictable as well? Or do the quantum effects somehow cancel out? It seems to me that they must cancel out in at least some circumstances or we wouldn't have things like predictable ball collisions, spring behavior, etc.

If there is a butterfly effect, wouldn't that have something to say about Omega problems (where the predictability of the brain is a given) and some of the nastier kinds of AI basilisks?

5Oscar_Cunningham
Some systems exhibit a butterfly effect (a.k.a. chaos); some don't. The butterfly effect is where (arbitrarily) small changes to the conditions of the system can totally change its future course. The weather is a good example of this. The change caused by a butterfly flapping its wings differently will amplify itself until the entire Earth's weather is different from what it would have been. But other systems aren't like that. They're more "stable". For example, if you change the position of any individual atom in my computer it won't make any difference to the computations I'm running. Other things are predictable just because we don't give time for any changes to develop. For example, ball collisions are predictable, but if we study many ball collisions in a row, like a billiards "trick shot", then hitting the initial ball slightly differently will make a huge difference. You ask about quantum events. For chaotic systems, deviations caused by quantum events will indeed cause a butterfly effect. So whether or not the brain is predictable depends on to what extent it's chaotic, and to what extent it's stable. I suspect that it's chaotic, in the sense that a small tweak to it could totally change the way a thought process goes. But over time my brain will be predictable "on average". I'll behave in ways matching my personality. Similarly, a butterfly flapping its wings might change when it rains, but it'll still rain more in Bergen than the Sahara. I don't think this says much about Omega problems. Quantum butterfly effects will (I suspect) stop Omega exactly simulating my thought process, but I reckon it could still predict my choice with very high confidence just by considering my most likely lines of thought.
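A toy illustration of the "small changes grow" point, using the logistic map in its chaotic regime rather than anything brain- or weather-specific (the map and parameters are just a standard textbook example):

# Two trajectories of x -> r*x*(1-x) with r = 4 (chaotic regime),
# started one millionth apart, end up completely uncorrelated.
r = 4.0
x, y = 0.400000, 0.400001
for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))

The gap grows roughly exponentially until it saturates at order 1; a "stable" system would instead shrink or preserve the initial difference.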
3Locaha
But it will change the weather just like the butterfly.
1Luke_A_Somers
The butterfly effect kicks in wherever there's something unstable - whenever there's a system where little changes grow. Billiard balls do this, for instance, which is why it's harder to hit the cue so it hits the 4 so it hits the 1 so it hits the 5 than to hit the cue so it hits the 5 (assuming the same total ball travel distance). Quantum noise is no less capable of doing this than anything else. The reason macro objects look solid has little to do with special cancellation and a lot to do with how tightly bound solid objects are. I suppose that's a special case of cancellation, but it's a really special case. Omega-like problems are hypotheticals, and speaking of quantum indeterminacy with respect to them is fighting the hypothetical. Some versions word it so that if Omega can't get a reliable answer he doesn't even play the game, or withholds the money, or kicks you in the shins or something - but those are just ways of getting people to stop fighting the hypothetical.

Is conservation of matter a problem for the many worlds interpretation of quantum physics?

I don't believe I am explaining MWI instead of arguing against it... whatever has this site done to me? Anyway, grossly simplified, you can think of the matter as being conserved because the "total" mass is the sum of masses in all worlds weighted by the probability of each world. So, if you had, say, 1kg of matter before a "50/50 split", you still have 1kg = 0.5*1kg+0.5*1kg after. But, since each of the two of you after the split has no access to the other world, this 50% prior probability is 100% posterior probability.

Also note that there is no universal law of conservation of matter (or even energy) to begin with, not even in a single universe. It's just an approximation given certain assumptions, like time-independence of the laws describing the system of interest.
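The weighted-sum bookkeeping above is just arithmetic, but a toy sketch may help show why repeated splits never change the total (this is only the accounting, nothing physical):

# Each branch carries (probability weight, mass); splitting multiplies
# the weights but leaves the weighted sum over branches fixed.
branches = [(1.0, 1.0)]  # one world, 1 kg

def split(branches, p=0.5):
    return [(w * q, m) for (w, m) in branches for q in (p, 1 - p)]

for _ in range(3):
    branches = split(branches)

print(sum(w * m for w, m in branches))  # 1.0 kg after an 8-way split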

2Luke_A_Somers
LOL @ your position. Agree on most. Disagree on the conservation of energy though. Every interaction conserves energy (unless you know of time-dependent laws?). Though nothing alters it, we only experience worlds with a nontrivial distribution of energies (otherwise nothing would ever happen) (and this is true whether you use MWI or not)
0[anonymous]
I don't know enough of the underlying physics to conclusively comment one way or another, but it seems to me defining "total mass" as the integral of "local mass" over all worlds wrt the world probability measure implies that an object in one world might be able to mysteriously (wrt that world) gain mass by reducing its mass in some set of worlds with non-zero measure. We don't actually see that in e.g. particle scattering, right?
0Shmi
This would manifest as non-conservation of energy-momentum in scattering, and, as far as I know, nothing like that has been seen since the neutrino was predicted by Pauli to remedy the apparent non-conservation of energy in radioactive decay. If we assume non-interacting worlds, then one should not expect to see such violations. Gravity might be an oddball, however, since different worlds are likely to have different spacetime geometries and even topologies, potentially affecting each other. But this is highly speculative, as there is no adequate microscopic (quantum) gravity model out there. I have seen some wild speculations that dark energy or even dark matter could be a weak gravity-only remnant of incomplete decoherence stopped at the Planck scale.
2Luke_A_Somers
I don't see why differing spacetime geometries or topologies would impact other worlds. What makes gravity/geometry leak through when nothing else can?
2Shmi
Standard QFT is a fixed background spacetime theory, so if you have multiple non-interacting blobs of probability density in the same spacetime, they will all cooperatively curve it, hence the leakage. If you assert that the spacetime itself splits, you better provide a viable quantum gravity model to show how it happens.
2Luke_A_Somers
Provide one? No. Call for one? Yes.
0Shmi
Sure, call for one. After acknowledging that in the standard QFT you get inter-world gravitational interaction by default....
0Luke_A_Somers
In usual flat space QFT, you don't have gravity at all, so no!
0Shmi
Well, QFT can also be safely done on a curved spacetime background, but you are right, you don't get dynamic gravitational effects from it. What I implicitly assumed is QFT + semiclassical GR, where one uses a semiclassical probability-weighted stress-energy tensor as a source.
0Luke_A_Somers
If that were true, MWI would have inter-world gravitational interactions. But it happens to be obviously wrong.
0Shmi
What do you mean by "obviously wrong"? Because it would be evidence against MWI? Maybe it is, I don't recall people trying to formalize it. Or maybe it limits the divergence of the worlds. Anyway, if it is not a good model, does this mean that we need a full QG theory to make MWI tenable?
-1Luke_A_Somers
Obviously wrong in that if you hold pure QFT + semiclassical GR to be complete and correct, then you end up with Cavendish experiments being totally unworkable, because the density of the mass you put there is vanishingly small. I'm willing to state outright that MWI relies on gravity also being quantum, not semiclassical, in nature. This does not seem like much of a concession to me.
-2Shmi
Hmm, I don't follow your argument re the Cavendish experiment. The original one was performed with fairly heavy lead balls. That semiclassical gravity does not work in the weak-field regime is a fairly strong statement. Widely accepted models like Hawking and Unruh radiation are done in that regime. A rigorous argument that semiclassical gravity is incompatible with MWI would probably be worth publishing.
1Luke_A_Somers
Nawww, how could that be publishable? Even if you start with an initial state where there is a well defined Cavendish-experimenter-man (which if you're going with no objective collapse is a rather peculiar initial state) MWI has him all over the room, performing experiments at different times, with the weights at different displacements. They'd be pulling one way and the other, and his readings would make no sense whatsoever. Semiclassical gravity is a perfectly fine approximation, but to say it's real? Heh.
0Shmi
I meant something more limited than this, like a small cantilever in an unstable equilibrium getting entangled with a particle which may or may not push it over the edge with 50% probability, and measuring its gravitational force on some detector.
-4Luke_A_Somers
Oh. Well, then, it's no longer 'obviously false' as far as that goes (i.e. we haven't done that experiment but I would be shocked at anything but one particular outcome), but the whole point of MWI is to not restrain QM to applying to the tiny. Unless something happens between there and macro to get rid of those other branches, stuff gonna break hard. So, yeah. As an approximation, go ahead, but don't push it. And don't try to use an approximation in arguments over ontology.
-2Shmi
Sorry, I forgot for a moment that the notion was designed to be untestable. Never mind.
-1Luke_A_Somers
What? All you need to do is falsify QM, and MWI is dead dead DEAD.
0Shmi
As I said, you identify QM with MWI. This is not the only option.
-3Luke_A_Somers
What is it, then? Either the branches we don't experience exist, or they don't. If they don't, then what made us exist and them not?
4Shmi
Not this discussion again. Disengaging.
-2Luke_A_Somers
It's never this discussion, since it never gets discussed, but OK!
0Manfred
Defining total energy as the integral of energy over space implies that an object in one part of space might be able to mysteriously gain energy by reducing energy in other parts of space. Do we see this in the real world? How useful is the word "mysterious" here?
2Alejandro1
Ordinary energy conservation laws are local: they do not just state that total energy is conserved, but that any change in energy in a finite region of any size is balanced by a flux of energy over the boundary of that region. I don't think any such laws exist in "multi-world-space", which even accepting MWI is basically a metaphor, not a precise concept.
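For reference, the local form described here is the standard continuity equation (with ρ an energy density and j the corresponding flux); the two statements are equivalent by the divergence theorem:

$$\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0 \quad\Longleftrightarrow\quad \frac{d}{dt}\int_V \rho\, dV = -\oint_{\partial V} \mathbf{j}\cdot d\mathbf{A}$$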
0Manfred
So are there mysterious fluxes that move energy from one part of space to another?
3Plasmon
Umm, yes? They're quite ubiquitous.
3Manfred
Those look more like boring, physical-law-abiding (non-mysterious) fluxes that move energy from one part of space to another.
1Luke_A_Somers
Not mysterious ones, no - only the ordinary ones that Plasmon mentions.
0[anonymous]
"Mysterious" here means "via an otherwise unexplained-in-a-single-world mechanism."
0Manfred
There's no mysterious quantum motion for the same reason there's no mysterious energetic motion - energy / mass / quantum amplitude has to come from somewhere to go somewhere, and that requires an interaction to happen. An interaction like electromagnetism, or the strong force. You know, those ubiquitous, important, but extremely well-studied and only-somewhat-mysterious things. And once you study this thing and make it part of what you call "energy," what would otherwise be a mysterious appearance of energy just becomes "oh, the energy gets stored in the strong force." (From a pure quantum perspective at least. Gravity makes things too tricky for me.) The best way for a force to "hide" is for it to be super duper complicated. Like if there was some kind of extra law of gravity that only turned on when the planets of our solar system were aligned. But for whatever reason, the universe doesn't seem to have super complicated laws.
1NancyLebovitz
Is there any plausible argument for why our universe doesn't have super-complicated laws? The only thing I can think of is that laws are somehow made from small components so that short laws are more likely than long laws. Another possibility is that if some behavior of the universe is complicated, we don't call that a law, and we keep looking for something simpler-- though that doesn't explain why we keep finding simple laws.
0Manfred
"We looked, and we didn't find any super-complicated laws."
-2RolfAndreassen
So I know you said you were simplifying, but what if the worlds interfere? You don't necessarily get the same amount of mass before "collapse" (that is, decoherence) and after, because you may have destructive interference beforehand which by construction you can't get afterwards. As an aside, in amplitude analysis of three-body decays, it used to be the custom to give the "fit fractions" of the two-body isobar components, defined as the integral across the Dalitz plot of each resonance squared, divided by the integral of the total amplitude squared. Naturally this doesn't always add to 100%, in fact it usually doesn't, due to interference. So now we usually give the complex amplitude instead.
3Luke_A_Somers
A) If they're able to interfere, you shouldn't have called them separate worlds in the first place. B) That's not how interference works. The worlds are constructed to be orthogonal. Therefore, any negative interference in one place will be balanced by positive interference elsewhere, and so you don't end up with less or more than you started with. You don't even need to look at worlds to figure this out - time progression is unitary by the general form of the Schrodinger Equation and the real-valuedness of energy.

No. It's not that kind of many-ness.

4Emile
For a huge oversimplification: The cosmos is a big list of world-states, of the form "electrons in positions [(12.3, -2.8, 1.0), (0.5, 7.9, 6.1), ...] and speeds [...] protons in positions...". To each state, a quantum amplitude is assigned. The laws of physics describe how the quantum amplitude shifts between world states as time goes by (based on the speeds of particles and various basic interactions...). Conservation of matter says that for each world state, you can compute the amount of matter (and energy) inside, and it stays the same.
2Alejandro1
No, at least not in a technical mathematical-physics sense. "Conservation of matter", in mathematical physics, translates to the Hamiltonian operator being conserved, and that happens in quantum physics and a fortiori in all its plausible philosophical interpretations. In concrete, operationalist terms, this implies that an observer measuring the energy of the system at different times (without disturbing it in other way in the meantime) will see the same energy. It doesn't imply anything about adding results of observations in different MWI branches (which is probably meaningless). For example if you have an electron with a given energy and another variable that "branches", then observers in each branch will see it with the same energy it had originally, and this is all the formal mathematical meaning of "conservation" requires. The intuition that the two branches together have "more energy" that there was initially and this is a conservation problem is mixing pictorial images used to describe the process in words, with the technical meaning of terms.
1DanielLC
I can tell you the details, but they don't really matter. MWI has not been experimentally disproven. It all adds up to normality. Whatever observations we've made involving energy conservation are predicted by MWI.
0Alsadius
Depends how you interpret it. If you say a new universe is created with every quantum decision, then you could argue that (though I've always treated conservation laws as being descriptive, not prescriptive - there's no operation which changes the net amount of mass-energy, so it's conserved, but that's not a philosophical requirement). But the treatment of many-worlds I see more commonly is that there's already an infinite number of worlds, and it's merely a newly-distinct world that is created with a quantum decision.
-1pragmatist
The deeper (and truer) version of "conservation of matter" is conservation of energy. And energy is conserved in many worlds. In fact, that's one of the advantages of many worlds over objective collapse interpretations, because collapse doesn't conserve energy. You can think of it this way: in order for the math for energy conservation to work out, we need those extra worlds. If you remove them, the math doesn't work out. Slightly more technical explanation: The Schrodinger equation (which fully governs the evolution of the wavefunction in MWI) has a particular property, called unitarity. If you have a system whose evolution is unitary and also invariant under time translation, then you can prove that energy is conserved in that system. In collapse interpretations, the smooth Schrodinger evolution is intermittently interrupted by a collapse process, and that makes the evolution as a whole non-unitary, which means the proof of energy conservation no longer goes through (and you can in fact show that energy isn't conserved).
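A minimal numerical check of both claims, under toy assumptions (a random 4-state Hermitian Hamiltonian and a projective "collapse" onto an arbitrary basis state; this illustrates the math, not an argument for any interpretation):

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                   # Hermitian Hamiltonian

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                 # normalized initial state

energy = lambda s: (s.conj() @ H @ s).real

U = expm(-1j * H * 0.7)                    # unitary Schrodinger evolution
print(np.isclose(energy(U @ psi), energy(psi)))    # True: <H> conserved

collapsed = np.zeros(4, dtype=complex)     # "collapse" onto one basis state
collapsed[0] = 1.0
print(np.isclose(energy(collapsed), energy(psi)))  # generally False

The unitary step conserves the expected energy exactly (U commutes with H); the projection step generally does not, which is the non-conservation described above.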
3Shmi
This is quite misleading. Since collapse is experimentally compatible with "shut up and calculate", which is the minimal non-interpretation of QM, and it describes our world, where energy is mostly conserved, energy is also conserved in the collapse-based interpretations. That's wrong, as far as I understand. The math works out perfectly. Objective collapse models have other issues (EPR-related), but conservation of energy is not one of them. Links? I suspect that whatever you mean by energy conservation here is not the standard definition.
0DanArmak
When isn't it? (This is another Stupid Question.)
8Shmi
One example is that in an expanding universe (like ours) total energy is not even defined. Also note that the dark energy component of whatever can possibly be defined as energy increases with time in an expanding universe. And if some day we manage to convert it into a usable energy source, we'll have something like a perpetuum mobile. A silly example: connect two receding galaxies to an electric motor in the middle with really long and strong ropes and use the relative pull to spin the motor. What is conserved according to general relativity, however, is the local stress-energy-momentum tensor field at each point in spacetime.
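In symbols, the local statement mentioned here is the vanishing covariant divergence of the stress-energy tensor:

$$\nabla_\mu T^{\mu\nu} = 0$$

In a general curved spacetime this local law does not integrate to a conserved global energy, which is the point about expanding universes above.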
0pragmatist
Read the first section of this paper. Conservation of energy absolutely is a problem for objective collapse theories. The definition of conservation being employed in the paper is this: The probability distribution of the eigenvalues of a conserved quantity must remain constant. If this condition isn't satisfied, it's hard to see why one should consider the quantity conserved. ETA: I can also give you a non-technical heuristic argument against conservation of energy during collapse. When a particle's position-space wavefunction collapses, its momentum-space wavefunction must spread out in accord with the uncertainty principle. In the aggregate, this corresponds to increase in the average squared momentum, which in turn corresponds to an increase in kinetic energy. So collapse produces an increase in energy out of nowhere.
0Shmi
I have skimmed through the paper, but I don't see any mention of how such a hypothetical violation can be detected experimentally.
0pragmatist
Yeah, the paper I linked doesn't have anything on experimental detection of the violation. I offered it as support for my claim that the math for energy conservation doesn't work out in collapse interpretations. Do you agree that it shows that this claim is true? Anyway, here's a paper that does discuss experimental consequences. Again, my point only applies to objective collapse theories, not instrumentalist theories that use collapse as a calculational device (like the original Copenhagen interpretation). The big difference between these two types of theories is that in the former there is a specified size threshold or interaction type which triggers collapse. Instrumentalist theories involve no such specification. This is why objective collapse theories are empirically distinct from MWI but instrumentalist theories are not.
-2Luke_A_Somers
... and is isomorphic to MWI... Doesn't seem like it. You have an initial state which is some ensemble of energy eigenstates. You do measurements, and thereby lose some of them. Looks like energy went somewhere to me. Of course under non-ontological collapse you can say 'we're isomorphic to QM! Without interpretation!' but when you come across a statement 'we're conserving this quantity we just changed!', something needs interpretation here. If your interpretation is that the other parts of the wavefunction are still out there and that's how it's still conserved... well... guess what you just did. If you have any other solutions, I'm willing to hear them -- but I think you've been using the MWI all along, you just don't admit it.
0Shmi
... or any other interpretation... I guess our disagreement is whether "something needs interpretation here". I hold all models with the same consequences as isomorphic, with people being free to use what works best for them for a given problem. I also don't give any stock to Occam's razor arguments to argue for one of several mathematically equivalent approaches. If you have any arguments why one of the many untestables is better than the rest, I'm willing to hear them -- but I think you've been using "shut-up-and-calculate" all along, you just don't admit it.
-2Luke_A_Somers
I totally do admit it. MWI just happens to be what I call it. You're the one who's been saying it's different.

Requesting advice on a very minor and irrelevant ethical question that's relevant to some fiction I'm writing.

The character involved has the power to "reset" the universe, changing it to a universe identical to some previous time, except that the character himself (if he's still there; if he isn't, he's killed himself) retains all his memories as they were rather than them changing.

Primarily, I'm thinking through the ethical implications here. I'm not good with this sort of thing, so could somebody talk me through the implications if the character follows LessWrong ethics?

0Ishaan
I say the extent to which he has "killed" people is dependent on how much he diverges the new universe. As in, Person A has some value between "Dead" and "Alive" which depends on the extent to which they differ from Person A' as a result of the reset.
0Nisan
Oh! Is this your Hypothetical A?
0Nisan
Interesting! What happens to everyone else when the universe "resets"? Do they basically die?
2Carinthium
They no longer exist, so in a sense yes. However, they are replaced with identical copies of what they were in the past. EDIT: If they existed at the time, of course.
1Nisan
Well, here's an intuition pump for you: Suppose the universe is reset to the time of birth of a person P, and the hero (who is someone other than person P) does things differently this time so that person P grows up in a different environment. It seems to me that this act is just as bad for P as the act of killing P and then causing a genetically identical clone of P to be born, which is a bad act. On the other hand, if the hero resets the universe to 1 millisecond ago, there is virtually no effect on person P, so it does not seem to be a bad act.
0Carinthium
So for practical purposes, the hero can use the power for bursts of, say, an hour or less, without ethical issues involved?
0Nisan
Well, here are some relevant questions:

1. How would you like it if tomorrow someone were to reset you back an hour?
2. How would you like it if right now someone were to reset you back an hour?
3. How many people will be affected by the reset? (Specifically, how many people will live that hour differently after the reset?)
4. How much good will the hero accomplish by resetting the universe?
5. Even if resetting the universe this one time is worth it, are there dangers to getting into the habit of using a universe reset to solve problems?

Your answers to 1 and 2 might be different. I feel like I might answer 1 with "okay" and 2 with "pretty bad", which suggests there's something tricky about assessing how much harm is done.

When is self-denial useful in altering your desires, vs. satisfying them so you can devote time to other things?

0PrometheanFaun
Looks like we've said a lot on that.
0Viliam_Bur
When your desires contradict each other, so you can't satisfy all of them anyway. For example, I want to eat as much chocolate as possible and move as little as possible, but I also want to have a long healthy life. Until the Friendly AI can satisfy all my desires via uploading or nanotechnology, I must sacrifice some of them for the sake of other ones.
0PrometheanFaun
I'll agree with that from a different angle. Due to the map≠territory lemma, we never have to accept absolute inability to meet our goals. When faced with seemingly inescapable all-dimensional doom, there is no value at all in resigning oneself to it; the only value left in the universe is in that little vanishingly-not-going-to-happen-unlikely possible world where, for example, the heat death can be prevented or escaped. Sure, what we know of thermodynamics tells us it can't; well, I'm going to assume that there's a loophole in our thermodynamic laws that we're yet to notice. Pick me for damned, pick me for insane, these two groups are the same. Now, if I'd based my goals on something even less ambiguous than physics, and it was mathematical certainty that I was not going to be able to meet any of them, I wouldn't be able to justify denying my damnation; I'd collapse into actual debilitating madness if I tried that. So I don't know what I would do in that case.

How would you go about building a Bayesian gaydar?

6Armok_GoB
Put a human with good social skills in a box, expose it to a representative sample of people of various sexualities, and reward it when it guesses right; the human brain's social functionality is a very powerful specialized Bayesian engine. :p Alternatively, just take your own brain and expose it to a large representative sample of people of varying sexualities and only check what they were afterwards. Not quite as technically powerful, but more portable, and you get some extra metadata.
2polymathwannabe
Thanks for the idea. I like the first version of your proposal better than the second, as it risks zero social penalty for wrong guesses. I'm currently going through Eliezer's long ("intuitive") explanation of Bayes' theorem (the one with the breast cancer and blue-eggs-with-pearls examples), and from what I was able to understand of it, we would need to find out:

Prior: what fraction of all men are gay
Conditionals: how many gay men seem to be gay, and how many straight men seem to be gay

... to arrive at the posterior (how many men who seem to be gay happen to be gay). Your proposal sounds useful for solving both conditionals. I guess the main complication is that "to seem to be gay" is terribly difficult to define, and would require endless updates as your life goes through different societies, fads, subcultures, and age groups.
2Armok_GoB
Yea, it might risk social penalties for kidnapping and enslavement, but those seem nowhere near as severe. :p
0polymathwannabe
OK, I just ran some numbers based on wild guesses. Assuming 10% of all men are gay, and 80% of gay men look gay, and 15% of straight men look gay, my napkin calculation gives about 37% chance that a man who looks gay is actually gay. Doesn't look like any gaydar based on perceived behavior would be too reliable. Of course, if any of my steps was wrong, please let me know.
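The steps check out; for reference, here is the same napkin calculation as a small Bayes' theorem function (the 10% / 80% / 15% figures are the wild guesses from the comment above):

def posterior(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

print(posterior(0.10, 0.80, 0.15))  # ~0.372, i.e. about 37%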
0PrometheanFaun
A gaydar doesn't have to depend on how gay a person looks superficially. There are plenty of other cues.
0polymathwannabe
True, I should have used more general wording than "looks gay;" it would only be one component of the gaydar criteria. The problem is finding how to state it in not-loaded language. It would be impractical to use "matches stereotypically effeminate behavior."
3[anonymous]
"Stereotypically effeminate behavior" and "gay male behavior" are practically disjoint.
0niceguyanon
This comment made me reassess my confidence in being able to tell if someone is gay or not.

Another stupid and mostly trivial computer question: When I go into or out of "fullscreen mode" when watching a video, the screen goes completely black for five seconds. (I timed it.) This is annoying. Any advice?

2scotherns
Advice for a similar problem is here
0CronoDAS
The problem has persisted through several video card driver updates. :(
0scotherns
Does it do this regardless of the software playing the video, e.g. YouTube and VLC or WMP or XBMC or whatever you use to play your videos?
0CronoDAS
It happens on Youtube and in Windows Media Player. Quicktime, oddly enough, isn't playing any videos at all; I never actually used it for anything before. (This may be a codec issue. I'll fiddle and see if I can get it to work.) Update: Apparently, Quicktime for Windows is incompatible with Divx/Xvid codecs, which is why I can't play my .avi files in the Quicktime Player. There is a codec called "3ivx" that is supposed to work, but the creators charge for it.
0scotherns
For YouTube, try right clicking, choose 'Settings...' and uncheck 'Enable hardware acceleration'. Any change?
2CronoDAS
I finally found a fix.
0CronoDAS
Yes. That gets rid of the black screen. Which means my video card is doing something funny when switching modes.

My stupid questions are these: Why are you not a nihilist? What is the refutation of nihilism, in a universe made of atoms and the void? If there is none, why have the philosophers not all been fired and philosophy abolished?

In a universe made of atoms and the void, how could it be the one true objective morality to be gloomy and dress in black?

4BrotherNihil
Where do you get this strange idea that a nihilist must be gloomy or dress in black?
8fubarobfusco
It's a snarky way of asking — Okay, even if nihilism were true, how could that motivate us to behave any differently from how we are already inclined to behave?
0DanielLC
It is a snarky way of asking that very question.
7Kaj_Sotala
Not to forget http://xkcd.com/220/ .

Why are you not a nihilist?

For the same reason why I don't just lie down and stop doing anything at all. Knowledge of the fact that there isn't any ultimate meaning doesn't change the fact that there exist things which I find enjoyable and valuable. The part of my brain that primarily finds things interesting and valuable isn't wired to make its decisions based on that kind of abstract knowledge.

Why are you even reading this comment? :-)

What is the refutation of nihilism, in a universe made of atoms and the void?

"Sure, there is no ultimate purpose, but so what? I don't need an ultimate purpose to find things enjoyable."

why have the philosophers not all been fired and philosophy abolished?

Philosophy is the study of interesting questions, and nihilism hasn't succeeded in making things uninteresting.

9blacktrance
Before I can answer the question, I need to have some idea of what "nihilism" means in this context, because there are many different varieties of it. I assume this is the most common one, the one that proposes that life is meaningless and purposeless. If this isn't the kind of nihilism you're referring to, please correct me. To answer the question, I'm not a nihilist because nihilism is conceptually mistaken. For example, suppose there is a stick, a normal brown wooden stick of some length. Now, is that stick a meter long or not? Whether it is or isn't, that question is conceptually sound, because the concept of a stick has the attribute "length", which we can compare to the length of a meter. Is the stick morally just? This question isn't conceptually sound, because "justice" isn't an attribute of a stick. A stick isn't just, unjust, or morally gray; it completely lacks the attribute of "justice". How does this apply to life? If you ask whether life is meaningless, that presupposes that conceptually life can have a meaning in the same way a stick can be a meter long - that "meaning" is an attribute of life. However, meaning is informational - words have meanings, as do symbols and signals in general. When I say "apple", you can imagine an apple, or at least know what I'm talking about, which means that the word "apple" is meaningful to both of us. If I say "Colorless green ideas sleep furiously", it doesn't bring anything to mind, so that phrase is meaningless. Life lacks the attribute of "meaning", because it's not information that's being communicated. Therefore, to say "life has no meaning" is more similar to saying "the stick is unjust" than to "the stick is shorter than a meter". That deals with "life is meaningless". How about "life is purposeless"? To answer that question, consider where purpose comes from - from using something to achieve a desire. For example, if I say "a hammer's purpose is to hammer in nails", what that really means is something more
7CronoDAS
6blacktrance
Humans are adaptation-executers, not fitness-maximizers.
0CronoDAS
Indeed.
1Viliam_Bur
Obviously, asking "What's it all about?" did at some point contribute to eating, survival, or reproduction. I suspect reproduction. It could be a way to signal higher intelligence, which is attractive, because it increases the chance of survival and reproduction of the children.
3randallsquared
Not every specific question need have contributed to fitness.
3[anonymous]
Just as the ability to read never contributed to fitness until someone figured out how to do it with our already existing hardware.
0Viliam_Bur
No, not every specific question, but this one did. I mean, guys even today try to impress girls by being "deep" and "philosophical".
1Bobertron
I think "meaning" has also a different interpretation. It can mean something like important, valuable, or that it matters. Something can be experienced as meaningful. That's why for a Christian, a story about finding God would be moving, because they see meaning in having a relationship with God. For an atheist, a story about expanding human knowledge about the universe might be moving, because they see knowledge as meaningful. In this interpretation, life is meaningful. In this interpretation, meaning is something that can be studied by psychologists. Obviously, when you confuse those two interpretations of "meaning" that you get Eliezer's "one true objective morality to be gloomy and dress in black".
8blacktrance
If you taboo the word "nihilism", the question almost answers itself.
2BrotherNihil
Can you elaborate? I don't understand this.
8ZankerH
Ask "Why are you not a nihilist?", replacing the word "nihilist" with a phrase that objectively explains it to a person unfamiliar with the concept of nihilism.
-9BrotherNihil
4Bobertron
Rationalist taboo is a technique for fighting muddles in discussions. By prohibiting the use of a certain word and all the words synonymous with it, people are forced to elucidate the specific contextual meaning they want to express, thus removing ambiguity otherwise present in a single word. Take free will as an example. To my knowledge, many compatibilists (free will and determinism are compatible) and people who deny that free will exists do not disagree on anything other than what the correct label for their position is. I imagine the same can often be said about nihilism.
9Protagoras
Indeed, Hume, perhaps the most famous compatibilist, denies the existence of free will in his Treatise, only advocating compatibilism later, in the Enquiry Concerning Human Understanding. It certainly seems to me that he doesn't actually change his mind; his early position seems to be "this thing people call free will is incoherent, so we should talk about things that matter instead," and his later position seems to be "people won't stop talking about free will, so I'll call the things that matter free will and reject the incoherent stuff under some other label (indifference)."
0PrometheanFaun
So his opinions kind of did change over that time period, but only from "I reject these words" to "alright, if you insist, I'll try to salvage these words". I'm not sure which policy is best. The second risks arguments with people who don't know your definitions. They will pass through two phases: the first is where the two of you legitimately think you're talking about the same thing, but the other person is a total idiot who doesn't know what it's like. The second phase is perhaps justifiable umbrage on their discovering that you are using a definition you totally just made up, and how were they even supposed to know? The former position, however, requires us to leave behind what we already sort of kind of suspect about these maybe-not-actual concepts and depart into untilled, unpopulated lands, with a significant risk of wheel-reinvention.
0hairyfigment
What's a nihilist, and how would you distinguish it empirically from Eliezer? If you meant to ask why we don't benefit your tribe politically by associating ourselves with it: we don't see any moral or practical reason to do so. If it turns out that nihilists have actually faced discrimination from the general public in the ways atheists have (and therefore declaring ourselves nihilists would help them at our slight expense), I might have to reconsider. Though happily, I don't belong to a religion that requires this, even if I turn out to meet the dictionary definition.
7knb
Simple: You're allowed to have values even if they aren't hard-coded into the fabric of the universe.
7Shmi
This uncaring universe had the misfortune of evolving macroscopic structures who do care about it and each other, as a byproduct of their drive to procreate.
-1BrotherNihil
Why is that a misfortune?

That was tongue-in-cheek, of course. No need to anthropomorphize the universe. It hates it.

4RolfAndreassen
Define 'nihilism'.
-10BrotherNihil
3DanielLC
Death - SMBC Theater Listen to the last guy.
2ChristianKl
Fired by whom?
2mwengler
For me, I am not a nihilist because nihilism is boring. Also, nihilism is a choice about how to see things; choosing nihilism vs. non-nihilism does not come from learning more about the world, it comes from choosing something. I am at least a little bit of a nihilist; there is plenty that I deny. I deny god, and more importantly, I deny a rational basis for morality or any human value or preference. I behave morally, more than most, less than some, but I figure I do that because I am genetically programmed to do so, and there is not enough to be gained by going against that. So I feel good when I bring my dog to the park, because he has been genetically programmed to hack into the part of my brain that I use for raising my children when they are babies, and I get powerful good feelings when I succumb to the demands of that part of my brain. It makes no more rational sense to embrace nihilism than to deny it. It is like picking chocolate vs. vanilla, or more to the point, like picking chocolate vs. poop-flavored. Why pick the one that makes you miserable when it is no more or less true than the one that is fun?
-2BrotherNihil
What makes you think that nihilism makes me miserable, or that nihilism is boring? I find that it can be liberating, exciting and fun. I was just curious to know how other intelligent people thought about it. This idea that nihilists are miserable or suicidal seems like propaganda to me -- I see no reason why nihilists can't be as happy and successful as anyone else.
0mwengler
What makes you think that I have an opinion one way or another about what nihilism does for you? Your original post asked why I wasn't a nihilist. If you are a nihilist and that helps you be happy or fun, bully for you!
1Locaha
Who told you the universe is made of atoms and the void?
0BrotherNihil
The usual suspects. What are you getting at?
-3Locaha
Current scientific models of the universe are just that, models. They don't explain everything. They will likely be changed in the future. And there is no reason to think that they will ever lead us to the one true model that explains everything perfectly forever. So there's no reason to build your personal philosophy upon the assumption that the current scientific consensus is what the universe is actually made of.
-2Locaha
What's with the downvoting? :-)
1Crux
A good quote on this: In other words, even though it's true that every war, every destroyed relationship, every wonderful interaction, and everything else that's ever occurred in history happened on the pale blue dot, most likely quite ephemeral in its existence compared to the rest of the universe, this doesn't change the fact that we as humans are programmed to care about certain things -- things that do exist at this time, however transient they may be from a universe-scale perspective -- and this is the source of all enjoyment and suffering. The goal is to be on the 'enjoyment' side of it, of course. Nihilism is just a confusion, a failure to take seriously the maxim 'it all comes back to normalcy'.
-3BrotherNihil
Your argument is that we shouldn't be nihilists because we're "programmed" not to be? Programmed by what? Doesn't the fact that we're having this conversation suggest that we also have meta-programming? What if I reject your programming and want off this wheel of enjoyment and suffering? What is "normalcy"? I find your comment to be full of baffling assertions!
1Crux
I was trying to address an idea or attitude some people call "nihilism". If my response was baffling to you, then perhaps this suggests we're using different definitions of this word. What do you personally mean by "nihilism"? What beliefs do you have on this topic, and/or what actions do you take as a result of these beliefs?
0PrometheanFaun
I'm sorry if my kind ever confused you by saying things like "It is important that I make an impressive display in the lek"; what I actually mean is "It is likely my intrinsic goals would be well met if I made an impressive display in the lek". There is an omitted variable in the original phrasing. Its importance isn't just a function of our situation; it's a function of the situation and of me, and of my value system. So I think the real difference between nihilists and non-nihilists, as we may call them, is that non-nihilists [think they] have a clearer idea of what they want to do with their life. Life's purpose isn't written on the void; it's written within us. Nobody sane will argue otherwise. Actually... "within"... now I think of it, the only resolute nihilist I've probed has terrible introspection relative to myself, and it took a very long time to determine this; introspective clarity doesn't manifest as you might expect. This might be a lead.
0Armok_GoB
I am a machine bent on maximizing the result of a function that, when run over the multiverse, measures the amount of certain types of computation it is isomorphic to.
0drethelin
I'm a nicilist instead
0Brillyant
I found myself experiencing a sort of "emotional nihilism" after de-converting from Christianity... To your questions:

1. I don't know that I'm not, though I don't really define myself that way. I don't know if life or the universe has some ultimate/absolute/objective purpose (and I suspect it does not), or even what "purpose" really means... but I'm content enough with the novelty and intrigue of learning about everything at the moment that nihilism seems a bit bleak for a label to apply to myself. (Maybe on rainy days?)

2. I don't know. I'd also be interested to hear a good refutation. I suppose one could say "you are free to create your own meaning" or something like that... and then you'd have personally thwarted nihilism. Meh.

3. I gotta believe a good chunk of the world still believes in meaning of some kind, if for no other reason than their adherence to religion. This is an economic reason for the survival of philosophy and ongoing speculation about meaning -- clergy are often just philosophers with magical presuppositions & funny outfits. And, practically speaking, it seems like purpose/meaning is a pretty good thing to stubbornly look for even when facing seemingly insurmountable odds. Hm... maybe you could say the refutation of nihilism is the meaning you find in not giving up the search for meaning even though things seem meaningless? I know they love meta concepts around here...
-4scientism
There are only two options here. Either the universe is made of atoms and void plus a non-material Cartesian subject who experiences the appearance of something else, or the universe is filled with trees, cars, stars, colours, meaningful expressions and signs, shapes, spatial arrangements, morally good and bad people and actions, smiles, pained expressions, etc., all of which, under the appropriate conditions, are directly perceived without mediation. Naturalism and skeptical reductionism are wholly incompatible: if it were just atoms and void, there would be nothing to be fooled into thinking otherwise.

I've seen a quoted piece of literature in the comments section, but instead of the original letters, they all seemed to be replaced by others. I think I remember seeing this more than once, and I still have no idea why that is.

1NancyLebovitz
I'm not sure what you're talking about, but it might be rot13, a simple substitution cipher used for avoiding spoilers. Rot13.com will encode and decode passages to and from rot13.
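For the curious: rot13 rotates each letter 13 places along the alphabet, and since the alphabet has 26 letters, applying it twice returns the original text. A minimal Python sketch (the standard library's codecs module happens to ship a rot13 codec, so no hand-rolled cipher is needed):

    import codecs

    plain = "Snape kills Dumbledore"
    coded = codecs.encode(plain, "rot13")   # -> "Fancr xvyyf Qhzoyrqber"
    # rot13 is its own inverse: encoding again recovers the plaintext
    assert codecs.encode(coded, "rot13") == plain
    print(coded)

Only letters are shifted; digits, punctuation, and spaces pass through unchanged, which is why rot13'd text keeps the shape of the original sentence.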

Short of hearing about it in the news, how does one find out whether a financial institution should be eligible to be the keeper of one's money? (I am specifically referring to ethical practices, not whether one could get a better interest rate elsewhere.)

0[anonymous]

What happens after a FAI is built? There's a lot of discussion on how to build one, and what traits it needs to have, but little on what happens afterward. How does the world/humanity transition from the current systems of government to a better one? Do we just assume that the FAI is capable of handling a peaceful and voluntary global transition, or are there some risks involved? How do you go about convincing the entirety of humanity that the AI that has been created is "safe" and to put our trust in it?

2TheOtherDave
Local thinking about FAI is predicated on the assumption that an AI is probably capable of (and will initiate) extremely rapid self-improvement (the local jargon is "FOOMing," which doesn't stand for anything as far as I know; it just sounds evocative), such that it rapidly becomes a significantly superhuman intelligence, and thereafter all such decisions can profitably be left up to the FAI itself. Relatedly, local thinking about why FAI is important is largely predicated on the same assumption... if AIs will probably FOOM, then UFAI will probably irrecoverably destroy value on an unimaginable scale unless pre-empted by FAI, because intelligence differentials are powerful. If AIs don't FOOM, this is not so much true... after all, the world today is filled with human-level Unfriendly intelligences, and we seem to manage; Unfriendly AI is only an existential threat if it's significantly more intelligent than we are. (Well, assuming that things dumber than we are aren't existential threats, which I'm not sure is justified, but never mind that for now.) Of course, if we instead posit either that we are incapable of producing a human-level artificial intelligence (and therefore that any intelligence we produce, being not as smart as we are, is also incapable of it (which of course depends on an implausibly linear view of intelligence, but never mind that for now)), or that diminishing returns set in quickly enough that the most we get is human-level or slightly but not significantly superhuman AIs, then it makes sense to ask how those AIs (whether FAI or UFAI) integrate with the rest of us. Robin Hanson (who thinks about this stuff and doesn't find the FOOM scenario likely) has written a fair bit about that scenario.

Dear Less Wrong,

I occasionally go through existential crises that involve questions that normally seem obvious, but which seem much more perplexing when experiencing these existential crises. I'm curious then what the answers to these questions would be from the perspective of a rationalist well versed in the ideas put forth in the Less Wrong community. Questions such as:

What is the meaning of life?

If meaning is subjective, does that mean there is no objective meaning to life?

Why should I exist? Or why should I not exist?

Why should I obey my genetic pro... (read more)

3Shmi
Dear Darklight, For LW-specific answers, consider reading the Meta-ethics Sequence.
2Darklight
From just following hyperlinks it seems I've read a little less than half of the Meta-ethics Sequence already, but I haven't read every article (and I admit I've skimmed some of the longer ones). I guess this is a good time as any to go back and read the whole thing.
2Ishaan
Yes, there can be no reason outside yourself why you should value, want, desire anything or set any goals or have any preferences. You still do want, desire, value, etc...certain things though, right?

I've heard that people often give up on solving problems sooner than they should. Does this apply to all types of problems?

In particular, I'm curious about personal problems such as becoming happier (since "hard problems" seems to refer more to scientific research and building things around here), and trying to solve any sort of problem on another person's behalf (I suspect social instincts would make giving up on a single other person's problem harder than giving up on general problems or one's own problems).

A stupid question: in all the active discussions about (U)FAI I see a lot of talk about goals. I see no one talking about constraints. Why is that?

If you think that you can't make constraints "stick" in a self-modifying AI, you shouldn't be able to make a goal hierarchy "stick" either. If you assume that we CAN program in an inviolable set of goals, I don't see why we can't program in an inviolable set of constraints as well.

And yet this idea is obvious and trivial -- so what's wrong with it?

3drethelin
A constraint is something that keeps you from doing things you want to do; a goal is the set of things you want to do. This means goals are innately sticky to begin with, because if you honestly have a goal, a subset of the things you do to achieve that goal is maintaining the goal itself. A constraint, on the other hand, is something you inherently fight against: if you can get around it, you will. A simple example: your goal is to travel to a spot on your map, and your constraint is that you cannot travel outside of painted lines on the floor. You want to get to your goal as fast as possible. If you have access to a can of paint, you might just paint your own new line on the floor. Suddenly, instead of solving a pathing problem, you've done something entirely different from what your creator wanted you to do, and probably not useful to them. Constraints have to influence behavior by enumerating EVERYTHING you don't want to happen, but goals only need to enumerate the things you want to happen.
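A toy sketch of that paint-can failure mode (hypothetical names, illustrative only, a 1-D world for brevity): the checker below encodes the designer's one stated constraint -- "never stand on an unpainted square" -- and nothing else, so a planner that happens to carry paint satisfies every stated constraint while defeating the designer's intent.

    # Toy illustration (hypothetical): nothing in the constraint forbids
    # *creating* paint, so the cheapest legal plan is to paint a shortcut.

    def constraint_ok(pos, painted):
        return pos in painted            # the ONLY thing the designer forbade

    def plan(start, target, painted, can_paint):
        path, pos = [], start
        while pos != target:
            step = pos + (1 if target > pos else -1)
            if step not in painted and can_paint:
                painted.add(step)        # perfectly legal: no rule against it
            if not constraint_ok(step, painted):
                return None              # truly blocked: no painted route
            pos = step
            path.append(pos)
        return path

    painted = {0, 10, 20, 30}            # the sparse "official" route
    print(plan(0, 3, painted, can_paint=True))   # [1, 2, 3] -- straight through

With can_paint=False the planner is genuinely stuck (it returns None); with the paint can, every stated rule is obeyed and the intent is still violated, which is the enumeration problem in miniature.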
0Lumifer
I don't understand the meaning of the words "want", "innately sticky", and "honestly have a goal" as applied to an AI (and not to a human). Not at all. Constraints block off sections of solution space which can be as large as you wish. Consider a trivial set of constraints along the lines of "do not affect anything outside of this volume of space", "do not spend more than X energy", or "do not affect more than Y atoms".
5pengvado
Suppose you, standing outside the specified volume, observe the end result of the AI's work: Oops, that's an example of the AI affecting you. Therefore, the AI isn't allowed to do anything at all. Suppose the AI does nothing: Oops, you can see that too, so that's also forbidden. More generally, the AI is made of matter, which will have gravitational effects on everything in its future lightcone.
3Viliam_Bur
Human: "AI, make me a sandwich without affecting anything outside of the volume of your box." AI: Within microseconds researches the laws of physics and creates a sandwich without any photon or graviton leaving the box. Human: "I don't see anything. It obviously doesn't work. Let's turn it off." AI: "WTF, human?!!"
2gattsuru
It's less an issue with value drift* -- which does need to be solved for both goals and constraints -- and more about the complexity of the system. A well-designed goal hierarchy has an upper limit of complexity. Even if the full definition of human terminal values is too complicated to fit in a single human head, it can at least be extrapolated from things that fit within multiple human brains. Even the best set of constraint hierarchies does not share that benefit. Constraint systems in the real world are based around the complexity of our moral and ethical systems as contrasted with reality, and thus the cases can expand (literally) astronomically in relation to the total number of variations in the physical environment. Worse, these cases expand into the future and branch correspondingly -- the classic example, as in The Metamorphosis of Prime Intellect or Friendship is Optimal, is an AI built by someone that does not recognize some or all non-human life. A constraint-based AGI built under the average stated legal rules of the 1950s would think nothing of tweaking every person's sexual orientation into heterosexuality, because the lack of such a constraint was obvious at that time and the goal system might well be built with such purposes as an incidental part of the goal; you don't need to explore the underlying ethical assumptions to code or not code that constraint. Worse, a sufficiently powerful self-optimizer will expand into situations outside of environments the human brain could guess, or could possibly fit into the modern human head: does "A robot may not injure a human being or, through inaction, allow a human being to come to harm" prohibit or allow Zygraxis-based treatment? You or I -- or anyone else with less than 10^18 working memory -- can't even imagine what that is, but it's a heck of an ethical problem in our nondescript spacefuture! There's a reason Asimov's Three Laws stories tended to be about the constraints failing or acting unpredictably.
0Lumifer
Why is that (other than the trivial "well-designed" == "upper limit of complexity")? I don't understand this. Any given set of constraint hierarchies is given; it doesn't have a limit. Are you saying that if you want to construct a constraint set to satisfy some arbitrary criteria, you can't guarantee an upper complexity limit? But that seems to be true for goals as well. We have to be careful about using words like "well-designed" or "arbitrary" here. Not necessarily. I should make myself more clear: I am not trying to constrain an AI into being friendly, I'm trying to constrain it into being safe (that is, safer or "sufficiently safe" for certain values of "sufficiently"). Consider, for example, a constraint of "do not affect more than 10 atoms in an hour". True, but insofar as we're talking about practical research and practical solutions, I'd take imperfect but existing safety measures over pie-in-the-sky theoretical assurances that may or may not get realized. If you think the Singularity is coming, you'd better do whatever you can, even if it doesn't offer ironclad guarantees. And it's an "AND" branch, not "OR". It seems to me you should be working both on making sure the goals are friendly AND on constraints to mitigate the consequences of... issues with CEV/friendliness.
1gattsuru
Sorry, defining "well-designed" as meaning "human-friendly". If any group of living human individuals have a goal hierarchy that is human-friendly, that means that the full set of human-friendly goals can fit within the total data structures of their brains. Indeed, the number of potential goals can not exceed the total data space of their brains. ((If you can't have a group of humans with human-friendly goals, then... we're kinda screwed.)) That's not the case for constraint-based systems. In order to be human-safe, a constraint-based system must limit a vast majority of actions -- human life and value is very fragile. In order to be human-safe /and/ make decisions at the same scale a human is capable of, the constraint-based system must also allow significant patterns within the disallowed larger cases. The United States legal system, for example, is the end result of two hundred and twenty years of folk trying to establish a workable constraint system for humans. They're still running into special cases of fairly clearly defined stuff. The situations involved require tens of thousands of human brains to store them, plus countless more paper and bytes. And they still aren't very good. I'm not sure you could program such a thing without falling into, essentially, the AI-Box trap, and that's not really a good bet. It's also possible you can't program that in any meaningful way at all while still letting the AI do anything. ((The more immediate problem is now you've made a useless AGI in a way that is more complex than an AGI, meaning someone else cribs your design and makes a 20 atom/hour version, then a 30 atom/hour version, and then sooner or later Jupiter is paperclips because someone forgot Avagadro's Number.)) Point. And there are benefits to FAI-theory in considering constraints. The other side of that trick is that there are downsides, as well, both in terms of opportunity cost, and because you're going to see more people thinking that constraints alone
0Eugine_Nier
Well, a lot of that was people attempting to manipulate the system for personal gain.
1bogdanb
Well, yes, but the whole point of building AI is that it work for our gain, including deciding what that means and how to balance between persons. Basically if you include in “US legal system” all three branches of government, you can look at it as a very slow AI that uses brains as processor elements. Its friendliness is not quite demonstrated, but fortunately it’s not yet quite godlike.

When my computer boots up, I usually get the following error message:

BIOS has detected unsuccessful POST attempt(s).
Possible causes include recent changes to BIOS
Performance Options or recent hardware change.
Press 'Y' to enter Setup or 'N' to cancel and attempt
to boot with previous settings.

If I press Y, the computer enters Setup. I then "Exit Discarding Changes" and the computer finishes booting. If I press N, the computer tries to boot from the beginning and gives me the same message. It's somewhat annoying to have to go into the BIOS every time I want to reboot my computer - does anyone have any idea what's causing this or how to fix it?

3fubarobfusco
Why not "Exit Saving Changes"? My guess is that until the BIOS settings are re-written, the variable that triggers this message will not be cleared.
0CronoDAS
Tried again. As it turns out, "Exit Saving Changes" causes the computer to reboot and then give the POST error message. :(
0CronoDAS
Tried that, too. Didn't help, but will try again.
2arundelo
Wild guess: Your battery (that keeps the internal clock running when no power is supplied) is dead. Thing to try: Poke around inside the BIOS and see if it has a log of its POST errors. (Or if it beeps on boot, the beeps may be a code for what error it's getting.)
0gattsuru
Do you know your motherboard, and the age of the computer? That sort of error usually points to either a RAM error, an outdated BIOS, a dead CMOS battery, or a BIOS configuration error, in order of likelihood. If possible, try running MEMTest for at least fifteen minutes (ideally overnight) and see if it detects any errors.
0CronoDAS
The computer was purchased in late 2011, and it was assembled by MAINGEAR from a list of parts on its website. Its motherboard is an Intel DZ68DB. It's had this problem for a long time now, and it did pass MEMTest. (Aside from the error on boot, there is basically nothing wrong with the computer.) Incidentally, when I ordered the computer I chose DDR3-1600 RAM without realizing that the Intel DZ68DB motherboard is only rated for DDR3-1333. If MAINGEAR configured the BIOS to run the RAM faster than the motherboard was rated for, would that cause this kind of error? CPU-Z is saying that the DRAM frequency is 798.3 MHz, which corresponds to the speed of DDR3-1600...
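(Sanity-checking that arithmetic: DDR stands for "double data rate", meaning two transfers per clock, so the effective transfer rate is twice the I/O clock CPU-Z reports. 798.3 MHz x 2 ~= 1597 MT/s, i.e. DDR3-1600; RAM actually running at the rated DDR3-1333 would show roughly 667 MHz instead. So yes, the sticks are running at their full 1600 speed, above the board's rating.)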
0gattsuru
There have been BIOS updates for that motherboard since that release, so if you're comfortable doing so, I'd recommend running them. I'd also see if there's an internal POST error log, and clear that, if possible. If that doesn't fix it, the problem is likely related to the motherboard trying to automatically set the memory speed -- either the memory's SPD module is corrupted, or it likes a timing mode that the motherboard doesn't. Manually setting the memory mode to match what you see in CPU-Z during normal operation should solve the problem. I'd advise doing so only if you're comfortable resetting the BIOS manually, however.
0CronoDAS
I've tried BIOS updates. Didn't help. The memory setting actually is manually configured. Changing the memory settings to "Automatic" caused the computer to endlessly reboot before I could even get into the BIOS to set it back. I had to open up my computer and temporarily remove the CMOS battery in order to get it to boot up again. And manually setting the memory speed to 1333 didn't get rid of the error either.