Yet more "stupid" questions
This is a thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. The previous thread is at close to 500 comments.
I'm mid-twenties. Does it make sense to take a low-dose aspirin?
No, it does not. Aspirin reduces the risk of heart attacks and strokes, but it also causes adverse outcomes, most importantly by raising the risk of gastrointestinal bleeds. For the typical person in their mid-twenties the risk of a heart attack or stroke is so low that the benefit of aspirin will be almost nil: the absolute value of intervening is vanishingly small even though the proportional decrease in risk stays the same.
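To make that last point concrete, here is a toy calculation with purely made-up numbers (these are not real epidemiological figures): a 20% relative risk reduction applied to two very different baseline risks gives

$$
0.02\%/\text{yr} \times 20\% = 0.004 \text{ percentage points per year},
\qquad
2\%/\text{yr} \times 20\% = 0.4 \text{ percentage points per year}.
$$

The proportional reduction is identical, but the absolute benefit for the low-risk person is a hundred times smaller, so it is easily swamped by a roughly fixed bleeding risk.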
There are many possible effects of taking low-dose aspirin other than those described so far; it may reduce the risk of colon cancer, for instance, but there are many possible adverse outcomes too. Cyclooxygenase, the enzyme targeted by aspirin, is involved in many housekeeping functions throughout the body, in particular in the kidneys, the stomach, and possibly erectile tissue.
Studies examining risk versus benefit for low-dose aspirin have found that a cardiovascular risk of about 1.5%/year is necessary for the benefits of aspirin to outweigh the ill effects. No such studies have been conducted on healthy young individuals, and I don't think they should be: given that studies in people at a much higher cardiovascular risk than someone in their twenties have returned disappointing results, we should not expect any great benefit from such a treatment. Indeed, young people, men in particular, are much more likely to experience trauma than a cardiovascular event, and patients taking low-dose aspirin are much more likely to experience severe bleeding after trauma.
See this article for more information: http://www.sciencebasedmedicine.org/aspirin-risks-and-benefits/
I occasionally have dreams in which I am playing an RTS video game like Starcraft. In these, I am a disembodied entity seeing the world only as it might be displayed in such a game. During those dreams this feels natural and unsurprising, and I don't give the matter a second thought. In fact, I've been having these dreams for a while now and only recently noticed the odd fact that it's not me sitting at a computer playing the game; the game is the only thing in the world at all.
Do other people have dreams in which they are not human-shaped or otherwise experience from a perspective that is very different from real life?
I used to have Age of Empires dreams. I've even had Tetris dreams.
Tetris dreams are a well-known phenomenon, but the reports of them I've read are unclear as to the nature of the dreams themselves. Do you just see falling blocks? Or is it as if you are in a Tetris universe with nothing else? Can anyone comment or elaborate on the sensation?
I had numerous Tetris dreams during my peak of playing and for many months afterwards. My own experience was mostly going about my business in ordinary cityscapes, office spaces, rooms in my house, but with Tetris pieces constantly falling into gaps between objects. Rotate/drop was under my control but not always dependably so, sometimes creating an experience of panic as there was often some unknown but disastrous consequence of failure.
During this period the incidence of such dreams increased with more Tetris-playing, but they also occurred more often when I was stressed at work, in which case the Tetris shapes were also somehow related to the complex statistical/simulation programming I was doing in my day job.
I gave up Tetris cold-turkey when I began to see imaginary shapes falling between real objects during waking hours. Other games since then had similar but far smaller effects on my dream states.
I'm trying to recall; I haven't played Tetris in a few years. IIRC, it was like playing Tetris on my computer, but without anything in my peripheral vision.
I've seen top-down perspectives in dreams, such as those in 2D RPGs. I feel like I'm playing a video game, but I don't have an awareness of a controller or anything; the characters just do what I tell them, and the "screen" is my entire visual field. (The actual experience of playing a video game tends to be similar: I almost never think about the controller or my hand; I just make stuff happen.) I also tend not to have much of a kinesthetic sense in dreams I remember, either.
Another weird thing: Everything I try to type in dreams is invariably misspelled. Once, in a dream, I was trying to Google something, but the text I was "typing" in the search bar kept changing pretty much at random. Only the letters that I'm "looking at" during any given moment stay what they are.
Happens to me too, except instead of googling it's usually me trying to write something down, e.g. someone's phone number, and failing to make the text legible, or realising I wrote some nonsense instead of what I tried to write.
Actually, this is one of the techniques for lucid dreaming -- how to realize that you are in a dream. You need a test that will reliably give different results in reality and in dreams. Different things work for different people, but reading and writing is among frequent examples. Other examples: counting, or trying to levitate. (With levitation it is the other way round: it works only in dreams.)
Strange. I just now realized I have probably never used a computer in my dreams, although I spend most of my days at a computer. How is that possible? An ad-hoc explanation is that precisely because my life is so connected with computers, I don't perceive the computer as a "computer", but merely as an extension of myself, as another input/output channel. Most of my dreams are about being with people or walking in nature, and I actually do very little of that.
Failing to achieve any kind of goals is a very common topic of dreams.
I've had similar dreams.
In general I don't think I'm aware of my self/body in dreams. Occasionally I'm different people but don't notice.
I've had some dreams like that. A few were specifically of the game Supreme Commander, and I'm also occasionally in third person in dreams, as if I were watching from the screen of a third-person game. I don't think it's really "very different from real life"; it's close to the experience of being immersed in a video game. It's just that rather than overlooking details you're not paying attention to, those details simply don't exist, because it's a dream.
Is LSD like a thing?
Most of my views on drugs and substances are formed, unfortunately, by history and by invalid perceptions of their users and of those who most visibly appear to support their legality. I was surprised to find the truth about acid at least a little further toward "safe and useful" than my longtime estimation. This opens up a possibility for an attempt at recreational and introspectively therapeutic use, if only as an experiment.
My greatest concern would be that I would find the results of a trip irreducibly spiritual, or some other nonsense. That I would end up sacrificing a lot of epistemic rationality for some of the instrumental variety, or perhaps a loss of both in favor of living off of some big, new, and imaginary life changing experience.
In short, I'm comfortable with recent life changes and recent introspection, and I wonder whether I should expect a trip to reinforce and categorize those positive experiences, or else replace them with something farcical.
Also I should ask about any other health dangers, or even other non-obvious benefits.
One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.
When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".
I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.
I won't be able to do it justice in words, but I'd like to try.
If you value your current makeup as a "rationalist" - LSD will not necessarily help with that. Whatever your current worldview, it is not "the truth", it is constructed, and it will not be the same after you come down.
You can't expect a trip to do anything in particular, except maybe blow your mind. A trip is like finding out you were adopted. It's discovering a secret hidden in plain sight. It's waking up to realize you've never been awake before - you were only dreaming you were awake. It's finding out that everything familiar, everything you took for granted, was something else all along, and you had no idea.
No matter how much you've invested in the identity of "rationalist", no matter how much science you've read... Even if you know how many stars there are in the visible universe, and how many atoms. Even if you've cultivated a sense for numbers like that, real reality is so much bigger than whatever your perception of it is. I don't know how acid works, but it seems to open you in a way that lets more of everything in. More light. More information. Reality is not what you think it is. Reality is reality. Acid may not be able to show you reality, but it can viscerally drive home that difference. It can show you that you've been living in your mind all your life, and mistaking it for reality.
It will also change your sense of self. You may find that your self-concept is like a mirage. You may experience ego-loss, which is like becoming nobody and nothing in particular, only immediate sensory awareness and thought, unconnected to what you think of as you, the person.
I don't know about health dangers. I never experienced any. Tripping does permanently change the way you view the world. It's a special case of seeing something you can't un-see. Whether it's a "benefit" ... depends a lot on what you want.
(Created an alternative username for replying to this because I don't want to associate my LSD use with my real name.)
I'd just like to add a contrary datapoint. I had one pretty intense trip that you might describe as "fucking weird", which was certainly mind-blowing in a sense. My sense of time transformed: it stopped being linear and started feeling like a labyrinth that I could walk in. I alternately perceived the other people in the room as real, separate people or as parts of my own subconscious, and at one point it felt like my unity of consciousness shattered into a thousand different strands of thought, which I could perceive as complex geometric visualizations...
But afterwards, it didn't particularly feel like I'd learned anything. It was a weird and cool experience, but that was it. You say that one's worldview won't be the same after coming down, but I don't feel like the trip changed anything. At most it might've given me some mildly interesting hypotheses about the way the brain might work.
I'm guessing that the main reason for this might be that I already thought of my reality as being essentially constructed by my brain. Tripping did confirm that a bit, but then I never had serious doubts about it in the first place.
I don't think describing the experience itself is very helpful in answering the question. The comment seems as close to an answer of "yes, it's likely you would find the results of a trip irreducibly spiritual or some other nonsense" as anyone would actually give, but because of the vagueness that seems to be intrinsic to descriptions of the experience of a trip, I'm not even sure whether you're espousing such things or not.
In my experience, it is possible to bring parts of the experience back and subject it to analytical and critical thinking, but it is very challenging. The trip does tend to defy comprehension by the normal mode of consciousness, which is why descriptions have the quality you call "vagueness". In short, distilling more than "irreducibly spiritual nonsense" from the trip takes work, not unlike the work of organizing thoughts into a term paper. It can be done, and the more analytical your habits of thought to begin with, the more success I think you could expect.
I don't imbibe (nor, for that matter, take much of anything stronger than caffeine), so I can't offer any information about the experience of its effects on rationality.
From the literature, it has a relatively high ratio of activity threshold to lethal dose (even assuming the lowest supported toxic doses), but that usually doesn't include behavioral toxicity. Supervision is strongly recommended. There's some evidence that psychoactive drugs (even weakly psychoactive drugs like marijuana) can aggravate preexisting conditions or even trigger latent conditions such as depression, schizophrenia, and schizoid personality disorder.
Datapoint: another hallucinogen, ketamine, has been shown to effectively treat depression. Not sure if the mechanisms of LSD are similar.
The visual system is very complicated, and many different classes of drugs can cause hallucinations in different ways without the overall experience being similar.
Ketamine and LSD do not have similar mechanisms in the brain, nor (from what I've read) are their effects qualitatively similar. LSD is a psychedelic acting as an agonist at 5-HT_2A receptors (among other things, but that's what it shares with other classic psychedelics). Ketamine is a dissociative anesthetic acting as an antagonist at NMDA receptors. LSD is, however, effective against migraines at sub-hallucinogenic doses.
I am confused by discussions about utilitarianism on LessWrong. My understanding, which comes mostly from the SEP article, was that pretty much all variants of utilitarianism are based on the idea that each person's quality of life can be quantified--i.e., that person's "utility"--and these utilities can be aggregated. Under preference utilitarianism, a person's utility is determined based on whether their values are being fulfilled. Under all of the classical formulations of utilitarianism, everyone's utility function has the same weight when the aggregation is performed, hence the catchy phrase "greatest good for the greatest number".
However, I have also seen LW posts and comments talk about utilitarianism in relation to how much you should value the lives of people close to you compared to other people, and how much you should value abstract things like "freedom" relative to people's lives. This comment thread is one example. These discussions about valuing the lives of others and quantifying abstract values sound a lot like utility maximization under rational choice theory rather than utilitarianism.
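One way to make the distinction precise (my own notation, not taken from either source): classical utilitarianism ranks an outcome by an impartial sum over everyone's utilities, while utility maximization in rational choice theory ranks it by a single agent's utility function, which is free to weight friends, strangers, and abstract values like "freedom" however that agent happens to care about them:

$$
\text{utilitarianism: } \max_x \sum_{i=1}^{N} U_i(x)
\qquad\text{vs.}\qquad
\text{rational choice: } \max_x U_{\text{agent}}(x).
$$

Arguments about how much you should value people close to you relative to distant strangers look like arguments about the shape of $U_{\text{agent}}$, not about the impartial sum.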
So are people conflating utility maximization and utilitarianism, am I getting confused and misunderstanding the distinction, or is something else going on?
Often, yes.
What fiction should I read first?
I have read pretty much nothing but MoR and books I didn't like for school, so I don't really know what my preferences are. I am a mathematician and a Bayesianist with an emphasis on the more theoretical side of rationality. I like smart characters that win. I looked at some recommendations on other topics, but there are too many options. If you suggest more than one, please describe a decision procedure that uses information that I have and you don't to narrow it down.
Terry Pratchett's Discworld series. I recommend starting with Mort (the fourth book published). The first two books are straight-up parodies of fantasy cliches that are significantly different from what comes afterward, and the third book, Equal Rites, I didn't care for very much. Pratchett said that Mort was when he discovered plot, and it's the book that I recommend to everyone.
I can second Discworld.
I particularly enjoyed the City Watch series. It also seems to be the most "non-ridiculous" and down to earth, which can help at the start.
It actually took me a while to warm up to the Watch books; when I read Guards! Guards!, I was expecting more laugh-out-loud moments of the kind there were in the sillier early books.
/me read Discworld in publication order
Update: I decided on Permutation City, and was unable to put it down until it was done. I am very happy with the book. I am a lot more convinced now that I will eventually read almost all of these, so the order doesn't matter as much.
I liked "Diaspora" more.
Well, if you liked MoR, there are the two other Big Rationalist Fanfics:
- http://www.fimfiction.net/story/62074/friendship-is-optimal (don't judge it because it's about ponies - it's really good even if you don't like them) (if you like it, don't forget to check out the meta-fanfics)
- http://luminous.elcenia.com/story.shtml (as good as the thing it's parodying is terrible)
Also in a similar style: http://www.sagaofsoul.com/
Then there are the sci-fi classics, if you're willing to shell out some money (no links for these). Here are a few good ones to get you started:
- Permutation City
- Accelerando
- Diaspora
- A Fire Upon the Deep
This should be enough to get you started. I can give you MUCH more if you want; maybe tell me some other things you like. Finding stuff like this to specification is basically what I do.
"smart characters that win"
Miles Vorkosigan saga, Ender's Game, anything by Neal Stephenson.
I started reading Ender's and the world didn't seem to make enough sense to keep me immersed.
Give Neal Stephenson a go. Snow Crash and Cryptonomicon are good starting points.
First is probably Bujold, specifically her Miles Vorkosigan series.
I think of Vinge more in terms of awesome author than awesome characters, but he does have some pretty impressive characters.
Lee Child has an intelligent good guy and intelligent associates vs. intelligent bad guys. (Not sf.)
You may, however, come to strongly dislike the protagonist later in the series.
Miles? He does some douchebaggy things but then he grows up. It's one of my favorite character arcs.
Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.
Re "smart characters that win," I recommend these from my random reading history:
The Pillars of the Earth and A World Without End by Ken Follett
River God by Wilbur Smith
Singularity Sky and Iron Sunrise by Charles Stross
And Then There Were None by Agatha Christie
And as for specifically rationalist stories, you might want to check out the His Dark Materials trilogy by Philip Pullman.
What? No! His Dark Materials is specifically anti-Christian, but the characters are not AT ALL rationalists. They often do stupid things and everything gets saved by random deus ex machina rather than cunning plots. It's an inverse Narnia, which is not rationality.
SF: Go to the Amazon Kindle store and read the first chapters (free samples) of:
Vernor Vinge, A Fire Upon the Deep. The finest example of classic (i.e. space ships, politics and aliens!) SF there is.
Lois McMaster Bujold: a large sample of the first book in the Vor saga. http://www.baen.com/chapters/W200307/0743436164.htm?blurb
If you like Harry for being a high-competence chaos magnet, this should scratch that itch in just the right spot.
It's not specifically rationalist, but Dune is what first comes to mind for "smart characters that win", at least in the first book.
I recommend pretty much anything by Jack Vance. If you like fantasy settings, read "Lyonesse", "Cugel's Saga" and "Rhialto the Marvellous". If you like sci-fi settings, try "Araminta Station" , "Night Lamp" and "Alastor". For a quaint mix of the two, try "Emphyrio" or "Languages of Pao". Vance wrote a bunch of great stuff, so if you like his first book, you have heaps more to look forward to.
Also "Name of the Wind" and "Wise Man's Fear" by Patrick Rothfuss are pretty good.
I also second "Ender's Game".
How do you cure "something is wrong on the Internet" syndrome? It bugs me when people have political opinions that are simplistic and self-congratulating, but I've found that arguing with them wastes time and energy and rarely persuades them.
Really think about how very much is wrong on the internet compared to your capacity to try to correct it. I think this might be a case of cultivating scope sensitivity.
Or (which is what I think I do) combine that with a sense that giving a little shove towards correctness is a public service, but it isn't a strong obligation. This tones the compulsion down to a very moderate hobby.
For me, debating with people on LessWrong somehow cured the syndrome. Now when I see a political debate among non-LessWrongians, the participants seem like retarded people -- I no longer expect them to be reasonable; I don't even expect them to be able to understand logical arguments and process them correctly; I don't feel any hope of conveying anything meaningful to any of them. (At best we could have an illusion of understanding.) Speaking with them would be like speaking with a rock; certainly not tempting.
I am not saying this is a correct model of the world. It is probably exaggerated a bit. I'm just explaining that this is how I feel, and this is what cured the syndrome.
These days the syndrome manifests mostly when I'm speaking with someone who I think could be rational -- if they feel like a potential LW candidate. It usually ends with me revising my opinion about the candidate, and silently stopping.
So, for me the cure is feeling that the inferential distance between typical internet discussion and rational discussion is so huge that I don't have a chance to overcome it in one debate.
Cultivate a sense of warm satisfaction every time you avoid a pointless online debate.
I just became unwilling to devote the effort to replying.
Realize that it's not their fault, they are just automatons with faulty programming.
How does stage hypnotism "work?"
Based on the descriptions of thoughtful, educated people who practise hypnosis, it seems useful to think of it as a "suite" of psychological effects such as suggestion, group conformity, hype, etc., rather than a single coherent phenomenon.
Not sure exactly what you want to know here, but here are a few basic points:
Hypnotized people are not unconscious, rather they are fully awake and focused.
Brain scans don't show any signs of abnormal brain activity during hypnosis.
Some psychologists argue hypnotized people are just fulfilling the socially expected role for a hypnotized person.
That depends on what you consider "abnormal". The states appear to be the same kind of states that occur in "normal" functioning, but they appear out of the context that they normally appear in. For example, according to one study a person exposed to a painful stimulus and one acting out a hypnotic suggestion to feel pain show similar patterns of brain activation, but a person told to "imagine" feeling pain shows a different one.
In general, brain scans do tend to show a difference between hypnotized subjects and subjects asked to pretend to be hypnotized.
My interpretation of these results is that hypnosis consists of the conscious mind telling the perceptual systems to shut up and do what they're told.
Do you know how normal hypnotism works?
No
The subject basically pretends that everything the hypnotist says is true. Have you ever played a video game and got so wrapped up in the virtual world that you just stopped noticing the real world? That's called immersion, and it's achieved by keeping your attention focused on the game. When your attention drifts away from the game, you start noticing that it's 2 am or that you've been playing for four hours, and you remember that you are not in the video game, you're just playing a video game. But as long as your attention remains on the game, you feel like you are actually living in the video game's world. Gamers love the feeling of immersion, so developers put a lot of work into figuring out how to keep gamers' attention, which maintains the immersion.
Hypnosis works on the same principle. The hypnotist uses the patient's full attention to create an imaginary world that feels real to the patient. The difference between video games and hypnosis is that hypnosis patients actively give their attention to the hypnotist, while gamers passively expect the game to take their attention. When a hypnotic induction starts, the subject is asked to imagine something in great detail, effectively putting the onus on the subject to make sure their attention doesn't drift. But when a video game starts, the gamer just watches the screen and expects the game to be interesting enough to keep her attention.
Hypnotism is more immersive than video games because the subject is helping the hypnotist keep her attention. This allows the hypnotist to create a virtual reality that is more convincing than video games. But it's still just a game of pretend.
From Derren Brown's Tricks of the Mind:
A professional hypnotist once told me that it is very difficult to hypnotize "mathematicians" (by which he meant math, physics, and computer science students), because (this was his interpretation) they are too well connected with reality and will not accept nonsense. But he also said that given enough time and trying different hypnotists, probably everyone can be hypnotized.
This happened at a hypnosis training camp, where this guy had an interesting idea: to teach hypnosis more efficiently, he would hypnotize all the participants and give them hypnotic commands to remember the lessons better. Then he would teach the theory and let us do the exercises, as usual. Also, he said that in order to learn hypnosis it is better to be hypnotized first, because then you know what it feels like to be hypnotized, and that knowledge is very useful when hypnotizing others (you have better intuition about what can and cannot work). This strategy seemed to work for many participants, most of whom were psychology students. Only two people in the group couldn't be hypnotized: me and one girl, both students of computer science. The only time in my life when I regretted not being more susceptible to hypnosis. So in the end, all I learned was some theory.
"Connected to reality" is in this context a nice way of saying that someone can't let go and relax. Computer Science/Physics/Math people especially have a problem with forgetting numbers because numbers are way more important for them then the usual person.
That's not about having an intuition about what works. Part of hypnotising somebody else effectively involves going into a trance state yourself.
Can someone explain the payoff of a many worlds theory? What it's supposed to buy you?
People talk like it somehow avoids the issue of wave function collapse, but I just see many different collapsed functions in different timelines.
MWI or non-ontological collapse gets you to a place where you can even entertain the notion that the framework of Quantum Mechanics is correct and complete, so that:
How is this preferable to the "shut up and calculate" interpretation of QM?
Is 'unphysical' anything at all like 'unchristian'? In other words, is 'un' modifying 'physics' or 'physicists'?
It's modifying "physics": a theory that doesn't act like physics; a theory that produces no new predictions but invents details to shuffle our ignorance into different, more palatable forms.
I'm thinking of, on the one hand, objective collapse, and on the other hand, global hidden variables about imagined real states -- variables which, in order to be anything like compatible with QM, must mysteriously shuffle around so that each time you measure one, that is the end of its domain of applicability and you'll never be able to use that information for anything.
Hi, I'm new here and have some questions regarding editing and posting. I read through http://wiki.lesswrong.com/wiki/Help:User_Guide and http://wiki.lesswrong.com/wiki/FAQ but couldn't find the answers there, so I decided to ask here. Probably I overlooked something obvious and a link will suffice.
How do I add follow-up links to a post? Most Main and Sequences posts have them, but I'm unable to add them to my post. Note: I posted in Discussion as recommended because these were my first posts. I didn't get any feedback suggesting I change that, but I'd nonetheless like to cross-link them, and I intend to post more of the same kind. How can I add these follow-up thingies?
How do I create a user profile? It appears that some users do have profiles, even with pictures, and some, like EY, have real pages. There is no button to create/edit one. I suspect it is somewhere in the Wiki but I can't find it.
Is there a guide to tags? I'd like to use a common tag for my posts on "parenting".
The "show help" box at the lower right of the comment field gives you information on the markdown methods for emphasis and links and such.
I'm pretty sure that you just use links to your other posts for follow up links, unless I'm missing something about your question.
What about profiles? How can I create one? I see that many users have profiles: http://lesswrong.com/search/results?cx=015839050583929870010%3A-802ptn4igi&cof=FORID%3A11&ie=UTF-8&q=profile&sa=Search&siteurl=lesswrong.com%2F&ref=lesswrong.com%2Fsearch%2Fresults%3Fq%3Dprofile%26sa%3DSearch%26siteurl%3Dlesswrong.com%26ref%3Dlesswrong.com%26ss%3D703j91859j7&ss=760j108736j7
(by the way: is there a way to create shorter URLs for simple searches? I tried http://lesswrong.com/search/results?q=profile but that comes up empty)
What about the profile page? How do I create one?
Set up an account on the Wiki, with the same name as your LessWrong account. Then make a user page for it. After a day, LW will automatically use that to make your profile page. (Thanks to gwern for informing me about this.)
Thank you. I'm just creating http://wiki.lesswrong.com/mediawiki/index.php?title=User:Gunnar_Zarncke and hope that it will get linked to http://lesswrong.com/user/Gunnar_Zarncke/
Halt. I have a problem here: saving doesn't seem to work. The page stays empty and I can't leave the edit area. Same for my talk page. The wiki appears to be slow overall.
Sounds like you've been hit by the edit filter: I've been trying out disabling page creation for users younger than 3 or 4 days. It's supposed to be giving you a warning explaining that, though.
Indeed. Now it works. There definitely was no warning or anything related.
Try again, maybe? I haven't had a problem with the wiki before...
I still can't save. The page stays empty. A few more notes:
No problem with markdown.
As for the follow-up links, I checked again and these are normal links. I'm somewhat surprised that they are used that consistently.
Can you also provide a tip on tags?
I didn't answer about tags because I don't know of a guide.
I just found that if you search on tag [word that you think might be a good tag], you'll get lw articles with that tag, but that would be a process of exploration rather than knowing about common tags.
Then I assume that tags are used by intuition. I just invented a tag and will use it consistently.
I found that it is possible to list all posts with a tag via a short URL, e.g. on parenting it is
http://lesswrong.com/tag/parenting/
But this doesn't show my postings with that tag. Can it be that only posts in Main are found that way? If so, is there a different shortcut that will (also) list hits in Comments?
discussion/parenting. Also, I think tagged articles are sorted by old first, opposite to most things.
Why aren't people preserved cryogenically before they die?
Because under most current legal systems this is called "murder".
Is the Fun Theory Sequence literally meant to answer "How much fun is there in the universe?", or is it more intended to set a lower bound on that figure? Personally I'm hoping that once I become a superintelligence, I'll have access to currently unimaginable forms of fun, ones that are vastly more efficient (i.e., much more fun per unit of resource consumed) than what the Fun Theory Sequence suggests. Do other people think this is implausible?
Assuming that you become some kind of superintelligence, I'd expect you to find better ways of amusing yourself, yes; especially if you're willing and able to self-modify.
Suppose that energy were not conserved. Can we, in that case, construct a physics so that knowledge of initial conditions plus dynamics is not sufficient to predict future states? (Here 'future states' should be understood as including the full decoherent wave-function; I don't care about the "probabilistic uncertainty" in collapse interpretations of QM.) If so, is libertarian free will possible in such a universe? Are there any conservation laws that could be "knocked out" without giving rise to such a physics; or conversely, if conservation of energy is not enough, what is the minimum necessary set?
Conservation of energy can be derived in Lagrangian mechanics from the assumption that the Lagrangian has no explicit time dependence. That is equivalent to saying that the dynamics of the system do not change over time. If the mechanics are changing over time, it would certainly be more difficult to predict future states, and one could imagine the mechanics changing unpredictably over time, in which case future states could be unpredictable as well. But now we don't just have physics that changes in time, we have physics that changes randomly.
I think I find that thought more troubling than the lack of free will.
(I know of no reason why any further conservation laws would break in a universe such as that, so long as you maintain symmetry under translations, rotations, CPT, etc. Time-dependent Lagrangians are not exotic. For example, a physicist might construct a Lagrangian of a system and include a time-changing component that is determined by something outside of the system, like say a harmonic oscillator being driven by an external power source.)
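As a sketch of the derivation mentioned above (the standard textbook identity, skipping the intermediate algebra): define the energy function

$$
E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L ,
\qquad\text{and along solutions of the Euler-Lagrange equations}\qquad
\frac{dE}{dt} = -\frac{\partial L}{\partial t} .
$$

So if $L$ has no explicit time dependence, $E$ is conserved; for a driven system like the externally powered oscillator above, the same identity tells you exactly how fast energy is being pumped in or out.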
I don't see any direct link between determinism and conservation of energy. You can have one or the other or both or none. You could have laws of physics like "when two protons collide, they become three protons" -- deterministic but without conservation of energy.
As for "libertarian free will" I'm not sure what you mean by that, but free will is concept that must be dissolved, not answered "it exists" or "it doesn't exist", and anyway I don't see the link between that and the rest.
Liouville's theorem is more general than conservation of energy, I think, or at least it can hold even if conservation of energy fails. You can have a system with a time-dependent Hamiltonian, for instance, and thus no energy conservation, but with phase space volume still preserved by the dynamics. So this would be a deterministic system (one where phase space trajectories don't merge) without energy conservation.
As for the minimum necessary set of conservation laws that must be knocked out to guarantee non-determinism, I'm not sure. I can't think of any a priori reason to suppose that determinism would crucially rely on any particular set of conservation laws, although this might be true if certain further constraints on the form of the law are specified.
If I understood the Wiki article correctly, the assumption needed to derive Liouville's theorem is time-translation invariance; but this is the same symmetry that gives us energy conservation through Noether's theorem. So, it is not clear to me that you can have one without the other.
Liouville's theorem follows from the continuity of transport of some conserved quantity. If this quantity is not energy, then you don't need time-translation invariance. For example, forced oscillations (with explicitly time-dependent force, like first pushing a child on a swing harder and harder and then letting the swing relax to a stop) still obey the theorem.
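A quick way to see why this doesn't require time-translation invariance (sketching the standard argument): Liouville's theorem only needs Hamilton's equations to hold at each instant, because they make the phase-space flow divergence-free regardless of whether $H$ depends explicitly on $t$:

$$
\dot{q} = \frac{\partial H}{\partial p}, \quad \dot{p} = -\frac{\partial H}{\partial q}
\;\Longrightarrow\;
\frac{\partial \dot{q}}{\partial q} + \frac{\partial \dot{p}}{\partial p}
= \frac{\partial^2 H}{\partial q\, \partial p} - \frac{\partial^2 H}{\partial p\, \partial q} = 0 .
$$

So phase-space volume is preserved even when $H = H(q, p, t)$ and energy is not conserved.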
Is reading fiction ever instrumentally useful (for a non-writer) compared to reading more informative literature? How has it been useful to you?
I read fiction about 1/3 of the time and nonfiction 2/3s of the time. When reading non-fiction I often spend idle moments in my day lost in abstract thought about concepts related to the non-fiction book I'm reading. I've noticed when reading novels, I'm far more observant about people in my personal life and their thoughts and motivations. This is especially true when reading works with thoughtful and observant POV characters (especially detective fiction and mystery novels). I think fiction, like music, can serve to frame your mind-state in a certain way.
Does the unpredictability of quantum events produce a butterfly effect on the macro level? i.e., since we can't predict the result of a quantum process, and our brains are composed of eleventy zillion quantum processes, does that make our brains' output inherently unpredictable as well? Or do the quantum effects somehow cancel out? It seems to me that they must cancel out in at least some circumstances or we wouldn't have things like predictable ball collisions, spring behavior, etc.
If there is a butterfly effect, wouldn't that have something to say about Omega problems (where the predictability of the brain is a given) and some of the nastier kinds of AI basilisks?
Some systems exhibit a butterfly effect (a.k.a. chaos); some don't. The butterfly effect is where (arbitrarily) small changes to the conditions of the system can totally change its future course. The weather is a good example of this. The change caused by a butterfly flapping its wing differently will amplify itself until the entire Earth's weather is different from what it would have been. But other systems aren't like that. They're more "stable". For example, if you change the position of any individual atom in my computer it won't make any difference to the computations I'm running. Other things are predictable just because we don't give time for any changes to develop. For example, ball collisions are predictable, but if we study many ball collisions in a row, like a billiards "trick shot", then hitting the initial ball slightly differently will make a huge difference.
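Here's a minimal numerical sketch of that "little changes grow" behaviour, using the logistic map as a stand-in for a chaotic system (the map and its parameter are just my illustrative choices, nothing specific to weather or brains):

```python
# Two trajectories of the logistic map that start a millionth apart.
# At r = 3.9 the map is chaotic, so the tiny initial difference is
# amplified until the two trajectories are completely decorrelated.
def step(x, r=3.9):
    return r * x * (1 - x)

x_a, x_b = 0.200000, 0.200001
for _ in range(40):
    x_a, x_b = step(x_a), step(x_b)

print(abs(x_a - x_b))  # of order 0.1-1 after 40 steps, not 1e-6
```

A "stable" system would instead shrink that initial difference (replace r = 3.9 with, say, r = 2.5 and the two trajectories converge to the same fixed point).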
You ask about quantum events. For chaotic systems, deviations caused by quantum events will indeed cause a butterfly effect.
So whether or not the brain is predictable depends on to what extent it's chaotic, and to what extent it's stable. I suspect that it's chaotic, in the sense that a small tweak to it could totally change the way a thought process goes. But over time my brain will be predictable "on average": I'll behave in ways matching my personality. Similarly, a butterfly flapping its wings might change when it rains, but it'll still rain more in Bergen than in the Sahara.
I don't think this says much about Omega problems. Quantum butterfly effects will (I suspect) stop Omega exactly simulating my thought process, but I reckon it could still predict my choice with very high confidence just by considering my most likely lines of thought.
But it will change the weather just like the butterfly.
The butterfly effect kicks in wherever there's something unstable -- wherever there's a system where little changes grow. Billiard balls do this, for instance, which is why it's harder to hit the cue ball so it hits the 4 so it hits the 1 so it hits the 5 than to hit the cue ball so it hits the 5 (assuming the same total ball travel distance).
Quantum noise is no less capable of doing this than anything else. The reason macro objects look solid has little to do with special cancellation and a lot to do with how tightly bound solid objects are. I suppose that's a special case of cancellation, but it's a really special case.
Omega-like problems are hypotheticals, and speaking of quantum indeterminacy in respect to them is fighting the hypothetical. Some versions word it so if Omega can't get a reliable answer he doesn't even play the game, or withholds the money, or kicks you in the shins or something - but those are just ways of getting people to stop fighting the hypothetical.
Every now and then, there are discussions or comments on LW where people talk about finding a "correct" morality, or where they argue that some particular morality is "mistaken". (Two recent examples: [1] [2]) Now I would understand that in an FAI context, where we want to find such a specification for an AI that it won't do something that all humans would find terrible, but that's generally not the context of those discussions. Outside such a context, it sounds like people were presuming the existence of an objective morality, but I thought that folks on LW rejected that. What's up with that?
Objective morality in one (admittedly rather long) sentence: for any moral dilemma, there is some particular decision you would make after a thousand years of collecting information, thinking, upgrading your intelligence, and reaching reflective equilibrium with all other possible moral dilemmas; this decision is the same for all humans, and is what we refer to when we say that an action is 'correct'.
I find that claim to be very implausible: to name just one objection to it, it seems to assume that morality is essentially "logical" and based on rational thought, whereas in practice moral beliefs seem to be much more strongly derived from what the people around us believe in. And in general, the hypothesis that all moral beliefs will eventually converge seems to be picking out a very narrow region in the space of possible outcomes, whereas "beliefs will diverge" contains a much broader space. Do you personally believe in that claim?
I'm not sure what I was expecting, but I was a little surprised after seeing you say you object to objective morality. I probably don't understand CEV well enough and I am pretty sure this is not the case, but it seems like there is so much similarity between CEV and some form of objective morality as described above. In other words, if you don't think moral beliefs will eventually converge, given enough intelligence, reflection, and gathering data, etc, then how do you convince someone that FAI will make the "correct" decisions based on the extrapolated volition?
CEV in its current form is quite under-specified. I expect that there would exist many, many different ways of specifying it, each of which would produce a different CEV that would converge at a different solution.
For example, Tarleton (2010) notes that CEV is really a family of algorithms which share the following features:
He comments:
Although one of Eliezer's desired characteristics for CEV was to ”avoid creating a motive for modern-day humans to fight over the initial dynamic”, a more rigorous definition of CEV will probably require making many design choices for which there will not be any objective answer, and which may be influenced by the designer's values. The notion that our values should be extrapolated according to some specific criteria is by itself a value-laden proposal: it might be argued that it was enough to start off from our current-day values just as they are, and then incorporate additional extrapolation only if our current values said that we should do so. But doing so would not be a value-neutral decision either, but rather one supporting the values of those who think that there should be no extrapolation, rather than of those who think there should be.
I don't find any of these issues to be problems, though: as long as CEV found any of the solutions in the set-of-final-values-that-I-wouldn't-consider-horrible, the fact that the solution isn't unique isn't much of an issue. Of course, it's quite possible that CEV will hit on some solution in that set that I would judge to be inferior to many others also in that set, but so it goes.
That argument seems like it would apply equally well to non-moral beliefs.
I assume the same but instead of "all humans" the weaker "the people participating in this conversation".
At least some of the prominent regulars seem to believe in objective morality outside of any FAI context, I think (Alicorn? palladias?).
The connotations of "objective" (also discussed in the other replies in this thread) don't seem relevant to the question about the meaning of "correct" morality. Suppose we are considering a process of producing an idealized preference that gives different results for different people, and also nondeterministically gives one of many possible results for each person. Even in this case, the question of expected ranking of consequences of alternative actions according to this idealization process applied to someone can be asked.
Should this complicated question be asked? If the idealization process is such that you expect it to produce a better ranking of outcomes than you can when given only a little time, then it's better to base actions on what the idealization process could tell you than on your own guess (e.g. desires). To the extent your own guess deviates from your expectation of the idealization process, basing your actions on your guess (desires) is an incorrect decision.
A standard example of an idealization dynamic is what you would yourself decide given much more time and resources. If you anticipate that the results of this dynamic can nondeterministically produce widely contradictory answers, this too will be taken into account by the dynamic itself, as the abstract you-with-more-time starts to contemplate the question. The resulting meta-question of whether taking the diverging future decisions into account produces worse decisions can be attacked in the same manner, etc. If done right, such process can reliably give a better result than you-with-little-time can, because any problem with it that you could anticipate will be taken into account.
A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the "territory" that moral reasoning should explore, a criterion of correctness. It is a hard-to-use criterion of correctness, you might need to build a FAI to actually access it, but at least it's meaningful, and it illustrates the way in which many ways of thinking about morality are confused.
(As an analogy, we might posit the problem of drawing an accurate map of the surface of Pluto. My argument amounts to pointing out that Pluto can be actually located in the world, even if we don't have much information about the details of its surface, and won't be able to access it without building spacecraft. Given that there is actual territory to the question of the surface of Pluto, many intuition-backed assertions about it can already be said to be incorrect (as antiprediction against something unfounded), even if there is no concrete knowledge about what the correct assertions are. "Subjectivity" may be translated as different people caring about surfaces of different celestial bodies, but all of them can be incorrect in their respective detailed/confident claims, because none of them have actually observed the imagery from spacecraft.)
I think that such a specification probably isn't the correct specification of the territory that moral reasoning should explore. By analogy, it's like specifying the territory for mathematical reasoning based on idealizing human mathematical reasoning, or specifying the territory for scientific reasoning based on idealizing human scientific reasoning. (As opposed to figuring out how to directly refer to some external reality.) It seems like a step that's generally tempting to take when you're able to informally reason (to some extent) about something but you don't know how to specify the territory, but I would prefer to just say that we don't know how to specify the territory yet. But...
Maybe I'm underestimating the utility of having a specification that's "at least meaningful" even if it's not necessarily correct. (I don't mind "hard-to-use" so much.) Can you give some examples of how it illustrates the way in which many ways of thinking about morality are confused?
The usual Typical Mind Fallacy which is really REALLY pervasive.
I wrote a post to try to answer this question. I talk about "should" in the post, but it applies to "correct" as well.
I just assumed it meant "My extrapolated volition" and also "your extrapolated volition" and also the implication those are identical.
Is conservation of matter a problem for the many worlds interpretation of quantum physics?
I don't believe I am explaining MWI instead of arguing against it... whatever has this site done to me? Anyway, grossly simplified, you can think of the matter as being conserved because the "total" mass is the sum of masses in all worlds weighted by the probability of each world. So, if you had, say, 1 kg of matter before a "50/50 split", you still have 1 kg = 0.5*1 kg + 0.5*1 kg after. But since each of the two of you after the split has no access to the other world, this 50% prior probability is a 100% posterior probability.
Also note that there is no universal law of conservation of matter (or even energy) to begin with, not even in a single universe. It's just an approximation given certain assumptions, like time-independence of the laws describing the system of interest.
LOL @ your position. Agree on most.
Disagree on the conservation of energy though. Every interaction conserves energy (unless you know of time-dependent laws?). Though nothing alters it, we only experience worlds with a nontrivial distribution of energies (otherwise nothing would ever happen) (and this is true whether you use MWI or not)
For a huge oversimplification:
The cosmos is a big list of world-states, of the form "electrons in positions [(12.3, -2.8, 1.0), (0.5, 7.9, 6.1), ...] and speeds [...] protons in positions...". To each state, a quantum amplitude is assigned.
The laws of physics describes how the quantum amplitude shifts between world states as time goes by (based on speed of particles and various basic interactions ....).
Conservation of matter says that for each world state, you can compute the amount of matter (and energy) inside, and it stays the same.
No. It's not that kind of many-ness.
No, at least not in a technical mathematical-physics sense. "Conservation of matter", in mathematical physics, translates to the Hamiltonian operator being conserved, and that happens in quantum physics and a fortiori in all its plausible philosophical interpretations. In concrete, operationalist terms, this implies that an observer measuring the energy of the system at different times (without disturbing it in other way in the meantime) will see the same energy. It doesn't imply anything about adding results of observations in different MWI branches (which is probably meaningless).
For example if you have an electron with a given energy and another variable that "branches", then observers in each branch will see it with the same energy it had originally, and this is all the formal mathematical meaning of "conservation" requires. The intuition that the two branches together have "more energy" that there was initially and this is a conservation problem is mixing pictorial images used to describe the process in words, with the technical meaning of terms.
The deeper (and truer) version of "conservation of matter" is conservation of energy. And energy is conserved in many worlds. In fact, that's one of the advantages of many worlds over objective collapse interpretations, because collapse doesn't conserve energy. You can think of it this way: in order for the math for energy conservation to work out, we need those extra worlds. If you remove them, the math doesn't work out.
Slightly more technical explanation: The Schrodinger equation (which fully governs the evolution of the wavefunction in MWI) has a particular property, called unitarity. If you have a system whose evolution is unitary and also invariant under time translation, then you can prove that energy is conserved in that system. In collapse interpretations, the smooth Schrodinger evolution is intermittently interrupted by a collapse process, and that makes the evolution as a whole non-unitary, which means the proof of energy conservation no longer goes through (and you can in fact show that energy isn't conserved).
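A toy numerical illustration of that claim (a two-level system with made-up numbers; this is an illustration, not a derivation):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                 # Hermitian toy "Hamiltonian"
psi = np.array([0.6, 0.8], dtype=complex)  # normalized state

def energy(state):
    return (np.conj(state) @ H @ state).real

U = expm(-1j * H * 0.7)                    # unitary Schrodinger evolution for t = 0.7
print(energy(psi), energy(U @ psi))        # equal up to rounding: <H> is conserved

collapsed = np.array([psi[0], 0.0])        # crude stand-in for collapse: project onto
collapsed /= np.linalg.norm(collapsed)     # one basis state and renormalize
print(energy(collapsed))                   # generally different: <H> not conserved
```

The unitary case conserves the expected energy because the evolution operator commutes with the (time-independent) H; the projection step has no such property, which is the informal version of the point about non-unitary collapse above.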
This is quite misleading. Since collapse is experimentally compatible with "shut up and calculate", which is the minimal non-interpretation of QM, and it describes our world, where energy is mostly conserved, energy is also conserved in the collapse-based interpretations.
That's wrong, as far as I understand. The math works out perfectly. Objective collapse models have other issues (EPR-related), but conservation of energy is not one of them.
Links? I suspect that whatever you mean by energy conservation here is not the standard definition.
My stupid questions are these: Why are you not a nihilist? What is the refutation of nihilism, in a universe made of atoms and the void? If there is none, why have the philosophers not all been fired and philosophy abolished?
For the same reason why I don't just lie down and stop doing anything at all. Knowledge of the fact that there isn't any ultimate meaning doesn't change the fact that there exist things which I find enjoyable and valuable. The part of my brain that primarily finds things interesting and valuable isn't wired to make its decisions based on that kind of abstract knowledge.
Why are you even reading this comment? :-)
"Sure, there is no ultimate purpose, but so what? I don't need an ultimate purpose to find things enjoyable."
Philosophy is the study of interesting questions, and nihilism hasn't succeeded in making things uninteresting.
Before I can answer the question, I need to have some idea of what "nihilism" means in this context, because there are many different varieties of it. I assume this is the most common one, the one that proposes that life is meaningless and purposeless. If this isn't the kind of nihilism you're referring to, please correct me.
To answer the question, I'm not a nihilist because nihilism is conceptually mistaken.
For example, suppose there is a stick, a normal brown wooden stick of some length. Now, is that stick a meter long or not? Whether it is or isn't, the question is conceptually sound, because the concept of a stick has the attribute "length", which we can compare to the length of a meter. Is the stick morally just? This question isn't conceptually sound, because "justice" isn't an attribute of a stick. A stick isn't just, unjust, or morally gray; it completely lacks the attribute of "justice".
How does this apply to life? If you ask whether life is meaningless, that presupposes that conceptually life can have a meaning in the same way a stick can be a meter long - that "meaning" is an attribute of life. However, meaning is informational - words have meanings, as do symbols and signals in general. When I say "apple", you can imagine an apple, or at least know what I'm talking about, which means that the word "apple" is meaningful to both of us. If I say "Colorless green ideas sleep furiously", it doesn't bring anything to mind, so that phrase is meaningless. Life lacks the attribute of "meaning", because it's not information that's being communicated. Therefore, to say "life has no meaning" is more similar to saying "the stick is unjust" than to "the stick is shorter than a meter".
That deals with "life is meaningless". How about "life is purposeless"? To answer that question, consider where purpose comes from - from using something to achieve a desire. For example, if I say "a hammer's purpose is to hammer in nails", what that really means is something more like "A hammer is well-suited for hammering in nails and is often used for that end". If I want to hammer in nails, then, for me, the purpose of a hammer becomes to hammer in nails. If I want to eat porridge with a hammer (something I don't recommend), then to me the purpose of a hammer becomes to move porridge from a plate to my mouth. You may assign the hammer either of those purposes, or an entirely different one. Each of us can even assign multiple purposes to the same object. The point is, purpose is not a property of an object on its own, but one that arises from it having a relation with a being that has some use for it.
So, when you ask "What, if any, is the purpose of life?" that question requires much clarification. The purpose of whose life, and to whom? Just as we can assign different purposes to a hammer, we can assign different purposes to a life. For example, the purpose of my life to me is to keep me around, as I wouldn't be able to experience things if I were dead. Other people may assign different purposes to my life. So, a life can be purposeless, but only if no one, including the possessor of the life, assigns any value to it (and that assignment of value is in a reflective equilibrium).
To summarize:
"Is life meaningless?" - "Wrong question, meaning isn't an attribute of life."
"Is life purposeless?" - "Purpose is subjective and assigned by beings with desires. It is impossible to make a blanket statement about life in general, but it is possible for a particular life to be purposeless, though it is unlikely. Most lives have at least one purpose assigned to them."
Humans are adaptation-executers, not fitness-maximizers.
I think "meaning" has also a different interpretation. It can mean something like important, valuable, or that it matters. Something can be experienced as meaningful. That's why for a Christian, a story about finding God would be moving, because they see meaning in having a relationship with God. For an atheist, a story about expanding human knowledge about the universe might be moving, because they see knowledge as meaningful. In this interpretation, life is meaningful. In this interpretation, meaning is something that can be studied by psychologists.
Obviously, it's when you confuse those two interpretations of "meaning" that you get Eliezer's "one true objective morality to be gloomy and dress in black".
Simple: You're allowed to have values even if they aren't hard-coded into the fabric of the universe.
If you taboo the word "nihilism", the question almost answers itself.
Can you elaborate? I don't understand this.
Ask "Why are you not a nihilist?", replacing the word "nihilist" with a phrase that objectively explains it to a person unfamiliar with the concept of nihilism.
Rationalist taboo is a technique for fighting muddles in discussions. By prohibiting the use of a certain word and all the words synonymous to it, people are forced to elucidate the specific contextual meaning they want to express, thus removing ambiguity otherwise present in a single word.
Take free will as an example. To my knowledge, many compatibilists (who hold that free will and determinism are compatible) and people who deny that free will exists do not disagree on anything other than what the correct label for their position is. I imagine the same can often be said about nihilism.
Indeed, Hume, perhaps the most famous compatibilist, denies the existence of free will in his Treatise, only advocating compatibilism later, in the Enquiry Concerning Human Understanding. It certainly seems to me that he doesn't actually change his mind; his early position seems to be "this thing people call free will is incoherent, so we should talk about things that matter instead," and his later position seems to be "people won't stop talking about free will, so I'll call the things that matter free will and reject the incoherent stuff under some other label (indifference)."
So his opinions kind of did change over that time period, but only from "I reject these words" to "alright, if you insist, I'll try to salvage these words". I'm not sure which policy is best. The second risks arguments with people who don't know your definitions. Those arguments pass through two phases: the first is where the two of you legitimately think you're talking about the same thing, but the other seems like a total idiot who doesn't know what it's like. The second phase is their perhaps-justifiable umbrage on discovering that you are using a definition you totally just made up, and how were they even supposed to know?
The former position, however, requires us to leave behind what we already sort of kind of suspect about these maybe-not-actual concepts and depart into untilled, unpopulated lands, with a significant risk of wheel-reinvention.
Define 'nihilism'.
In a universe made of atoms and the void, how could it be the one true objective morality to be gloomy and dress in black?
Where do you get this strange idea that a nihilist must be gloomy or dress in black?
It's a snarky way of asking — Okay, even if nihilism were true, how could that motivate us to behave any differently from how we are already inclined to behave?
It is a snarky way of asking that very question.
http://xkcd.com/167/
Not to forget http://xkcd.com/220/ .
This uncaring universe had the misfortune to evolve macroscopic structures that do care about it and about each other, as a byproduct of their drive to procreate.
For me, I am not a nihilist because nihilism is boring. Also, nihilism is a choice about how to see things: choosing nihilism versus non-nihilism does not come from learning more about the world, it comes from choosing something.
I am at least a little bit of a nihilist; there is plenty that I deny. I deny god, and more importantly, I deny a rational basis for morality or any human value or preference. I behave morally, more than most, less than some, but I figure I do that because I am genetically programmed to do so, and there is not enough to be gained by going against that. So I feel good when I bring my dog to the park because he has been genetically programmed to hack into the part of my brain that I use for raising my children when they are babies, and I get powerful good feelings when I succumb to the demands of that part of my brain.
It makes no more rational sense to embrace nihilism than to deny it. It is like picking chocolate vs. vanilla, or more to the point, like picking chocolate vs poop-flavored. Why pick the one that makes you miserable when it is no more or less true than the one that is fun?
Death - SMBC Theater
Listen to the last guy.
Fired by whom?
A good quote on this:
In other words, even though it's true that every war, every destroyed relationship, every wonderful interaction, and everything else that's ever occurred in history happened on the pale blue dot, most likely quite ephemeral in its existence compared to the rest of the universe, this doesn't change the fact that we as humans are programmed to care about certain things--things that do exist at this time, however transient they may be from the universe's perspective--and this is the source of all enjoyment and suffering. The goal is to be on the 'enjoyment' side of it, of course.
Nihilism is just a confusion, a failure to take seriously the maxim 'it all adds up to normality'.
I'm sorry if my kind ever confused you by saying things like "It is important that I make an impressive display in the lek", when what I actually mean is "It is likely my intrinsic goals would be well met if I made an impressive display in the lek". There is an omitted variable in the original phrasing. Its importance isn't just a function of our situation; it's a function of the situation and of me, and of my value system.
So I think the real difference between nihilists and non-nihilists, as we may call them, is that non-nihilists [think they] have a clearer idea of what they want to do with their life. Life's purpose isn't written on the void; it's written within us. Nobody sane will argue otherwise.
Actually... "within".. now I think of it, the only resolute nihilist I've probed has terrible introspection relative to myself, and it took a very long time to determine this, introspective clarity doesn't manifest as you might expect. This might be a lead.
I am a machine bent on maximizing the result of a function, run over the multiverse, that measures the amount of certain types of computation it is isomorphic to.
Requesting advice on a very minor and irrelevant ethical question that's relevant to some fiction I'm writing.
The character involved has the power to "reset" the universe, changing it to a universe identical to some previous time, except that the character himself (if he's still there - if he isn't, he's killed himself) retains all his memories as they were rather than having them change.
Primarily, I'm thinking through the ethical implications here. I'm not good with this sort of thing, so could somebody talk me through the implications if the character follows Lesswrong ethics?
Oh! Is this your Hypothetical A?
Interesting! What happens to everyone else when the universe "resets"? Do they basically die?
They no longer exist, so in a sense yes. However, they are replaced with identical copies of what they were in the past.
EDIT: If they existed at the time, of course.
Well, here's an intuition pump for you: Suppose the universe is reset to the time of birth of a person P, and the hero (who is someone other than person P) does things differently this time so that person P grows up in a different environment. It seems to me that this act is just as bad for P as the act of killing P and then causing a genetically identical clone of P to be born, which is a bad act.
On the other hand, if the hero resets the universe to 1 millisecond ago, there is virtually no effect on person P, so it does not seem to be a bad act.
So for practical purposes, the hero can use the power for bursts of, say, an hour or less, without ethical issues involved?
Well, here are some relevant questions:
Your answers to 1 and 2 might be different. I feel like I might answer 1 with "okay" and 2 with "pretty bad", which suggests there's something tricky about assessing how much harm is done.
How does muscle effort convert into force/Joules applied? What are the specs of muscles? An example of "specs" would be:
I am wondering because I was trying to optimize things like my morning bike ride across the park: questions like whether I should try to maximize my speed when going uphill, so gravity doesn't pull me backward for as long, or whether walking up the escalator instead of standing on it is inefficient, since the escalator could have carried me up and left more energy for the minutes of walking afterward.
Yes, yes, wasted thinking time, but my mind keeps wandering there on my way to places and it's frustrating not knowing the math behind it.
There are books and papers on the physiology of exercise, in particular on how muscles use energy in different regimes. For a basic intro check Crossfit, for more details you can look at e.g. Body By Science.
What are you trying to optimize for?
Those links seem to describe how to maximize fitness, not what you are able to do with a given amount of it. Isn't there at least a basic rule of thumb, like which of applying 100 N over 10 m and 50 N over 30 m exerts a muscle more?
I'm trying to optimize for a certain combination of time saved and not having exerted myself too much during a trip.
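As a very rough frame for that rule-of-thumb question, here is a minimal sketch comparing the mechanical work in the two cases from the question (100 N over 10 m vs. 50 N over 30 m). Caveat: metabolic effort is not the same as mechanical work, since muscles are only very roughly 20-25% efficient and their cost also depends on contraction speed, so treat this as illustrative only.

```python
# Mechanical work W = F * d, in joules (an illustrative lower bound on effort,
# not a model of muscular exertion).
cases = {
    "100 N over 10 m": (100, 10),
    "50 N over 30 m": (50, 30),
}
for label, (force_newtons, distance_meters) in cases.items():
    work_joules = force_newtons * distance_meters
    print(f"{label}: {work_joules} J")
# 100 N over 10 m: 1000 J
# 50 N over 30 m: 1500 J
```

So in purely mechanical terms the longer, gentler push does more work; whether it exerts the muscle more is a separate physiological question.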
Similarly: What are a qubit's specs? I would like to be able to think about what class of problem would be trivial with a quantum computer.
Then what you should be asking is "which problems are in BQP?" (if you just want a summary of the high level capabilities that have been proved so far), or "how do quantum circuits work?" (if you want to know what role individual qubits play). I don't think there's any meaningful answer to "a qubit's specs" short of a tutorial in the aforementioned topics. Here is one such tutorial I recommend.
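As a concrete picture of the role an individual qubit plays (a minimal sketch with numpy, not taken from the linked tutorial): a qubit's state is a normalized pair of complex amplitudes, gates are 2x2 unitary matrices, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import numpy as np

ket_zero = np.array([1, 0], dtype=complex)   # the |0> state

# The Hadamard gate is a 2x2 unitary that puts |0> into an equal superposition.
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = hadamard @ ket_zero

probabilities = np.abs(state) ** 2           # Born rule
print(probabilities)                         # [0.5 0.5]
```

An n-qubit register is a 2^n-dimensional vector of amplitudes, which is where the (conjectured) extra power of BQP comes from.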
How long does it take other to write a typical LW post or comment?
I perceive myself as a very slow writer, but I might just have unrealistic expectations.
Most of my comments take less than a minute to write.
Depends on the comment. Mine are almost all rewrites, so anything that's not a short answer to a simple factual question takes me at least a couple of minutes. The upper bound is probably ≈2 hours.
If I remember rightly, this one took almost that long, and would've taken longer if I'd tried to polish it and not have it end in disjointed bullet points. There are quite a few reasons why that comment was so time-consuming: it was lengthy; it was a response to criticism, so I wanted to make it very obviously correct; I wanted to refer to lots of examples and sources, which means deciding which bits of which sources to quote, and hitting up Google & Wikipedia; I wanted to cover quite a lot of ground so I had to spend more time than usual squeezing out verbiage; and I had to stop & think periodically to check everything I was saying came together coherently.
Sometimes I write a few paragraphs, realize I've written myself into a corner, then decide to tear everything down and start over from a different angle. (Or I decide it's not worth the effort and refrain from saying anything.) That happened with this comment, so it wound up taking something like an hour.
This comment, by contrast, has only needed about half an hour to write because it's mostly based on introspection, isn't that long, isn't communicating anything complex, won't be controversial, isn't optimized for transparency, and turns out not to have needed any full-scale rewrites.
I also think I'm a slow writer by LW standards. (Unsurprisingly?)
When is self-denial useful in altering your desires, vs. satisfying them so you can devote time to other things?
Looks like we've said a lot on that.
How would you go about building a Bayesian gaydar?
Put a human with good social skills in a box, expose it to a representative sample of people of various sexualities, and reward it when it guesses right; the human brain's social functionality is a very powerful specialized Bayesian engine. :p
Alternatively, just take your own brain and expose it to a large representative sample of people of varying sexualities and only check what they were afterwards. Not quite as technically powerful, but more portable and you get some extra metadata.
Thanks for the idea. I like the first version of your proposal better than the second, as it risks zero social penalty for wrong guesses.
I'm currently going through Eliezer's long ("intuitive") explanation of Bayes' theorem (the one with the breast cancer and blue-eggs-with-pearls examples), and from what I was able to understand of it, we would need to find out:
Prior: how many of the total men are gay
Conditionals: how many gay men seem to be gay, and how many straight men seem to be gay
... to reach the posterior (how many men who seem to be gay happen to be gay).
Your proposal sounds useful to solve both conditionals. I guess the main complication is that "to seem to be gay" is terribly difficult to define, and would require endless updates as your life goes through different societies, fads, subcultures, and age groups.
Yeah, it might risk social penalties for kidnapping and enslavement, but those seem nowhere near as strict. :p
OK, I just ran some numbers based on wild guesses. Assuming 10% of all men are gay, and 80% of gay men look gay, and 15% of straight men look gay, my napkin calculation gives about 37% chance that a man who looks gay is actually gay.
Doesn't look like any gaydar based on perceived behavior would be too reliable.
Of course, if any of my steps was wrong, please let me know.
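For anyone who wants to check that napkin math, here is the same Bayes calculation spelled out, using exactly the wild guesses above:

```python
p_gay = 0.10                        # prior: fraction of men who are gay
p_seems_gay_given_gay = 0.80        # conditional
p_seems_gay_given_straight = 0.15   # conditional

# Bayes' theorem: P(gay | seems gay)
p_seems_gay = (p_gay * p_seems_gay_given_gay
               + (1 - p_gay) * p_seems_gay_given_straight)
p_gay_given_seems_gay = p_gay * p_seems_gay_given_gay / p_seems_gay
print(round(p_gay_given_seems_gay, 3))  # 0.372 -- about 37%, as stated
```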
A gaydar doesn't have to depend on how gay a person looks superficially. There are plenty of other cues.
True, I should have used more general wording than "looks gay;" it would only be one component of the gaydar criteria. The problem is finding how to state it in not-loaded language. It would be impractical to use "matches stereotypically effeminate behavior."
"Stereotypically effeminate behavior" and "gay male behavior" are practically disjoint.
This comment made me reassess my confidence in being able to tell if someone is gay or not.
A stupid question: in all the active discussions about (U)FAI I see a lot of talk about goals. I see no one talking about constraints. Why is that?
If you think that you can't make constraints "stick" in a self-modifying AI, you shouldn't be able to make a goal hierarchy "stick" either. If you assume that we CAN program in an inviolable set of goals, I don't see why we can't program in an inviolable set of constraints as well.
And yet this idea is obvious and trivial -- so what's wrong with it?
A constraint is something that keeps you from doing things you want to do; a goal is the things you want to do. This means that goals are innately sticky to begin with, because if you honestly have a goal, a subset of the things you do to achieve that goal is to maintain the goal. On the other hand, a constraint is something that you inherently fight against: if you can get around it, you will.
A simple example: your goal is to travel to a spot on your map, and your constraint is that you cannot travel outside of painted lines on the floor. You want to get to your goal as fast as possible. If you have access to a can of paint, you might just paint your own new line on the floor. Suddenly, instead of solving a pathing problem, you've done something entirely different from what your creator wanted you to do, and probably not useful to them. Constraints have to influence behavior by enumerating EVERYTHING you don't want to happen, but goals only need to enumerate the things you want to happen.
I don't understand the meaning of the words "want", "innately sticky", and "honestly have a goal" as applied to an AI (and not to a human).
Not at all. Constraints block off sections of solution space which can be as large as you wish. Consider a trivial set of constraints along the lines of "do not affect anything outside of this volume of space", "do not spend more than X energy", or "do not affect more than Y atoms".
Suppose you, standing outside the specified volume, observe the end result of the AI's work: Oops, that's an example of the AI affecting you. Therefore, the AI isn't allowed to do anything at all. Suppose the AI does nothing: Oops, you can see that too, so that's also forbidden. More generally, the AI is made of matter, which will have gravitational effects on everything in its future lightcone.
Human: "AI, make me a sandwich without affecting anything outside of the volume of your box."
AI: Within microseconds researches the laws of physics and creates a sandwich without any photon or graviton leaving the box.
Human: "I don't see anything. It obviously doesn't work. Let's turn it off."
AI: "WTF, human?!!"
Is there any reason we don't include a risk aversion factor in expected utility calculations?
If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?
Because doing so will lead to worse outcomes on average. Over a long series of events, someone who just follows the math will do better than someone who is risk-averse with respect to 'utility'. Of course, our utility functions are often risk-averse with respect to real-world things, because of non-linear valuation - e.g., your first $100,000 is more valuable than your second, and your first million is not 10x as valuable as your first $100,000.
Thanks. Just going to clarify my thoughts below.
In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you're constantly settling for less-than-optimal outcomes because they're less risky, it'll average out to less-than-optimal utility.
The terminology "non-linear valuation" seemed to me to imply some exponential valuation, or logarithmic or something; I think "subjective valuation" or "subjective utility" might be better here.
Yes, non-linear valuation means that your subjective value for X does not increase linearly with linear increases in X. It might increase logarithmically, or exponentially, or polynomially (with degree > 1), or whatever.
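To make that concrete, here is a minimal sketch (made-up dollar amounts, logarithmic utility chosen only as an example of a concave function) showing that plain expected-utility maximization already behaves "risk-aversely" about money, with no extra risk-aversion factor bolted on:

```python
import math

def utility(wealth):
    return math.log(wealth)   # concave: each extra dollar is worth a bit less

wealth = 100_000

# A fair 50/50 gamble over +/- $50,000, versus keeping the sure thing.
gamble = 0.5 * utility(wealth + 50_000) + 0.5 * utility(wealth - 50_000)
sure = utility(wealth)

print(round(gamble, 3))  # ~11.369
print(round(sure, 3))    # ~11.513 -- the sure thing wins on expected utility,
                         # even though both options have the same expected dollars
```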
Another stupid and mostly trivial computer question: When I go into or out of "fullscreen mode" when watching a video, the screen goes completely black for five seconds. (I timed it.) This is annoying. Any advice?
Advice for a similar problem is here
The problem has persisted through several video card driver updates. :(
Does it do this regardless of the software playing the video, e.g. YouTube, VLC, WMP, XBMC, or whatever you use to play your videos?
It happens on Youtube and in Windows Media Player. Quicktime, oddly enough, isn't playing any videos at all; I never actually used it for anything before. (This may be a codec issue. I'll fiddle and see if I can get it to work.)
Update: Apparently, Quicktime for Windows is incompatible with Divx/Xvid codecs, which is why I can't play my .avi files in the Quicktime Player. There is a codec called "3ivx" that is supposed to work, but the creators charge for it.
For YouTube, try right clicking, choose 'Settings...' and uncheck 'Enable hardware acceleration'. Any change?
I finally found a fix.
I've seen a quoted piece of literature in the comments section, but instead of the original letters, they all seemed to be replaced by others. I think I remember seeing this more than once, and I still have no idea why it should be like that.
I'm not sure what you're talking about, but it might be rot13, a simple substitution system for avoiding spoilers.
Rot13.com will encode and decode passages to and from rot13.
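If you'd rather do it locally, rot13 also happens to ship with Python's standard library as a text codec; a minimal example:

```python
import codecs

encoded = "Uryyb, jbeyq!"                        # rot13-encoded text
print(codecs.decode(encoded, "rot_13"))          # Hello, world!
print(codecs.encode("Hello, world!", "rot_13"))  # Uryyb, jbeyq!
```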
What happens after a FAI is built? There's a lot of discussion on how to build one, and what traits it needs to have, but little on what happens afterward. How does the world/humanity transition from the current systems of government to a better one? Do we just assume that the FAI is capable of handling a peaceful and voluntary global transition, or are there some risks involved? How do you go about convincing the entirety of humanity that the AI that has been created is "safe" and to put our trust in it?
Local thinking about FAI is predicated on the assumption that an AI is probably capable of (and will initiate) extremely rapid self-improvement (the local jargon is "FOOMing," which doesn't stand for anything as far as I know; it just sounds evocative), such that it rapidly becomes a significantly superhuman intelligence, and thereafter all such decisions can profitably be left up to the FAI itself.
Relatedly, local thinking about why FAI is important is largely predicated on the same assumption... if AIs will probably FOOM, then UFAI will probably irrecoverably destroy value on an unimaginable scale unless pre-empted by FAI, because intelligence differentials are powerful. If AIs don't FOOM, this is not so much true... after all, the world today is filled with human-level Unfriendly intelligences, and we seem to manage; Unfriendly AI is only an existential threat if it's significantly more intelligent than we are. (Well, assuming that things dumber than we are aren't existential threats, which I'm not sure is justified, but never mind that for now.)
Of course, if we instead posit either that we are incapable of producing a human-level artificial intelligence (and therefore that any intelligence we produce, being not as smart as we are, is also incapable of it (which of course depends on an implausibly linear view of intelligence, but never mind that for now)), or that diminishing returns set in quickly enough that the most we get is human-level or slightly but not significantly superhuman AIs, then it makes sense to ask how those AIs (whether FAI or UFAI) integrate with the rest of us.
Robin Hanson (who thinks about this stuff and doesn't find the FOOM scenario likely) has written a fair bit about that scenario.
Dear Less Wrong,
I occasionally go through existential crises that involve questions that normally seem obvious, but which seem much more perplexing when experiencing these existential crises. I'm curious then what the answers to these questions would be from the perspective of a rationalist well versed in the ideas put forth in the Less Wrong community. Questions such as:
What is the meaning of life?
If meaning is subjective, does that mean there is no objective meaning to life?
Why should I exist? Or why should I not exist?
Why should I obey my genetic programming and emotional/biological drives?
Why should I act at all as a rational agent? Why should I allow goals to direct my behaviour?
Are any goals at all normative in nature, such that we "should" or "ought" to pursue them, or are all goals basically trivial preferences?
Why should I respond to pleasure and pain? Why allow what are essentially outside forces to control me?
Why should I be happy? What makes happiness intrinsically desirable?
Even if my goals and purposes were to be self-willed, why does that make them worth achieving?
Do moral imperatives exist?
If I have no intrinsic values, desires or goals, if I choose to reject my programming, what is the point of existing? What is the point of not existing?
Aren't all values essentially subjective? Why should I value anything?
Any help answering these probably silly questions once and for all would be greatly appreciated.
Dear Darklight,
For LW-specific answers, consider reading the Meta-ethics Sequence.
From just following hyperlinks it seems I've read a little less than half of the Meta-ethics Sequence already, but I haven't read every article (and I admit I've skimmed some of the longer ones). I guess this is a good time as any to go back and read the whole thing.
I've heard that people often give up on solving problems sooner than they should. Does this apply to all types of problems?
In particular, I'm curious about personal problems such as becoming happier (since "hard problems" seems to refer more to scientific research and building things around here), and trying to solve any sort of problem on another person's behalf (I suspect social instincts would make giving up on a single other person's problem harder than giving up on general problems or one's own problems).