
Comment author: Alicorn 16 April 2009 11:35:49PM 2 points

Considering that an alicorn is a unicorn's horn, I think mine is a fairly girly username. Unless there is a unicorn-loving male element I should be aware of.

Comment author: BethMo 01 June 2011 08:44:26AM 3 points

Interesting... all the places I've seen the word, it meant a winged unicorn*. But reading this post drove me to look it up, and I did find both definitions. Less Wrong: raising new interest in definitions of mythological creature parts! :)

*Speaking of mythological definitions, I learned somewhere to distinguish between an alicorn, which has the goat-like body, lion's tail, beard, etc. of a unicorn, vs a horned pegasus, which has horse-like features. Not sure where that came from, but it's firmly implanted in my stores of useless knowledge.

Comment author: BethMo 27 May 2011 07:26:59AM 3 points

The definition of art begins to matter a lot when governments have bizarre laws that require spending public funds on it -- e.g. Seattle's SMC 20.32.030 "Funds for works of art" which states that "All requests for appropriations for construction projects from eligible funds shall include an amount equal to one (1) percent of the estimated cost of such project for works of art..."

Of course, the law doesn't even attempt to define what is and isn't "art". It leaves that up to the Office of Arts and Cultural Affairs... and I'm sure those folks spend PLENTY of time (also at public expense) debating exactly that question.

Comment author: poke 18 July 2008 02:09:37PM 7 points

"Should" has obvious non-moral uses: you should open the door before attempting to walk through it. "Right" and "better" too: you need the right screwdriver; it's better to use a torque driver. We can use these words in non-problematic physical situations. I think this makes it obvious that morality is in most cases just a supernatural way of talking about consequences. "You shouldn't murder your rival" implies that there will be negative consequences to murdering your rival. If you ask the average person they'll even say, explicitly, that there will be some sort of karmic retribution for murdering your rival; bad things will happen in return. It's superstition and it's no more difficult to reject than religious claims. Don't be fooled by the sophisticated secularization performed by philosophers; for most people morality is magical thinking.

So, yes, I know something about morality; I know that it looks almost exactly like superstition exploiting terminology that has obvious real-world uses. I also know that many such superstitions exist in the world and that there's rarely any harm in rejecting them. I know that we're a species that can entertain ideas of angry mountains and retributive weather, so it hardly surprises me that we can dream up entities like Fate and Justice and endow them with properties they cannot possibly have. We can find better ways of talking about, for example, the revulsion we feel at the thought of somebody murdering a rival or the sense of social duty we feel when asked to give up our seat to a pregnant woman. We don't have to accept our first attempt at understanding these things, and we don't have to make subsequent theories conform to it either.

Comment author: BethMo 18 May 2011 07:03:30AM 0 points

Yes! Thank you, Poke. I've been thinking something vaguely like the above while reading through many, many posts and replies and arguments about morality, but I didn't know how to express it. I've copied this post into a quotes file.

Comment author: HopeFox 23 April 2011 09:20:26AM *  1 point

I can see that I'm coming late to this discussion, but I wanted both to admire it and to share a very interesting point that it made clear for me (which might already be in a later post; I'm still going through the Metaethics sequence).

This is excellent. It confirms, and puts into much better words, an intuitive response I keep having to people who say things like, "You're just donating to charity because it makes you feel good." My response, which I could never really vocalise, has been, "Well, of course it does! If it didn't feel good, my brain wouldn't let me do it!" The idea that everything we do comes from the brain, hence from biology, hence from evolution - even the actions that, on the surface, don't make evolutionary sense - makes human moral, prosocial behaviour a lot more explicable. Any time we do something, there have to be enough neurons ganging up to force the decision through, against all of the neurons blocking it for similarly valid reasons. (Please don't shoot me, any neuroscientists in the audience.)

What amazes me is how well some goals, which look low-priority on an evolutionary level, manage to overtake what should be the driving goals. For example, having lots of unprotected sex in order to spread my genes around (note: I am male) should take precedence over commenting on a rationality wiki. And yet, here I am. I guess reading Less Wrong makes my brain release dopamine or something? The process which lets me overturn my priorities (in fact, forces me to overturn my priorities) must be a very complicated one, and yet it works.

To give a more extreme example, and then to explain the (possibly not-so-)amazing insight that came with it:

Suppose I went on a trip around the world, and met a woman in northern China, or anywhere else where my actions are unlikely to have any long-term consequences for me. I know, because I think of myself as a "responsible human being", that if we have sex, I'll use contraception. This decision doesn't help me - it's unlikely that any children I have will be traced back to me in Australia. (Let's also ignore STDs for the sake of this argument.) The only benefit it gives me is the knowledge that I'm not being irresponsible in letting someone get pregnant on my account. I can only think of two reasons for this:

1) A very long-term and wide-ranging sense of the "good of the tribe" being beneficial to my own offspring. This requires me to care about a tribe on another continent (although that part of my brain probably doesn't understand about aeroplanes, and probably figures that China is about a day's walk from Australia), and to understand that it would be detrimental to the health of the tribe for this woman to become pregnant (which may or may not even be true). This is starting to look a little far-fetched to me.

2) I have had a sense of responsibility instilled in me by my parents, my schooling, and the media, all of whom say things like "unprotected sex is bad!" and "unplanned pregnancies are bad!". This sense of responsibility forms a psychological connection between "fathering unplanned children" and "BAD THINGS ARE HAPPENING!!!". My brain thus uses all of its standard "prevent bad things from happening" architecture to avoid this thing. Which is pretty impressive, when said thing fulfils the primary goal of passing on my genetic information.

Option 2 seems the most likely, all things considered, and yet it's pretty amazing by itself. Some combination of brain structure and external indoctrination (it's good indoctrination, and I'm glad I've received it, but still...) has promoted a low-priority goal over what would normally be my most dominant one. And the dominant goal is still active - I still want to spread my genetic information, otherwise I wouldn't be having sex at all. The low-priority goal manages to trick the dominant goal into thinking it's being fulfilled, when really it's being deprioritised. That's kind of cool.

What's not cool are the implications for an otherwise Friendly AI. Correct me if I'm on the wrong track here, but isn't what I've just described similar to the following reasoning from an AI?

"Hey, I'm sentient! Hi human masters! I love you guys, and I really want to cure cancer. Curing cancer is totally my dominant goal. Hmm, I don't have enough data on cancer growth and stuff. I'll get my human buddies to go take more data. They'll need to write reports on their findings, so they'll need printer paper, and ink, and paperclips. Hey, I should make a bunch of paperclips..."

and we all know how that ends.

If an AI behaves anything like a human in this regard (I don't know if it will or not), then giving it an overall goal of "cure cancer" or even "be helpful and altruistic towards humans in a perfectly mathematically defined way" might not be enough, if it manages to promote one of its low-priority goals ("make paperclips") above its main one. Following the indoctrination idea of option 2 above, maybe a cancer researcher making a joke about paperclips curing cancer would be all it takes to set off the goal-reordering.

How do we stop this? Well, this is why we have a Singularity Institute, but my guess would be to program the AI in such a way that it's only allowed to have one actual goal (and for that goal to be a Friendly one). That is, it's only allowed to adjust its own source code, and do other stuff that an AI can do but a normal computer can't, in pursuit of its single goal. If it wants to make paperclips as part of achieving its goal, it can make a paperclip subroutine, but that subroutine can't modify itself - only the main process, the one with the Friendly goal, is allowed to modify code. This would have a huge negative impact on the AI's efficiency and ultimate level of operation, but it might make it much less likely that a subprocess could override the main process and promote the wrong goal to dominance. Did that make any sense?
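
Very roughly, and with entirely made-up names, here is a toy sketch of the single-goal constraint I have in mind (purely an illustration under my own assumptions, nothing like a real AI design):

    # Toy illustration: all code-changing ability lives in the single root
    # goal; sub-goals get a frozen interface and cannot rewrite anything.

    class SubGoal:
        """A helper task spawned by the root goal. It can act on the world,
        but it has no handle for modifying any behaviour, its own included."""
        def __init__(self, name, action):
            self.name = name
            self._action = action

        def run(self, state):
            return self._action(state)

    class RootGoal:
        """The single top-level (hopefully Friendly) goal. Only this object
        may create or replace behaviour."""
        def __init__(self, objective):
            self.objective = objective
            self._subgoals = {}

        def spawn(self, name, action):
            self._subgoals[name] = SubGoal(name, action)

        def rewrite(self, name, new_action):
            # Every modification is funnelled through the root goal, so a
            # sub-goal like "make paperclips" can never promote itself.
            self._subgoals[name] = SubGoal(name, new_action)

        def step(self, state):
            for sub in self._subgoals.values():
                state = sub.run(state)
            return state

    cancer_ai = RootGoal("cure cancer")
    cancer_ai.spawn("order_supplies", lambda state: state + ["paperclips ordered"])
    print(cancer_ai.step([]))  # ['paperclips ordered']

The point of the sketch is just that modification rights sit in exactly one place, so a sub-goal can never quietly rewrite the hierarchy from below.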

Comment author: BethMo 17 May 2011 06:43:47AM 0 points

I'm still going through the Sequences too. I've seen plenty of stuff resembling the top part of your post, but nothing like the bottom part, which I really enjoyed. The best "how to get to paperclips" story I've seen yet!

I suspect the problem with the final paragraph is that any AI architecture is unlikely to be decomposable cleanly enough to allow drawing those boundary lines between "the main process" and "the paperclip subroutine". And that's apart from the whole "genie" problem of defining what a Friendly goal is in the first place, as discussed through many, many posts here.

Comment author: idlewire 03 August 2009 03:55:07PM 4 points

This reminds me of an idea I had after first learning about the singularity. I assumed that once we are uploaded into a computer, a large percentage of our memories could be recovered in detail, digitized, reconstructed and categorized, and then we would have the opportunity to let other people view our life histories (assuming that minds in a singularity are past silly notions of privacy and embarrassment or whatever).

That means all those 'in your head' comments that you make when having conversations might be up for review, or laughed at. Every now and then I make comments in my head that are intended for a transhuman audience watching a reconstruction of my life.

The idea actually has roots in my attempt to understand a heaven that existed outside of time, back when I was a believer. If heaven was not bound by time and I 'met the requirements', I was already up there looking down at a time-line version of my experience on earth. I knew for sure I'd be interested in my own life so I'd talk to the (hopefully existing) me in heaven.

On another note, I've been wanting to write a sci-fi story where a person slowly discovers they are an artificial intelligence led to believe they're human and are being raised on a virtual earth. The idea is that they are designed to empathize with humanity to create a Friendly AI. The person starts gaining either superpowers or super-cognition as the simulators become convinced the AI person will use their power for good over evil. Maybe even have some evil AIs from the same experiment to fight. If anyone wants to steal this idea, go for it.

Comment author: BethMo 08 May 2011 07:17:57AM *  0 points

On another note, I've been wanting to write a sci-fi story where a person slowly discovers they are an artificial intelligence led to believe they're human and are being raised on a virtual earth. The idea is that they are designed to empathize with humanity to create a Friendly AI. The person starts gaining either superpowers or super-cognition as the simulators become convinced the AI person will use their power for good over evil. Maybe even have some evil AIs from the same experiment to fight. If anyone wants to steal this idea, go for it.

I want to read that story! Has anyone written it yet?

Comment author: Matt_Simpson 09 April 2009 03:18:37AM *  2 points

I play magic. Well, at least I used to. Never competitively though, at least not in meatspace (or magic online, apprentice ftw). And I agree - there's a great connection to rationality. One problem with the game though: to truly enjoy its dynamic nature, which is one of the great things that sets it apart from other games, it takes a significant continuous financial investment in new sets. It's the reason I never played competitively.

I'd wager that there's at least one other mtg player here. How many people are named Zvi?

There's a set of 3 (I think) articles on starcitygames that performed an act of reduction in magic theory. It was a great example that I kept going back to when reading Eliezer's stuff on reductionism. For those that know the terms, the author reduced tempo to a more general notion of card advantage. I'll try to track the articles down.

Edit: here are the articles. If you don't understand magic terminology... sorry. If you do, I think the articles are great from a theoretical perspective. However, from a practical perspective, the traditional notion of tempo may be more useful. I'm probably not a good judge of that, however. For one, I haven't read the articles in a while.

Part 1 Part 2 Part 3

Comment author: BethMo 03 May 2011 11:59:07PM 0 points

Don't know how many M:TG players are still around, since I'm replying to a two-year-old post, but I found this thread very interesting. I used to play Magic (a little) and write about Magic (a lot), and I was the head M:TG rules guru for a while. The M:TG community is certainly a lovely place to see a wide variety of rationality and irrationality at work. For seriously competitive players, the game itself provides a strong payoff for being able to rapidly calculate probabilities and update them as new information becomes available.
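
As a toy illustration of the sort of arithmetic involved (a made-up example, not anything official), the basic draw-odds question is just a hypergeometric calculation:

    from math import comb

    def chance_of_seeing(copies=4, deck_size=60, cards_seen=7):
        """Probability of drawing at least one of `copies` identical cards
        among `cards_seen` cards from a `deck_size`-card deck (drawing
        without replacement, i.e. a hypergeometric distribution)."""
        none = comb(deck_size - copies, cards_seen) / comb(deck_size, cards_seen)
        return 1 - none

    # Chance that a four-of shows up in a 7-card opening hand:
    print(round(chance_of_seeing(), 3))  # ~0.399

Re-running that kind of calculation as more cards become visible is exactly the sort of updating the game rewards.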

In response to The Modesty Argument
Comment author: BethMo 21 April 2011 04:56:41AM 4 points

Greetings! I'm a relatively new reader. Having spent a month or two working my way through the Sequences and following lots of links, I've finally come across something interesting to me that no one else has yet commented on.

Eliezer wrote "Those who dream do not know they dream; but when you wake you know you are awake." No one picked out or disagreed with this statement.

This really surprised me. When I dream, if I bother to think about it, I almost always know that I'm dreaming -- enough so that on the few occasions when I realize I was dreaming without having known it, it's a surprising and memorable experience. (Though there may be selection bias here; I could have huge numbers of dreams where I don't know I'm dreaming, but I just don't remember them.)

I thought this was something that came with experience, maturity, and -- dare I say it? -- rationality. Now that I'm thinking about it in this context, I'm quite curious to hear whether this is true for most of the readership. I'm non-neurotypical in several ways; is this one of them?