In response to comment by [deleted] on Most-Moral-Minority Morality
Comment author: Peterdjones 27 June 2011 10:21:01PM *  -1 points [-]

I don't think that's a good analogy. I've never heard of a carnivore who thought meat eating was morally better. Their argument is that meat eating is not so much worse that it becomes an ethical no-no rather than an ethically neutral lifestyle choice (morally level ground).

People can even carry on doing something they think is morally wrong on the excuse of akrasia.

And gay marriage is becoming slowly accepted.

Comment author: anon895 28 June 2011 12:37:14AM 2 points [-]

I've never heard of a carnivore who thought meat eating was morally better.

I suspect that you either haven't looked very hard or very long.

Comment author: XiXiDu 17 June 2011 03:43:29PM *  2 points [-]

The strongest counterargument offered was that a scope-limited AI doesn't stop rogue unfriendly AIs from arising and destroying the world.

I don't quite understand that argument, maybe someone could elaborate.

If there is a rule that says 'optimize X for X seconds', why would an AGI distinguish between 'optimize X' and 'for X seconds'? In other words, why is it assumed that we can succeed in creating a paperclip maximizer that cares strongly enough about the design parameters of paperclips to consume the universe (why would it do that as long as it isn't told to?), but somehow ignores all design parameters that have to do with spatio-temporal scope boundaries or resource limitations?

I see that there is a subset of unfriendly AGI designs that would never halt, or that would destroy humanity while pursuing their goals. But how large is that subset? How many actually halt, or proceed very slowly?

Comment author: anon895 17 June 2011 09:31:02PM 3 points [-]

(I wrote this before seeing timtyler's post.)

If there is a rule that says 'optimize X for X seconds', why would an AGI distinguish between 'optimize X' and 'for X seconds'?

It does seem like you misinterpreted the argument, but one possible failure mode there is if the most effective way to maximize paperclips within the time period is to build paperclip-making von Neumann machines. If it designs the machines from scratch, it won't build a time limit into them, because that wouldn't increase the production of paperclips within the period of time it cares about.
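This point can be made concrete with a toy simulation. Everything below is invented for illustration (the step counts, the one-replicator-per-step rule are arbitrary): a parent AI that only counts paperclips produced before its deadline scores identically whether or not its replicators inherit that deadline, so it has no incentive to copy the limit into them.

```python
def run_world(total_steps, deadline, subagents_inherit_deadline):
    """Toy model: the parent builds one replicator per step until its
    deadline; each replicator makes one paperclip per step. The parent's
    utility function only counts clips made before the deadline."""
    clips_before = 0   # paperclips the parent's utility function counts
    clips_after = 0    # paperclips made after the parent stops caring
    replicators = 0
    for t in range(total_steps):
        if t < deadline:
            replicators += 1          # parent builds a new replicator
        active = replicators
        if subagents_inherit_deadline and t >= deadline:
            active = 0                # copied deadline: replicators halt too
        if t < deadline:
            clips_before += active
        else:
            clips_after += active
    return clips_before, clips_after

# Both designs score the same within the parent's horizon...
limited = run_world(10, 5, subagents_inherit_deadline=True)
unlimited = run_world(10, 5, subagents_inherit_deadline=False)
# ...but only the unlimited replicators keep converting resources afterward.
```

Since `clips_before` is equal in both runs, adding the deadline to the subagents is pure overhead from the parent's point of view; the difference only shows up in `clips_after`, which the parent's utility function never sees.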

Comment author: [deleted] 18 March 2011 01:20:04PM 27 points [-]

A relationship between two rationalists can be much happier and freer of drama. If Eliezer's example isn't clear enough, here's another one.

"I'm worried about X."

Non-rationalist: "I've told you a million times, that's not gonna happen! Why can't you trust me?"

Rationalist: "Ok, let's go to Wikipedia, get some stats, and do the expected value calculation. Let me show you how unlikely this is."

Which conversation ends in a fight? Which conversation ends in both people actually feeling more at ease?
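For what it's worth, the "expected value calculation" in the rationalist's reply can be sketched in a few lines. All numbers here are made up for illustration; the point is only the shape of the comparison, a looked-up base rate against the certain cost of worrying.

```python
def expected_value_check(p_event, cost_if_happens, cost_of_worrying):
    """Compare the expected cost of the feared event X with the
    certain cost of continuing to worry about it."""
    expected_loss = p_event * cost_if_happens
    return expected_loss, cost_of_worrying

expected_loss, worry_cost = expected_value_check(
    p_event=1e-5,            # base rate looked up from published stats
    cost_if_happens=50_000,  # cost (in dollars or utility units) if X occurs
    cost_of_worrying=200,    # certain cost of the worry itself
)
# If expected_loss is far below worry_cost, the worry is the bigger loss.
```

With these invented numbers the expected loss is about 0.5, dwarfed by the cost of the worry itself, which is the reassuring conclusion the dialogue gestures at.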

There are female memes to the effect "Men are endearing fools," and male memes to the effect "Women are beautiful fools." But a fool eventually gets frustrating. It is an incredible relief to meet someone who isn't foolish. "Whoa... you mean you can embrace an idea without being an uncritical fanatic? You mean you can actually make allowances for overconfidence bias, instead of taking reckless gambles? You can listen to the content of what I'm saying instead of the applause lights?" Having a rationalist partner means never having to say "Oh, you wouldn't understand."

Also, on cultishness: I saw an ad the other day for a new book on how to start a green activist organization. How to attract members, get speaking engagements, raise money, build momentum, etc. My first reaction was "Oh, that's nice; I'm sure that book would be handy for environmentalists." Then I thought "If we did half the stuff that tree-hugging college kids do, we'd call it Dark Arts and we'd be terrified of turning into a cult."

Comment author: anon895 18 March 2011 08:46:19PM 2 points [-]

Which conversation ends in a fight? Which conversation ends in both people actually feeling more at ease?

They don't sound meaningfully different to me; you're saying the same thing, just less emotively and more casually.

I saw someone recently suggest saying (in a sympathetic tone) "What are you planning to do?" (possibly preceded by something like "Yeah, I can understand why you would be"). I wouldn't expect good results from it in real life, but I like it anyway (and it might be better than some alternatives).

Comment author: austhinker 14 March 2011 02:14:41PM -1 points [-]

And some people still believe that people choose to be homosexual.

If that were so, why would teenagers commit suicide instead of choosing to be heterosexual?

To me, a gay man is just less competition, and since lots of women are not interested in me anyway, what difference does it make if some of them are gay?

Comment author: anon895 15 March 2011 05:54:34PM -1 points [-]

Inherent flaws of moral codes based on non-deterministic ideas of free will aside, I don't think I've ever seen a version of that argument where the two sides admitted that they were using different definitions of "be homosexual".

Comment author: virtualAdept 04 March 2011 05:58:45AM 13 points [-]

Aside from being socialized to expect to be bad at analytical problems, I'd suggest (from aggregate reading about stereotype threat, feminist issues, and my experiences growing up) that part of the issue is that there's a lot of fear of being seen to try hard and fail. It's perfectly socially acceptable (unfortunately) for a young woman to doubt her own abilities to solve a problem and in so doing, decline to try it. However, if she's seen struggling with something, she's likely to encounter derision, with the implicit or explicit statement that she's reaching out of her depth. A self-effacing attitude, or the semblance of it, is socially necessary, because while young women are allowed to be Smart, they are not allowed to be Arrogant. I can provide references for these points if needed, though I believe it's pretty familiar ground for those at all versed in gender socialization norms.

Into purely personal territory now - take as you will - there was a time (around 4th grade through perhaps 10th) when I was that afraid of failing. If I tried a novel problem (even if no one else understood it), and couldn't immediately figure out what to do to solve it, my (male) peers jumped in with taunts along the lines of "she's not so smart after all." There were several years where it felt like any major failure would utterly ruin my credibility as a Bright Girl. It was far easier to assess the difficulty of a new problem, and quietly decline if I didn't think I could handle it.

Concerning the gender imbalance on the nerd spaces of the internet, I could probably go on all night about it, but I'm about to pass out and start drooling on my keyboard. Maybe I will go on all night about it in a separate post on a separate night.

Comment author: anon895 05 March 2011 10:27:36PM *  1 point [-]

I find that kind of interesting, since my mom's similar behavior comes off as extremely arrogant to me. Electronics and computer software of any kind are the Domain of Men, and any problems she has with them are our responsibility to solve, no matter how many thousands of hours she's been using a particular system and no matter how unfamiliar it is to us. If you try to guide her toward figuring something out herself, she'll eventually grin and throw up her hands and say "Confusing! Confusing!" and repeat the request to just do it for her.

On further thought, it's not strictly about doing things for her: when she wants to know how to do something, she wants specific, step-by-step instructions, without any attempt to explain why those steps work (explaining will immediately trigger "Confusing! Confusing!"); e.g. "How do I check text messages on this phone which I've been using for years and which has simple and clearly labeled menus?"

...I'm probably using a thread as an excuse to vent again, but GIFT.

Comment author: Singuhilarity 12 January 2011 02:08:58PM 5 points [-]

I recently started a theoretically humorous webcomic about the Singularity entitled Singuhilarity. It's poorly drawn and a little rough in places, but it touches on a number of lesswrong-type subjects as well as some pretty standard science fiction tropes.

Here's the first comic and here's the latest.

Comment author: anon895 19 February 2011 11:26:25PM 1 point [-]

Followup to previous comment: I feel like this link from Reddit may apply.

Comment author: Alicorn 08 February 2011 09:01:45PM 7 points [-]

He can only be blackmailed with such photos if he would mind having them displayed to some third party.

Comment author: anon895 09 February 2011 02:39:36AM 2 points [-]

But he might benefit from having her think she's blackmailing him.

In response to Hand vs. Fingers
Comment author: bigjeff5 02 February 2011 02:54:30AM 2 points [-]

People interested in the discussion between Eliezer and Richard might find this Wikipedia article interesting: Depersonalization Disorder

Essentially, people behave as they otherwise would, except they don't have a sense of "self-awareness". That is, they did something, and they know they did something, but it doesn't feel as though it was them who did the thing. Often people feel as though they are automata, pre-programmed to respond to certain stimuli, but that there is no "self" driving them.

The disorder also tends to be accompanied by its inverse, derealization: the individual perceives himself to be real, but nothing external seems real.

This effect can be generated with drugs, and it can also be treated with drugs. This suggests to me that the entire "sense of self" is caused by a chemical interaction within the brain.

In response to comment by bigjeff5 on Hand vs. Fingers
Comment author: anon895 02 February 2011 04:36:11AM *  0 points [-]

Not wanting to open a possibly long article: is that the same thing as dissociation? Is dissociation the symptom and depersonalization a cluster of symptoms that includes it?

Comment author: jacob_cannell 02 February 2011 02:04:38AM 2 points [-]

That could still be a great thing for us provided that current human minds were uploaded into the resulting computronium explosion.

Comment author: anon895 02 February 2011 03:21:37AM 2 points [-]

...which won't happen if the computronium is the most important thing and uploading existing minds would slow it down. The AI might upload some humans to get their cooperation during the early stages of takeoff, but it wouldn't necessarily keep those uploads running once it no longer depended on humans, if the same resources could be used more efficiently for itself.

Comment author: jacob_cannell 01 February 2011 07:31:14AM 0 points [-]

The test isn't about faking human language; it's about using language to probe another mind. Whales and elephants have brains built out of similar quantities of the same cortical circuits, but without a common language, stepping into their minds is very difficult.

What's a better test for AI than the Turing test?

Comment author: anon895 01 February 2011 07:55:34AM 0 points [-]

Possibly relevant: AIXI-style IQ tests.
