Comment author: dlthomas 04 November 2011 04:55:08PM 1 point

I don't think so - I mean, he was lashed to the mast so he couldn't influence the sailing of the ship. And it's not like he could shout orders, what with everyone else's ears plugged.

Comment author: Torvaun 07 November 2011 04:48:12PM 6 points

When he stopped thrashing about trying to free himself so that he could go to the Sirens, the crew could know the danger had passed.

Comment author: Miller 02 May 2011 12:33:50PM 3 points

I would say that the circuits that generate the neuroticism reaction are deep-seated and pretty resistant to training. We can probably develop conscious methodology to understand and work around them, but not train them directly. In general, I think it's one of those evolutionary circuits whose design is a bit outdated: it was mostly tuned for an environment where angering people higher in the social order had much higher stakes for your survival, not for an environment with police and 401(k)s.

Comment author: Torvaun 02 May 2011 03:22:02PM 5 points

Rejection therapy seems to be designed for training the neuroticism reaction. I haven't used it myself, so I might be getting some specifics wrong (including about its efficacy), but one of the methods I've seen is a box of cards with instructions on them: "Before purchasing something, ask for a discount." In my part of the US, at least, haggling is more or less not done. Following the instruction will break the standard social mold, and I'd expect that in most cases you won't get the discount. You would, however, be taking a risk, having it not pay off, and finding the end result underwhelming compared to the social cost anticipated by your neuroticism circuits. I'd imagine having the instruction on a card would also apply pressure to conform to it, à la Milgram. If nothing else, in the long term I'd expect it to give you a lot more evidence to draw from when anticipating the social cost of any given action.

In response to Prices or Bindings?
Comment author: Torvaun 28 March 2011 10:50:06PM 1 point

If ethics must be held to in the face of the annihilation of everything, then I will proudly state that I have no ethics, only value judgments. Would I kill babies? Yes, to save the life of the mother. Would I kill innocents who had helped me? Yes, to save more. As an interesting aside, I would not torture an innocent for 40 years to prevent 3^^^^3 people from getting a speck of dust in their eyes, assuming no further consequences from any of that dust. And I would not walk away from Omelas; I would stay to tear it down.

In response to Feeling Rational
Comment author: Torvaun 21 February 2011 02:11:05PM 0 points

Recently, rape allegations were made against Julian Assange, founder of Wikileaks. Some people in positions of power saw fit to expose identifying personal information about the accusers to the Internet, and therefore the world at large. This resulted in the accusers receiving numerous death threats and other harassment.

When safety can be destroyed by truth, should it be?

In response to Rationalization
Comment author: MoreOn 26 December 2010 07:32:51AM 1 point

Try answering this without any rationalization:

In my middle school science lab, a thermometer showed me that water boiled at 99.5 degrees C and not 100. Why?

In response to comment by MoreOn on Rationalization
Comment author: Torvaun 12 February 2011 04:12:07PM 3 points

My experience leads me to assume that the thermometer was mismarked. My high school chemistry teacher drilled into us that the thermometers we had were all precise, but of varying accuracy. A thermometer might say that water boils at 99.5 C, but if it did, it would also say that water froze at -0.5 C. That said, there are conditions that actually change the temperature at which water boils, so it's possible you were at a lower atmospheric pressure or that the water was contaminated. But given that we have a grand total of one data point, I can't narrow it down to a single answer.
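The precision-vs-accuracy point can be sketched in a few lines: a thermometer with a constant offset is precise (readings are consistent) but inaccurate, and a single known reference point, such as the freezing point of water, is enough to correct it. The `read` function and its -0.5 offset are illustrative assumptions, not a model of any real instrument.

```python
def read(true_temp_c, offset=-0.5):
    """Simulate a precise but miscalibrated thermometer: every reading
    is shifted by the same constant offset."""
    return true_temp_c + offset

# Calibrate against a known reference: water freezes at 0 C (at 1 atm).
offset_estimate = read(0.0) - 0.0   # thermometer shows -0.5, so offset = -0.5

# Apply the correction to the boiling-point reading.
measured_boiling = read(100.0)      # thermometer shows 99.5
corrected = measured_boiling - offset_estimate
print(corrected)  # 100.0
```

The same single-point correction fails if the error is not a constant offset (e.g. a scale error), which is why checking both the freezing and boiling points, as the comment suggests, distinguishes a mismarked thermometer from genuinely unusual conditions.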

Comment author: bigjeff5 29 January 2011 01:28:20AM 0 points

That is a good question, but it doesn't help Warren's reasoning.

His reasoning was not that there was a high probability that they had committed acts of subversion that were undetectable. His reasoning was that the absence of evidence of subversion was itself evidence of future subversion.

This line of reasoning invalidates itself as soon as the first evidence of subversion is discovered, since the entire reason subversion was supposedly imminent was that there was no evidence of subversion.

In its simplest form, Warren was saying: "Because there is no evidence that the ball is blue, the ball is blue."

Comment author: Torvaun 02 February 2011 02:46:09AM 0 points

I don't make any claims about undetected sabotage; I believe it to be statistically meaningless for these purposes. The detection clause was intended to make my statements more precise. Undetectable sabotage only modifies the odds of detectable sabotage, because it's clearly preferable to strike unnoticed. The conditional statement "If the odds are very high..." eliminates all scenarios where those odds are not very high, which brings this down to Warren assuming an ordering factor in the absence of random events. If you'd like to include undetected sabotage, then you also need to consider the odds that untrained saboteurs would be capable of undetectable sabotage.

Warren wasn't saying "Because there is no evidence that the ball is blue, the ball is blue." He was saying "The sun should be in the sky. I cannot see the sun. Therefore, it has been eaten by a dragon." He was wrong: as it turned out, the eclipse was caused by the moon, and the dragon he feared never existed. But if the dragon he predicted did exist, the world might look much like it did at the time of the predictions.

Comment author: Torvaun 18 January 2011 02:17:31AM 0 points

I have to think that there is another question to be considered: what are the odds that Japanese-Americans would commit sabotage we could detect as sabotage? If the odds are very high that detectable sabotage would occur, then the absence of sabotage would be evidence in favor of something preventing sabotage. A conspiracy that collaborates with potential saboteurs and encourages them to wait for the proper time to strike then becomes a reasonable hypothesis, provided such a conspiracy believed that a single, temporally focused spree of sabotage would have greater utility than all the scattered acts of sabotage it forestalls in the meantime.
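The evidential claim above can be made concrete with a small Bayesian update. The three hypotheses and all the numbers below are illustrative assumptions chosen only to show the shape of the argument, not historical estimates: if saboteurs who lack coordination would very probably produce detectable sabotage, then observing none shifts belief away from that hypothesis, while a hold-your-fire conspiracy predicts the same quiet that innocence does.

```python
# Illustrative priors over three mutually exclusive hypotheses.
priors = {
    "no_saboteurs": 0.90,
    "uncoordinated_saboteurs": 0.09,
    "coordinated_conspiracy": 0.01,
}

# P(no detectable sabotage observed | hypothesis), assumed for illustration.
likelihood_quiet = {
    "no_saboteurs": 1.00,
    "uncoordinated_saboteurs": 0.10,  # "odds are very high" sabotage would show
    "coordinated_conspiracy": 0.95,   # conspiracy deliberately waits to strike
}

# Bayes' rule: posterior ∝ prior × likelihood, then normalize.
unnormalized = {h: priors[h] * likelihood_quiet[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in posterior.items():
    print(f"{hypothesis}: {p:.4f}")
```

Under these assumed numbers, observing no sabotage concentrates almost all belief on "no saboteurs"; the conspiracy hypothesis gains only relative to the uncoordinated one, never relative to innocence. That is exactly why Warren's inference was weak: the quiet he cited supports the innocent explanation at least as strongly as the conspiratorial one.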

Comment author: Eliezer_Yudkowsky 09 October 2008 09:31:05AM 15 points

Nominull: Second, you can't possibly have a generally applicable way to force humans to do things. While it is in theory possible that our brains can be tricked into executing arbitrary code over the voice channel, you clearly don't have that ability. If you did, you would never have to worry about finding donors for the Singularity Institute, if nothing else. I can't believe you would use a fully-general mind hack solely to win the AI Box game.

I am once again aghast at the number of readers who automatically assume that I have absolutely no ethics.

Part of the real reason that I wanted to run the original AI-Box Experiment, is that I thought I had an ability that I could never test in real life. Was I really making a sacrifice for my ethics, or just overestimating my own ability? The AI-Box Experiment let me test that.

And part of the reason I halted the Experiments is that by going all-out against someone, I was practicing abilities that I didn't particularly think I should be practicing. It was fun to think in a way I'd never thought before, but that doesn't make it wise.

And the thought also occurred to me that despite the amazingly clever way I'd contrived to create a situation where I could ethically go all-out against someone, they probably didn't really understand that, and there wasn't really informed consent.

McCabe: More importantly, at least in me, that awful tension causes your brain to seize up and start panicking; do you have any suggestions on how to calm down, so one can think clearly?

That part? That part is straightforward. Just take Douglas Adams's Advice. Don't panic.

If you can't do even that one thing that you already know you have to do, you aren't going to have much luck on the extraordinary parts, are you...

Prakash: Don't you think that this need for humans to think this hard and this deep would be lost in a post-singularity world? Imagine, humans plumbing this deep in the concept space of rationality only to create a cause that would make it so that no human need ever think that hard again. Mankind's greatest mental achievement - never to be replicated again, by any human.

Okay, so no one gets their driver's license until they've built their own Friendly AI, without help or instruction manuals. Seems to me like a reasonable test of adolescence.

Comment author: Torvaun 05 December 2010 05:06:45PM 7 points

Hopefully this isn't a violation of the AI Box procedure, but I'm curious if the strategy used would be effective against sociopaths. That is to say, does it rely on emotional manipulation rather than rational arguments?