Comment author: Eugine_Nier 27 August 2013 05:29:56AM 1 point [-]

NASCAR, or professional wrestling.

Have you actually watched either of these, or are you going by the stereotype that the kind of person who watches them is low status?

Comment author: Martin-2 29 August 2013 02:04:07PM 1 point [-]

I believe that is the point of the exercise.

Comment author: Martin-2 06 August 2013 02:00:35PM 0 points [-]

''unconscious or dimly perceived finagling is probably endemic in science, since scientists are human beings rooted in cultural contexts, not automatons directed toward external truth''

Somehow this post has actually increased my confidence in Gould's claim here.

Comment author: Martin-2 06 August 2013 02:31:22PM 1 point [-]

Further reading suggests Gould is not representative of scientists. My confidence has gone back down.

Comment author: Document 03 August 2013 08:33:36AM 0 points [-]

Do arguments themselves "improve", rather than simply being right or wrong?

Comment author: Martin-2 03 August 2013 09:14:00AM *  3 points [-]

Maybe, since arguments have component parts that can be individually right or wrong; or maybe not, since chains of reasoning rely on every single link; or maybe, since my argument improves (along with my beliefs) as I toss out and replace the old one.

Come to think of it, if "trees grow roots most strongly when wind blows through them" because the trees with weak roots can't survive in those conditions, then this would make a very bad metaphor for people.

Comment author: Document 03 August 2013 02:26:06AM *  5 points [-]

Is that true (for trees or people)?

Edit: For one example, this person currently linked in the sidebar isn't sure.

Comment author: Martin-2 03 August 2013 08:25:13AM 1 point [-]

If this quote were about people improving through adversity I wouldn't have posted it (I also read that article). But I think it's true for arguments. The last sentence does a better job of fitting the character than illuminating the point, so I could have left it out.

Comment author: Martin-2 02 August 2013 08:58:57PM *  0 points [-]

Elayne blinked in shock. “You would have actually done it? Just… left us alone? To fight?”

“Some argued for it,” Haman said.

“I myself took that position,” the woman said. “I made the argument, though I did not truly believe it was right.”

“What?” Loial asked [...] “But why did you-”

“An argument must have opposition if it is to prove itself, my son,” she said. “One who argues truly learns the depth of his commitment through adversity. Did you not learn that trees grow roots most strongly when wind blows through them?”

Covril, The Wheel of Time

Comment author: Martin-2 01 August 2013 10:19:17PM 4 points [-]

It is not July. It is August.

Comment author: George_Weinberg2 05 October 2008 06:06:00PM 5 points [-]

Thank you for a correct statement of the problem which indeed gives the 1/3 answer. Here's the problem I have with the malformed version: I agree that it's reasonable to assume that if the children were a boy and a girl it is equally likely that the parent would say "at least one is a boy" as "at least one is a girl". But I guess you're assuming the parent would say "at least one boy" if both were boys, "at least one girl" if both were girls, and either "at least one boy" or "at least one girl" with equal probability in the one of each case.

That's the simplest set of assumptions consistent with the problem. But the quote itself is inconsistent with the normal rules of social interaction. Saying "at least one is a boy" takes more words to convey less information than saying "both boys" or "one of each". I think it's perfectly reasonable to draw some inference from this violation of normal social rules, although it is not clear to me what inference should be drawn.
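Under the assumptions spelled out above, the two readings of the puzzle actually come apart: conditioning directly on the event "at least one is a boy" (as in the correct statement) gives 1/3, while the model where the parent volunteers the statement gives 1/2. A quick enumeration sketch (the variable names are mine):

```python
from fractions import Fraction

# The four equally likely two-child families (older child listed first).
families = ["BB", "BG", "GB", "GG"]

# Reading 1 (the correct statement): condition directly on the event
# "at least one is a boy", e.g. the parent was asked and answered yes.
with_boy = [f for f in families if "B" in f]
p_both_boys_asked = Fraction(sum(f == "BB" for f in with_boy), len(with_boy))

# Reading 2 (the volunteered statement, under the simplest assumptions):
# the parent says "at least one boy" with probability 1 for BB,
# 1/2 for each mixed family, and 0 for GG.
say_boy = {"BB": Fraction(1), "BG": Fraction(1, 2),
           "GB": Fraction(1, 2), "GG": Fraction(0)}
p_both_boys_volunteered = say_boy["BB"] / sum(say_boy.values())

print(p_both_boys_asked)        # 1/3
print(p_both_boys_volunteered)  # 1/2
```

So even the "simplest set of assumptions" changes the answer, which is one way of seeing why the malformed version is malformed.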

Comment author: Martin-2 17 July 2013 11:36:12PM 3 points [-]

Keep in mind this is a hypothetical character behaving in an unrealistic and contrived manner. If she doesn't heed social norms or effective communication strategies then there's nothing we can infer from those considerations.

Comment author: [deleted] 16 July 2013 02:02:50PM *  3 points [-]

Note: The following post is a cross of humor and seriousness.

After reading another reference to an AI failure, it seems to me that almost every "The AI is an unfriendly failure" story begins with "The Humans are wasting too many resources, which I can more efficiently use for something else."

I felt like I should also consider potential solutions that look at the next type of failure. My initial reasoning is: Assuming that a bunch of AI researchers are determined to avoid that particular failure mode and only that one, they're probably going to run into other failure modes as they attempt (and probably fail) to bypass that.

For instance: AI researchers build an AI that gains utility roughly equal to the square root of (median human profligacy), times human population, times time; is dumb about metaphysics; and has a fixed utility function.

It's not happier if the top human doubles his energy consumption. (Note: Median Human Profligacy.)

It's happier, but not twice as happy, when humans are using twice as many petawatt-hours per year. (Note: Square Root. This also helps keep "one human kills all the other humans from space and sets the earth on fire" from being a good use of energy: that skyrockets the median, but it does not skyrocket the square root of the median nearly as much.)

It's five times as happy if there are five times as many humans, and ten times as happy when humans use the same amount of energy per year for 10 years as opposed to just 1.
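Taken literally, the utility function described above could be sketched like this (the function name and numbers are illustrative, not from the original comment):

```python
import math
import statistics

def ai_utility(energy_use_per_person, years):
    """Hypothetical sketch of the described utility function:
    sqrt(median per-person energy use) * population * time."""
    median_use = statistics.median(energy_use_per_person)
    population = len(energy_use_per_person)
    return math.sqrt(median_use) * population * years

base = ai_utility([4.0] * 1000, years=1)

# The top human doubling consumption leaves the median (and utility) unchanged.
assert ai_utility([4.0] * 999 + [8.0], 1) == base

# Everyone doubling their use multiplies utility by sqrt(2), not 2,
# while five times the population, or ten years instead of one, scale linearly.
```

This makes the three properties above checkable directly: median kills the single-profligate case, the square root dampens across-the-board increases, and population and time enter linearly.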

Dumb about metaphysics is a reference to the following type of AI failure: "I'm not CERTAIN that there are actually billions of humans; we might be in the matrix, and if I don't know that, I don't know if I'm getting utility, so let me computronium up earth really quick just to run some calculations to be sure of what's going on." Assume the AI just disregards those kinds of skeptical hypotheses, because it's dumb about metaphysics. Also assume it can't change its utility function, because that's just too easy to combust.

As I stated, this AI has bunches of failure modes. My question is not "Does it fail?" but "Does it even sound like it avoids having 'eat humans, make computronium' be the most plausible failure? If so, what sounds like a plausible failure?"

Example hypothetical plausible failure: The AI starts murdering environmentalists. It fears that environmentalists will cause an overall degradation in median human energy use, lowering overall AI utility; environmentalists also encourage less population growth, which further degrades AI utility. And while the AI does value the environmentalists' own energy consumption, they're environmentalists, so they have a small energy footprint, and it doesn't value not murdering people in and of itself.

After considering that kind of solution, I went up and changed 'my reasoning' to 'my initial reasoning', because at some point I realized I was just having fun considering this kind of AI failure analysis and had stopped actually trying to make a point. Also, as Failed Utopia 4-2 (http://lesswrong.com/lw/xu/failed_utopia_42/) points out, designing more interesting failures can be fun.

Edit for clarity: I AM NOT IMPLYING THE ABOVE AI IS OR WILL CAUSE A UTOPIA. I don't think it could be read that way, but just in case there are inferential gaps, I should close them.

In response to comment by [deleted] on Open thread, July 16-22, 2013
Comment author: Martin-2 17 July 2013 12:53:21AM 2 points [-]

it seems to me that almost every "The AI is an unfriendly failure" story begins with "The Humans are wasting too many resources, which I can more efficiently use for something else."

Really? I think the one I see most is "I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them". At least in Hollywood. You may be looking in more interesting places.

Per your AI, does it have an obvious incentive to help people below the median energy level?

Comment author: Martin-2 16 July 2013 04:33:24AM *  11 points [-]

Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.

*dun-dun-dun-dun

Steelman that shoddy argument

Mend its faults so they can't be seen

Help that bastard make more sense

A reformulation to see what they mean
