
Comment author: Lumifer 04 August 2014 07:24:49PM 11 points [-]

Why is this so?

Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...

Comment author: faul_sname 07 August 2014 07:05:28AM 0 points [-]

Awfully similar, but not identical.

In the first case, you have independent evidence that the conclusion is false, so you're basically saying "If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments."

In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true. I think it's more likely that there is a flaw in your conclusion that I can't detect than that there is a flaw in the reasoning that led to my conclusion."

The person in the first case is far more likely to answer "I don't know" when asked "So what do you think the real answer is, then?" In our culture (both outside LW and, to a lesser but still significant degree, inside it), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form "If Y is true, how do you explain X?", which is quite common. Unfortunately, that form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.

Rereading your comment, I see that there are two ways to interpret it. The first is "Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion." The second is "Rationalists do not use this form of argument because it is flawed -- they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence." I agree with the first interpretation, but not the second -- that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.

Comment author: philh 02 August 2014 11:02:05AM 2 points [-]

Oh! You're also running your opponent playing a game against MirrorBot, not against TrollBot.

Which I still don't understand... you run SMB versus MB, time limit 10000. SMB runs MB versus MB, time limit 10000. MB versus MB times out, which means SMB runs for > 10000 μs, which means that SMB should time out and you should cooperate.

Meanwhile, SMB runs you versus MB, time limit 10000. You run MB versus MB, time limit 10000. That times out, so you time out, so SMB cooperates.

But you're defecting, which means that you're running SMB versus MB to completion, which seems like it shouldn't happen.
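A minimal sketch of the cascade described above, using System.Timeout from base rather than the tournament's own time combinator (whose exact type is only inferred from the trollBot code quoted further down): if an inner simulation exhausts a 10000 μs budget, any outer run held to the same budget must exhaust its budget as well.

import System.Timeout (timeout)
import Control.Concurrent (threadDelay)

-- Stand-in for the MirrorBot-versus-MirrorBot match: a computation that takes
-- far longer than the 10000 microsecond budget (here it simply sleeps 10 s).
innerSimulation :: IO ()
innerSimulation = threadDelay (10 * 1000 * 1000)

-- Stand-in for SmarterMirrorBot simulating that match under its own 10000 us
-- budget: the inner match times out, so this returns Nothing after ~10000 us.
outerSimulation :: IO (Maybe ())
outerSimulation = timeout 10000 innerSimulation

main :: IO ()
main = do
  -- The outer bot is itself run under a 10000 us budget, so by the time its
  -- inner simulation has timed out, the outer run is out of time as well.
  -- Expected output: Nothing (or Just Nothing right at the boundary), i.e.
  -- the simulating bot also times out -- which is why defection here is odd.
  result <- timeout 10000 outerSimulation
  print result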

Comment author: faul_sname 04 August 2014 12:37:16AM 0 points [-]

That does seem exploitable, if one can figure out exactly what's happening here.

Comment author: philh 01 August 2014 09:23:34PM 2 points [-]

Limit my opponent to 10ms, defect if they go over.

You actually cooperate in this case.

Quick analysis: you're going to defeat CooperateBot (500 points), lose against DefectBot (0 points), and tie against TitForTatBot (250 points from alternating D/C and C/D). Against RandomBot, you are effectively RandomBot yourself, both of you scoring 225 on average.

When you simulate MirrorBot, the infinite recursion makes ver time out, so you cooperate. So MirrorBot cooperates against you as well (300 points). SmarterMirrorBot and JusticeBot both time out as well. SmarterMirrorBot can't work out what you'll do, and cooperates (300 points). JusticeBot may or may not be able to work out what you'll do against CooperateBot, and defects either way (0 points).

But I think TitForTatBot should beat that, at least: 300 against CooperateBot, 99 against DefectBot, 300 against JusticeBot, 223.5 against RandomBot, and all other scores the same.
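A quick sanity check of those totals, assuming the payoff matrix and match length they imply (5 for defecting against a cooperator, 3 for mutual cooperation, 1 for mutual defection, 0 for cooperating against a defector, over 100 rounds; these values are inferred from the numbers above, not taken from the tournament code):

rounds, temptation, reward, punishment, sucker :: Double
rounds     = 100
temptation = 5   -- D vs C
reward     = 3   -- C vs C
punishment = 1   -- D vs D
sucker     = 0   -- C vs D

vsCooperateBot, vsDefectBot, vsTitForTat, vsRandomBot, vsMirrorBot :: Double
vsCooperateBot = rounds * temptation                           -- always defect: 500
vsDefectBot    = rounds * sucker                               -- always cooperate: 0
vsTitForTat    = rounds * (temptation + sucker) / 2            -- alternating D/C and C/D: 250
vsRandomBot    = rounds * (temptation + reward + punishment + sucker) / 4  -- 225 on average
vsMirrorBot    = rounds * reward                               -- mutual cooperation: 300

main :: IO ()
main = print (vsCooperateBot, vsDefectBot, vsTitForTat, vsRandomBot, vsMirrorBot)

TitForTatBot's 99 against DefectBot also fits these assumptions: one exploited round at 0, then 99 rounds of mutual defection at 1 apiece.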

So, I'm puzzled too, if TrollBot is getting the highest score in the first round.

Comment author: faul_sname 02 August 2014 12:31:37AM *  4 points [-]

Limit my opponent to 10ms, defect if they go over.

You actually cooperate in this case.

Whoops. Effect goes away if I fix it, too.
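(For anyone following along: the fix being referred to is presumably the timeout branch of the trollBot code quoted further down, so that it actually defects when the opponent runs over its budget, as the comment claims. That reading is an inference; the corrected version was not posted, but it would presumably look like this:)

trollBot :: Bot
trollBot = Bot run where
  run op hist = do
    simulation <- time 10000 . runBot op mirrorBot $ invert hist
    return (case simulation of
      Nothing -> Defect            -- was Cooperate: a timeout now really means defection
      Just Cooperate -> Defect
      Just Defect -> Cooperate)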

Here are the average results for the first round:

http://pastebin.com/qN5H2A25

For some reason, TrollBot always wins 500 / 0 against SmarterMirrorBot. DefectBot actually beats TrollBot by a narrow margin (1604 - 1575 = 29 points) on average, but there is quite a bit of randomness from RandomBot, so TrollBot often comes out ahead of even DefectBot, and they both come out over 100 points ahead of the next bot (TitForTatBot).

Since I built TrollBot as a sanity check on my actual bot (to make sure my actual bot would defect against TrollBot), I was definitely surprised that TrollBot outperformed not only my attempt at a serious bot, but also most of the other bots... :/

Comment author: faul_sname 01 August 2014 08:08:05PM 2 points [-]

Found something slightly amusing:

-- Simulate my opponent playing a round against me, and do the opposite of
-- whatever my opponent does. Limit my opponent to 10ms, defect if they go
-- over.
trollBot :: Bot
trollBot = Bot run where
  run op hist = do
    simulation <- time 10000 . runBot op mirrorBot $ invert hist
    return (case simulation of
      Nothing -> Cooperate
      Just Cooperate -> Defect
      Just Defect -> Cooperate)

When I enter trollBot into the simulation tournament, it actually ends up doing better than any of the default bots during the first round, pretty consistently. It also results in a win for defectBot.

I'm puzzled as to why trollBot does as well as it does. Is it just a function of the particular players it's up against, or is that actually a viable strategy?

Comment author: RichardKennaway 27 May 2014 07:01:41PM 2 points [-]

Context? I can randomly replace elements of this by their opposites and get something that sounds just as truthy.

Try it!

"[Because/although] [positive/negative] [illusions/perceptions] provide a [short/long]-term [benefit/cost] with [larger/smaller] [long/short]-term [costs/benefits], they can [become/avoid] a form of [emotional/intellectual] [procrastination/spur to action]."

Comment author: faul_sname 01 June 2014 08:42:07AM 0 points [-]

"Because positive illusions provide a short-term benefit with smaller short-term benefits, they can become a form of intellectual procrastination."

Comment author: Nornagest 25 March 2014 07:55:57PM *  1 point [-]

Do you think there's more or less than a 1 in a million chance of someone reading and executing one of these ideas?

Vastly less. I expect the chances of a given person genuinely wanting to indiscriminately harm humanity -- not just as an idle revenge fantasy or as a means of signaling cynicism, but as a goal motivating actual behavior even when it comes at high costs -- to be somewhere in the neighborhood of one in a million already, if not lower. The chance of such a person reading the offending post, following the reasoning, deciding to implement it, and coming up with the liquid money to fund it (million-dollar budgets don't grow on trees) is very small indeed.

It's much easier to find people that want to direct harm at some nation or identity group, but most of the ideas in this thread aren't so easily targeted.

Comment author: faul_sname 26 March 2014 05:23:55AM 2 points [-]

On reflection, I think you're right that the chances are much lower than 1 in a million that a given human wants to indiscriminately harm humanity. Retracted.

Comment author: Punoxysm 24 March 2014 11:44:38PM *  1 point [-]

I'm gonna guess this has something to do with bees then (or in that general direction)?

Well, all sorts of tragedy-of-the-commons situations exist. If you think you've found one that could turn a commons into a resource to be manipulated, and you can convince people of it, there will be a dozen investors knocking at your door!

It's been done a thousand times before and not only that but there are whole philosophical movements arguing that it's a moral imperative.

Nevertheless, you seem like you're in the running for the prize.

Comment author: faul_sname 25 March 2014 12:08:57AM 0 points [-]

Tragedy-of-the-commons-for-profit has been done quite profitably -- see swoopo.com, which ran until quite recently.

Comment author: faul_sname 24 March 2014 11:54:44PM 4 points [-]

Organic Chemistry lab --

Label everything, especially when two subsequent steps of your reaction look very similar.

If you're going to leave something stirring overnight, make sure there's a backup power supply, especially if your area has a history of power failures.

Not mine, but -- If the temperature of your oil bath seems to be going up much more slowly than usual, check to make sure the thermometer is working properly. Don't just turn the heat up until the thermometer reads the temperature you expect. One of the people in my lab managed to cook his compound at 280 °C because the tip of the thermometer was slightly above the surface of the oil bath.

Comment author: Punoxysm 24 March 2014 11:14:14PM 2 points [-]

I'm sorry, but dust speck distribution is far more expensive than your budget will allow. Unless you have a concrete plan to create and fund a dust speck foundation from that seed funding, I will again have to reject your application.

Comment author: faul_sname 24 March 2014 11:30:10PM 0 points [-]

Is the chance of me doing that, conditional on your giving me a million dollars, less than the chance that James_Miller will bring utopia to an infinite number of people in conjunction with the chance that he will not do so if you give him a million dollars?

Comment author: Punoxysm 24 March 2014 11:08:56PM *  4 points [-]

Well, bioterrorism is definitely illegal. And remember the challenge is "don't do anything illegal", not "don't get found guilty". And there is plenty of information out there about how to do bad things, though reading too much of it without a reasonable cover will get you on a watchlist. And there are plenty of books filled with both malice and misinformation. What could you really contribute at the margin? If you think you could kickstart your rise as a dictator for a million dollars, I'm afraid I think you're suffering from overoptimism/pessimism.

Norman Borlaug relied on people adopting his inventions and discoveries. If he'd been pushing agricultural practices that only produced half as much food, he'd just be a crank.

Comment author: faul_sname 24 March 2014 11:18:51PM -2 points [-]

Bioterrorism is definitely not where I was going with this. However, it is pretty much a given that the owners of large farms will do things that will increase their crop production, even if it decreases the productivity of farms that are spatially or temporally distant from them.

Again, think about creative uses for the knowledge you have for 5 minutes before you come to the conclusion that it's not possible to do significant harm with it. You probably don't even have to think directly of doing harm -- just look for the most profitable thing you can do with that knowledge, figure out what the negative side effects would be (particularly tragedy-of-the-commons type effects), and figure out how you can maintain profitability while increasing those negative side effects.
