
In response to "I don't know."
Comment author: robertskmiles 25 August 2015 04:12:26PM 5 points

It seems to me that "I don't know" in many contexts really means "I don't know any better than you do", or "Your guess is as good as mine", or "I have no information such that sharing it would improve the accuracy of your estimate", or "We've neither of us seen the damn tree, what are you asking me for?".

This feels like a nothing response, because it kind of is, but you aren't really saying "My knowledge of this is zero"; you're saying "My knowledge of this that is worth communicating is zero".

Comment author: Benquo 30 April 2014 03:00:31AM * 3 points

This is my favored strategy right now - get lots of spare keys if they're cheap, and keep them everywhere plausible.

Comment author: robertskmiles 07 May 2014 02:49:35PM * 3 points

Losing keys has two problems. The first is that you can't open the lock; the second is that there's a chance someone else now can, if they find your keys and are nefarious. It reminds me of Type 1 and Type 2 errors. Having more keys reduces the risk of "an authorised person is not able to open the lock" at the cost of increasing the risk of "an unauthorised person is able to open the lock".

Consider this trade-off carefully.
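To make the trade-off concrete, here is a minimal sketch in Python. The probabilities p and q are made-up illustrative numbers, and keys are assumed to be lost or found independently:

    # Assumptions (illustrative only): each key is independently
    # unavailable with probability p when you need it, and independently
    # ends up in nefarious hands with probability q.
    p = 0.05  # chance any one key is misplaced or inaccessible
    q = 0.01  # chance any one key is found by someone nefarious

    for n in range(1, 6):
        p_locked_out = p ** n          # every key unavailable at once
        p_breach = 1 - (1 - q) ** n    # at least one key compromised
        print(f"{n} key(s): P(locked out) = {p_locked_out:.2e}, "
              f"P(unauthorised access) = {p_breach:.4f}")

Under these assumptions the lockout risk falls geometrically with each extra key, while the breach risk grows roughly linearly (about n times q for small q), so "cheap spare keys everywhere" only looks good when q is very small.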

Comment author: christopherj 23 October 2013 06:25:49PM 0 points

This is almost exactly the argument I thought of as well, although of course it means cheating by pointing out that you are in fact not a dangerous AI (and aren't in a box anyway). The key point is "since there's a risk someone would let the AI out of the box, posing huge existential risk, you're gambling on the fate of humanity by failing to support awareness of this risk". This naturally leads to a point you missed:

  1. Publicly suggesting that Eliezer cheated is a violation of your own argument. By weakening the fear of fallible guardians, you yourself are gambling the fate of humanity, and that for mere pride and not even $10.

I feel compelled to point out that if Eliezer cheated in this particular fashion, it still means that he convinced his opponent that gatekeepers are fallible, which was the point of the experiment (a win via meta-rules).

Comment author: robertskmiles 07 May 2014 02:45:27PM 0 points

How is this different from the point evand made above?

Comment author: foret 14 January 2013 09:52:52PM * 7 points

In reference to Occam's razor:

"Of course giving an inductive bias a name does not justify it."

--from Machine Learning by Tom M. Mitchell

Interesting how a concept seems more believable if it has a name...

Comment author: robertskmiles 25 April 2013 05:09:10PM 0 points

Or less. Sometimes an assumption is believed implicitly, and it's not until it has a name that you can examine it at all.

Comment author: Manfred 05 August 2012 06:56:35AM * 15 points

No, it's because lukeprog did what seems like common sense: he bought a copy of Nonprofits for Dummies and did what it recommends.

There's a similar principle that I use sometimes when solving physics problems, and when building anything electronic. It's called "Do it the Right Way."

Most of the time, I take shortcuts. I try things that seem interesting. I want to rely on myself rather than on a manual. I don't want to make a list of things to do, but instead want to do things as I think of them.

This is usually fine - it's certainly fast when it works, and it's usually easy to check my answers. But as I was practicing physics problems with a friend, I realized that he was terrible at doing things my way. Instead, he did things the right way. He "used the manual." Made a mental list. Followed the list. Every time he made a suggestion, it was always the Right Way to do things.

With physics, these two approaches aren't all that far apart in terms of usefulness - though it's good to be able to do both. But if you want to do carpentry or build electronics, you have to be able to do things the Right Way.

Comment author: robertskmiles 07 March 2013 12:01:40PM * 1 point

To add to that: if you want to Do Things The Right Way, don't use a mental list, use a physical one. Using a checklist is one of the absolute best improvements you can make in terms of payoff per unit of effort. The famous example is Atul Gawande, who trialled a "safe surgery" checklist for surgeons; it produced a 36% reduction in complications and a 47% fall in deaths.

In response to The Fallacy of Gray
Comment author: James_Bach 07 January 2008 08:31:31AM 0 points

It sounds like you are trying to rescue induction from Hume's argument that it has no basis in logic. "The future will be like the past because in the past the future was like the past" is a circular argument. He was the first to really make that point. Immanuel Kant spent years spinning elaborate philosophy to try to defeat that argument. Immanuel Kant, like lots of people, had a deep need for universal closure.

An easier way to go is to overcome your need for universal closure.

Induction is not logically justified, but you can make a different argument. You could point out that creatures who ignore the apparent patterns in nature tend to die pretty quickly. Induction is a behavior that seems to help us stay alive. That's pretty good. That's why people can't just wave their hands and claim reality is whatever anyone believes: if they do that, they will discover that acting on that belief won't necessarily, say, win them the New York lottery.

My concern with your argument is, again, structural. You are talking about "gray" and then linking it to probability. Wait a minute: that oversimplifies the metaphor. You present the idea of gray as a one-dimensional quantity, similar to probability. But when people invoke "gray" in rhetoric, they are simply trying to say that there are potentially many ways to see something, many ways to understand and analyze it. It's not a one-dimensional gray; it's a many-dimensional gray. You can't reduce that to probability, in any actionable way, without specifying your model.

Here's the tactic I use when I'm trying to stand up for a distinction that I want other people to accept (notice that I don't need to invoke "reality" when I say that, since only theories of reality are available to me). I ask them to specify in what way the issue is gray. Let's distinguish between "my spider senses are telling me to be cautious" and "I can think of five specific factors that must be included in a competent analysis. Here they are..."

In other words, don't deny the gray, explore it.

A second tactic I use is to talk about the practical implications of acting-as-if a fact is certain: "I know that nothing can be known for sure, but if we can agree, for the moment, that X, Y, and Z are 'true' then look what we can do... Doesn't that seem nice?"

I think you can get what you want without ridiculing people who don't share your precise worldview, if that sort of thing matters to you.

Comment author: robertskmiles 16 November 2012 01:40:03PM 19 points

  Induction is a behavior that seems to help us stay alive.

Well, it has helped us to stay alive in the past, though there's no reason to expect that to continue...

Comment author: [deleted] 13 November 2012 09:30:55AM * 2 points

You've got to multiply those sanctions by the probability of getting caught, though. (ISTM that robertskmiles is thinking purely CDTically/act-consequentialistically, ignoring acausal/Kantian/golden rule/rule-consequentialist concerns.)

In response to comment by [deleted] on Rationality Quotes November 2012
Comment author: robertskmiles 16 November 2012 01:39:18PM 1 point

That's accurate, yes.

Comment author: MugaSofer 10 November 2012 06:55:04PM 5 points

  ignoring social consequences of passing off fake money

Why?

Comment author: robertskmiles 12 November 2012 08:32:42PM 1 point

Because it's immaterial to the central point. For a high enough level of "convincingness", fake money has significant real-world value.

Comment author: ZoneSeek 08 November 2012 01:34:46PM 7 points

Currency is binary, either genuine or counterfeit. Ideas are on a continuum, some less wrong than others. Generally, bad ideas are dangerous because there's some truth or utility to them; few people are seduced by palpable nonsense. Parsing mixed ideas is a big part of rationality, and it's harder than spotting fake money.

Comment author: robertskmiles 10 November 2012 06:09:22PM * 4 points

A technicality: Officially, currency is binary, but in practice that's not the case. Fake currency that is convincing still has value. A fake dollar bill with a 50% probability of going unnoticed is in practice worth 50 cents (ignoring social consequences of passing off fake money). Fake currency with 100% convincingness is 100% as valuable as real currency (until you make enough of it to cause inflation).
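As a minimal sketch of that expected-value arithmetic (the function name and numbers are illustrative, and detection penalties are ignored, as in the parenthetical above):

    def fake_note_value(face_value, p_unnoticed):
        # Expected spending value of a counterfeit note, ignoring the
        # social and legal consequences of being caught passing it.
        return face_value * p_unnoticed

    print(fake_note_value(1.00, 0.5))  # 0.5: the 50-cent fake dollar bill
    print(fake_note_value(1.00, 1.0))  # 1.0: perfectly convincing == real

Putting the consequences back in would just subtract (1 - p_unnoticed) times the expected penalty from that value.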

Comment author: robertskmiles 15 October 2012 06:09:12PM * 7 points

An implicit assumption of this article which deserves to be made explicit:

"All negative effects of buying things from the banned store accrue to the individual who chose to purchase from the banned store"

In practical terms this would not be the case. If I buy Sulphuric Acid Drink from the store and discover acid is unhealthy and die, that's one thing. If I buy Homoeopathic Brake Pads for my car and discover they do not cause a level of deceleration greater than placebo, and in the course of this discovery run over a random pedestrian, that's morally a different thing.

The goal of regulation is not just to protect us from ourselves, but to protect us from each other.
