Comment author: Grognor 26 October 2013 06:01:25AM 3 points

I suggest a new rule: the source of the quote should be at least three months old. It's too easy to get excited about the latest blog post that made the rounds on Facebook.

Comment author: Grognor 03 February 2013 09:59:37PM *  37 points

It is because a mirror has no commitment to any image that it can clearly and accurately reflect any image before it. The mind of a warrior is like a mirror in that it has no commitment to any outcome and is free to let form and purpose result on the spot, according to the situation.

—Yagyū Munenori, The Life-Giving Sword

Comment author: [deleted] 04 September 2012 08:39:45AM *  17 points

Neither side of the road is inherently superior to the other, so we should all choose for ourselves on which side to drive. #enlightenment

—Kate Evans, on Twitter

In response to comment by [deleted] on Rationality Quotes September 2012
Comment author: Grognor 12 September 2012 05:37:03AM *  2 points

You may find it felicitous to link directly to the tweet.

Comment author: Zvi 01 September 2012 09:10:38PM 17 points

Subway ad: "146 people were hit by trains in 2011. 47 were killed."

Guy on Subway: "That tells me getting hit by a train ain't that dangerous."

—Nate Silver, on his Twitter feed @fivethirtyeight
Comment author: Grognor 04 September 2012 01:26:39AM *  20 points

This reminds me of how I felt when I learned that about two-thirds of the passengers of the Hindenburg survived. It went something like this, if I recall:

Apparently if you drop people out of the sky in a ball of fire, that's not enough to kill all of them, or even 90% of them.
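For concreteness, here is the implied fatality rate in each anecdote, as a minimal sketch. The subway figures are the ones from the quoted ad; the Hindenburg totals used here (36 dead out of 97 aboard) are the commonly cited accident figures, an assumption on my part rather than something stated in the thread.

    # Back-of-the-envelope fatality rates for the two incidents above.
    # Subway figures are from the quoted ad; the Hindenburg totals
    # (36 dead of 97 aboard) are the commonly cited accident figures,
    # assumed here rather than taken from the thread itself.

    def fatality_rate(deaths, total):
        """Fraction of those involved who died."""
        return deaths / total

    subway = fatality_rate(47, 146)      # people hit by trains, 2011
    hindenburg = fatality_rate(36, 97)   # everyone aboard, 1937

    print(f"Hit by a train:      {subway:.0%} died")      # prints 32%
    print(f"Hindenburg disaster: {hindenburg:.0%} died")  # prints 37%

On these numbers, survival is the majority outcome in both cases, which is exactly the counterintuitive point the two comments are trading on.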

Comment author: Grognor 20 August 2012 04:29:04PM *  -1 points

I have become 30% confident that my comments here are a net harm, which is too much to bear, so I am discontinuing my comments here unless someone cares to convince me otherwise.

Edit: Good-bye.

Comment author: novalis 08 August 2012 03:20:00PM 0 points

Intel kept throwing money at the project for years, indicating that they must have been planning on the basis of these predictions.

Comment author: Grognor 08 August 2012 04:34:46PM *  0 points

Which is not the same thing as expecting a project to take much less time than it actually will.

Edit: I reveal my ignorance. Mea culpa.

Comment author: Grognor 08 August 2012 02:19:42PM 9 points

Parts of this I think are brilliant; other parts I think are absolute nonsense. I'm not sure how I want to vote on this.

there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.

This strikes me as probably true but unproven.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology.

You are anthropomorphizing the universe.

Comment author: novalis 02 August 2012 08:04:11PM 1 point
Comment author: Grognor 08 August 2012 06:14:26AM 1 point

That isn't the planning fallacy.

Comment author: Grognor 08 August 2012 04:46:51AM 3 points

This is a better explanation than I could have given for my intuition that physicalism (i.e. "the universe is made out of physics") is a category error.

Comment author: DaFranker 06 August 2012 09:34:41PM *  5 points

The SIAI seems very open to volunteer work and to any offered improvement on their current methodologies and strategies, provided that the change can be shown to be an improvement.

Perhaps you'd like to curate a large library of objection precedents, along with the historical responses given to those objections, so as to facilitate their work of incrementally responding to more and more objections?

Please keep in mind that anyone not trained in The Way who comes across the SIAI and finds that its claims conflict with their beliefs will often do their utmost to find the first unanswered criticism and declare the matter "Closed for not taking into account my objection!". If what currently qualifies as "the most common objections" is answered and the answers are displayed prominently, future newcomers will read those and then formulate new objections, which will in turn become that time's "most common objections", and so on.

I'm sure this argument has been made in better form somewhere else before, but I'm not sure the inherent difficulty of formulating a comprehensive, perfected, objection-proof FAQ has been clearly communicated.

To (very poorly) paraphrase Eliezer*: "The obvious solution to you just isn't. It wasn't obvious to X, it wasn't obvious to Y, and it certainly wasn't obvious to [Insert list of prominent specialists in the field] either, who all thought they had the obvious solution to building "safe" AIs."

This also holds true of objections to the SIAI, AFAICT. What seems like an "obvious" rebuttal or objection, or a "common" complaint, to one person might not be to the next person who comes along. Perhaps a more comprehensive list of "common objections" and official SIAI responses might help, but is it cost-efficient in the overall strategy? Factor in the likelihood that any particular objector who still objects after having read the FAQ would really be more convinced by a longer list of responses... I believe simple movement-building, or even mere propaganda, might be more cost-effective in raw counts of "people made aware of the issue", "donors gained", and maybe even "researchers sensitized to the issue".

* Edit: Correct quote in reply by Grognor, thanks!

Comment author: Grognor 06 August 2012 10:38:07PM 5 points

Whether or not a non-self-modifying planning Oracle is the best solution in the end, it's not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that 'tool AI' wasn't the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

—Reply to Holden on Tool AI
