Comment author: rule_and_line 07 April 2014 08:11:24PM 8 points

I'm more an outsider than a regular participant here on LW, but I have been boning up on rhetoric for work. I'm thrown by this in a lot of ways.

I notice that I'm confused.

Good for private rationality, bad for public rhetoric? What does your diagram of the argument's structure look like?

As for me, this is the statement I want as the summary's most important conclusion:

But in fact most goals are dangerous when an AI becomes powerful

I don't get that choice, because the evidence for this statement only comes after it, and later on it is restated in diluted form:

goals that seem safe ... can lead to extremely pathological behaviour if the AI becomes powerful

Do you want a different statement as the most important conclusion? If so, which one? If not, why do you believe the argument works best structured this way, as opposed to, e.g., an alternative that puts the concrete evidence farther up and the abstract statement "Most goals are dangerous when an AI becomes powerful" somewhere towards the end?

Related point: I get frequent feelings of inconsistency when reading this summary.

  • I'm encouraged to imagine the AI as a super committee of

    Edison, Bill Clinton, Plato, Oprah, Einstein, Caesar, Bach, Ford, Steve Jobs, Goebbels, Buddha, etc.

  • then I'm told not to anthropomorphize the AI.

Or:

  • I'm told the AI's motivations are what "we actually programmed into it",
  • then I'm asked to worry about the AI's motivation to lie.

Note that I'm talking about a rhetorical, a/k/a surface-level, feeling of inconsistency here.

You seem like a nice guy.

Let's put on a halo. Isn't the easiest way to appear trustworthy to first appear attractive?

I was surprised this summary didn't produce emotions around this cluster of questions:

  • Who are you?
  • Do I like you?
  • Do I respect your opinion?

Did you intend to skip over all that? If so, is it because you expect your target audience already has their answers?

Shut up and take my money!

There are so many futuristic scenarios out there. For various reasons, these didn't hit me in the gut.

The scenarios painted in the paragraph that starts with

Our society is setup to magnify the potential of such an entity, providing many routes to great power.

are very easy for me to imagine.

Unfortunately, that works against your summary for me. My imagination consistently conjures human beings.

  • Wall Street banker.
  • Political lobbyist for an industry that I dislike.
  • (Nobody comes to mind for the "replace almost every worker in the service sector" scenario.)
  • Chairman of the Federal Reserve.
  • Anonymous Eastern European hacker.

The feeling that "these are problems I am familiar with, and my society is dealing with them through normal mechanisms" makes it hard for me to feel your message about novel risks demanding novel solutions. Am I unique here?

Inversely, the scenarios in the next paragraph, the one that starts with

Of course, simply because an AI could be extremely powerful

are difficult for me to seriously imagine. You acknowledge this problem later on, with

Humans don’t expect this kind of behaviour

Am I unique in finding that dismissive and condescending? Is there an alternative phrasing that takes my humanity into account yet still gets me afraid of this UFAI thing? I expect you have all gotten together, brainstormed scenarios of terrifying futures, trotted them out in front of your target audience, kept the ones that caused fear, and iterated on that a few times. I just want to check that my feelings are in the minority here.

Break any of these rules

I really enjoy Luke's post here: http://lesswrong.com/lw/86a/rhetoric_for_the_good/

It's a list of rules. Do you like using lists of rules as springboards for checking your rhetoric? I do. I find my writing improves when I try both sides of a rule that I'm currently following / breaking.

Meetup : Book Mini-Review: Doug Hubbard's How to Measure Anything

Post author: rule_and_line 09 December 2013 04:25PM 1 point


WHEN: 15 December 2013 04:00:00PM (-0500)

WHERE: 869 Stockton Street, Suite 1-2 , Jacksonville, FL

Folks who are enjoying this fine Jacksonville winter: come hang out on a Sunday afternoon! I'll start things off with a mini-summary of a book I've been reading and how I apply some of the concepts in work and life. With luck we'll rapidly move on to structured discussion, unstructured discussion, and social fun and games time!


Meetup : First Meetup in Jacksonville, FL

Post author: rule_and_line 10 November 2013 12:03AM 0 points


WHEN: 24 November 2013 04:00:00PM (-0500)

WHERE: 869 Stockton St, Jacksonville, FL 32204 (Bold Bean Coffee Roasters)

Jacksonville's a pretty big town, but not much represented in LessWrong land. I'm looking to meet folks who also live here and are similarly interested in LW / CFAR / MIRI / etc. This first meetup will likely be a socializing event - folks getting to know other folks and gauging interest. Please leave a comment if you're interested in a LW meetup in Jacksonville, even if you can't attend one in the next few weeks/months. And remember, (almost) everyone is welcome, especially newbies!


Comment author: hairyfigment 04 November 2013 06:34:46AM 15 points

That's why it's so important to understand how unworried I was. I wasn't $400 worth of worried, or $100 worth of worried, or even $20 worth. I wouldn't have gone to the dermatologist if I didn't have health insurance. I probably wouldn't have gone if I had insurance but it had a big deductible, or even any real co-pay. The only reason I went to have my life saved is because it cost me zero dollars.

-- Jon Schwarz, A Tiny Revolution
Comment author: rule_and_line 08 November 2013 03:36:37PM 1 point

To what nugget of rationality does this point?

Comment author: rule_and_line 06 November 2013 02:39:09AM 10 points

The idea that a self-imposed external constraint on action can actually enhance our freedom by releasing us from predictable and undesirable internal constraints is not an obvious one. It is hard to be Ulysses.

-- Reid Hastie & Robyn Dawes (Rational Choice in an Uncertain World)

The "Ulysses" reference is to the famous Ulysses pact in the Odyssey.

Comment author: Ishaan 31 October 2013 07:37:59AM 14 points

While reading primary science literature, I've had the following experiences happen to me on multiple occasions.

1) Read a paper with a surprising result. Later discover it has critical flaws or didn't pass replication. I've learned to increase my skepticism with increasingly surprising results. "This study is just wrong because of statistical issues or bad reporting" is now always one of the hypotheses in my mental arsenal, and I've found myself getting a bit better at predicting which results are just wrong, largely using the heuristic "this is too surprising to believe".

2) Form a hypothesis while reading. It gets verified (or falsified) by something one reads later. Also, since one typically reads the methods before the results, one gets a lot of practice predicting results. (I don't formally make predictions, but I find myself making them automatically as I read.)

Based on these experiences, I suggest that reading primary scientific literature is a good exercise in "alive" epistemic rationality training. The only drawback is that it takes a long time to get sufficiently acquainted with a field.

Comment author: rule_and_line 05 November 2013 12:54:53AM 2 points

While I don't read scientific literature that much, I do make formal predictions pretty often. Typically any time I notice something I'm interested in that will be easy to check in the future.

Will I get to bed on time today? Will I be early for the meeting tomorrow? Etc.

I second the anecdotal evidence that this is an "alive" exercise. Side note: it took me way too long to realize I needed to write all my predictions down. I spent a few weeks thinking I was completely excellent at predicting things.

Comment author: Vaniver 15 October 2013 08:35:10PM 4 points

Is there any accepted timeframe for duplicates?

Currently, no. It seems worthwhile to keep old quotes visible, but I suspect that would be better accomplished by automatically generating a database of rationality quotes from these threads (like DanielVarga's best-of collections), and then displaying a random one on each LW page, with frequency related to the number of upvotes each received, say. I don't think that duplicating quotes in quote threads is a good idea, because a no-duplicates norm focuses effort on finding new quotes and material to incorporate into a growing body of knowledge, rather than on rehashing previously found knowledge.

Comment author: rule_and_line 16 October 2013 10:21:39PM 2 points

I endorse (with the possibly-expected caveat about Wilson score ranking).

Unfortunately, I can't (don't know how to?) hack the LW backend. Is that something I can look into?

Comment author: rule_and_line 15 October 2013 07:32:10PM 1 point

I beseech you, in the bowels of Christ, think it possible that you may be mistaken.

-- Oliver Cromwell

Previously posted two years ago. I'm curious whether some things bear repeating. Is there any accepted timeframe for duplicates?

Comment author: Bugmaster 14 August 2013 08:53:42PM 6 points

Did Karl Popper populate his class with particularly unimaginative students? If someone asked me to "observe", I'd fill an entire notebook with observations in less than an hour -- and that's even without getting up from my chair.

Comment author: rule_and_line 14 August 2013 10:36:08PM 3 points

That's an interesting prediction. Have you tried it? Can you predict what you'd do after filling the notebook?

In my imagination, I'd probably wind up in one of two states:

  • Feeling tricked and asking myself "What was the point of that?"
  • Feeling accomplished and waiting for the next instruction.
