Comment author: wedrifid 28 May 2012 03:10:12PM 2 points

for those who might misinterpret the joke.

People are remarkably good at inferring the context and intended message about social norms from the sparse information in a joke. Most people reading this title would be able to understand the sentiment and infer the approximate context that caused it.

Oh, wait. Is that just me?

Comment author: dugancm 29 May 2012 01:58:16AM 2 points

Is the temporary amusement some gain from sniping at those others' status worth potentially alienating them from the community, even if they number fewer than "most"? I do not want such "ridicule of the less socially experienced and/or quick to read the sequences" norms to become prevalent here.

Comment author: dugancm 28 May 2012 01:27:19PM *  7 points

Downvoted because, while I agree with the content of the message [1], I object to the way it was delivered, which seems to me to imply that an acceptable reaction to those who make the mistake is, "That was so stupid, I'm not even going to explain why you're wrong. Just do what I say." That they're worth little enough to the community as to be acceptable targets of ridicule. If I had been publicly admonished in this way, I would feel alienated.

[1] Frivolous use of the word "rationality" and its conjugates in post titles needs to be curtailed and prevented.

Edited to clarify. (Thanks, wedrifid!) Original text follows for context, but please disregard.

Downvoted for status signalling at the expense of newcomers who can reasonably be expected to not have read A Human's Guide to Words yet, without at least linking to an accessible explanation for those who might misinterpret the joke.

Comment author: Alicorn 12 May 2012 06:56:32PM *  2 points

I hated "Uglies" enough to demand that the audiobook be stopped partway through on a family car trip, but the general idea might be sound. Glancing at my shelf, sci-fi that might work includes:

Archangel series by Sharon Shinn (has a strongly fantasy aesthetic, but is technically sci-fi; the same author does some of both in other series/standalones)

The Host by Stephenie Meyer (especially if this girl likes Twilight, but it might not be young-audience-aimed enough; if you're giving her Ender's Game and its sequels, maybe it's fine)

Tripods series (has aliens in, but starts gently; first book IIRC only has tripods and doesn't explain them in terms of the aliens)

The Time Traveler's Wife is sci-fi while being heavily focused on its characters/relationships, but has plenty of adult content so maybe not that one.

I seem to have a lot more fantasy than sci-fi. Why is sci-fi preferable here at all?

Comment author: dugancm 15 May 2012 12:13:53AM 0 points

Seconding "The Tripods Trilogy" by John Christopher. It was my introduction to sci-fi and had a strong emotional impact.

Comment author: dugancm 05 May 2012 11:32:52PM *  6 points

I found this person's anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.

A common mental model for performance is what I'll call the "error model." In the error model, a person's performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, i.i.d. Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.

But we could also consider the "bug model" of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you'll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can't be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong. And fixing bugs doesn't improve your performance in a continuous fashion; you can fix a "little" bug and immediately go from getting everything wrong to everything right. You can't really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right. You can only define its behavior by isolating what the bug does.
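The contrast between the two quoted models can be made concrete with a small simulation. This is just an illustrative sketch (the function names and the arithmetic "bug" are hypothetical): under the error model each answer is independently wrong with some probability, while under the bug model one deterministic flaw wrecks an entire class of questions and leaves the rest untouched.

```python
import random

def error_model_score(n_questions, error_rate, seed=0):
    """Error model: each answer is independently wrong with probability error_rate."""
    rng = random.Random(seed)
    return sum(rng.random() > error_rate for _ in range(n_questions)) / n_questions

def bug_model_score(questions):
    """Bug model: a deterministic procedure with one bug.
    The 'student' adds when asked to multiply, so every multiplication
    question comes out wrong and every addition question comes out right."""
    def buggy_student(op, a, b):
        return a + b  # the bug: the operator is ignored entirely
    correct = sum(buggy_student(op, a, b) == (a * b if op == "mul" else a + b)
                  for op, a, b in questions)
    return correct / len(questions)

add_only = [("add", a, b) for a in range(5) for b in range(5)]
mul_only = [("mul", a, b) for a in range(2, 7) for b in range(2, 7)]

# The buggy student looks perfect on one question class and hopeless on the other:
print(bug_model_score(add_only))  # 1.0
print(bug_model_score(mul_only))  # 0.04 (only 2+2 == 2*2 happens to match)
```

Note how asking a different mix of questions swings the buggy student between 100% and near 0% accuracy, exactly the "99% right to 0% right" behavior the excerpt describes, whereas the error-model score stays pinned near 1 - error_rate regardless of the question mix.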

Often, I think mistakes are more like bugs than errors. My clinkers weren't random; they were in specific places, because I had sub-optimal fingerings in those places. A kid who gets arithmetic questions wrong usually isn't getting them wrong at random; there's something missing in their understanding, like not getting the difference between multiplication and addition. Working generically "harder" doesn't fix bugs (though fixing bugs does require work).

Once you start to think of mistakes as deterministic rather than random, as caused by "bugs" (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.

You stop thinking of people as "stupid."

Tags like "stupid," "bad at _", "sloppy," and so on, are ways of saying "You're performing badly and I don't know why." Once you move it to "you're performing badly because you have the wrong fingerings," or "you're performing badly because you don't understand what a limit is," it's no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It's not you, it's the bug.

This also applies to "lazy." Lazy just means "you're not meeting your obligations and I don't know why." If it turns out that you've been missing appointments because you don't keep a calendar, then you're not intrinsically "lazy," you were just executing the wrong procedure. And suddenly you stop wanting to call the person "lazy" when it makes more sense to say they need organizational tools.

"Lazy" and "stupid" and "bad at _" are terms about the map, not the territory. Once you understand what causes mistakes, those terms are far less informative than actually describing what's happening.

Error vs. Bugs and the End of Stupidity

Comment author: dugancm 21 April 2012 11:55:57PM *  3 points

Web app idea: I'm posting this comment immediately and without editing so I don't forget the idea before I get a chance to write it down/work it out more, as I have to leave my computer soon.

  1. Display a short passage that illustrates something irrational that people do or think, with instructions for the reader to enter into a text box the first or most important thing that came to mind and then press a "ready" button.
  2. Ready button reveals a question of the form, "Were your thoughts similar to any of the following?" with a list of questions/remarks you would hope a rationalist would (or wouldn't) ask/make.
  3. Yes/No buttons save text box and button answers, clear the text box and question fields and replace the passage with a new one.

No priming by reading questions before passages. Writing their thoughts before seeing the question will hopefully keep people honest. Saving text box with answers allows answer auditing. Each passage's irrationality may be more or less obvious depending on a person's background. Same with desired/undesired thinking examples with questions (that's what we're measuring with this though, isn't it?).

Positive example question: Yes = +1, No = 0
Negative example question: Yes = -1, No = 0

With an even split between positive and negative example questions, rationalists should score half the number of questions asked. More questions answered = more confidence in the estimate. Wider range of topics addressed in the questions = more confidence in the estimate.
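The scoring rule above is simple enough to sketch directly. This is a hypothetical implementation (the function name and data shape are mine, not part of the proposal): Yes on a positive example earns +1, Yes on a negative example costs -1, and No always scores 0, so with an even positive/negative split an ideal respondent scores half the number of questions.

```python
def score_responses(responses):
    """Score the quiz described above.
    responses: list of (is_positive_example, answered_yes) pairs."""
    total = 0
    for is_positive, answered_yes in responses:
        if answered_yes:
            total += 1 if is_positive else -1
    return total

# An ideal respondent: Yes to every positive example, No to every negative one.
ideal = [(True, True)] * 5 + [(False, False)] * 5
print(score_responses(ideal))  # 5, i.e. half of the 10 questions asked
```

A respondent who gets everything backwards on the same even split would score -5, so the scale runs symmetrically from -n/2 to +n/2.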

Edited to add: I created a storyboard for the app's testing process here and have started a list of example passages with desired/undesired responses here.

Comment author: ArisKatsaris 06 April 2012 10:56:14AM *  0 points

What's the fundamental difference between those two cases? I don't see it, do you?

One fundamental difference is that I don't care about Felix's further happiness. After some point, I may even resent it, which would make his additional happiness of negative utility to me.

Another difference is that happiness may be best represented as a percentage with an upper bound of e.g. 100% happy, rather than be an integer you can keep adding to without end.

I think Felix's case may be an interesting additional scenario to consider, in order to be sure that AIs don't fall victims to it (e.g. by creating a superintelligence and making it super-happy, to the expense of normal human happiness). But it's not the same scenario as the specks.

Comment author: dugancm 06 April 2012 11:51:34PM *  0 points

Happiness, as a state of mind in humans, seems less to me about how strong the "orgasms" are than how frequently they occur without lessening the probability they will continue to occur. So what problems might there be with maximizing total future happy seconds experienced in humans, including emulations thereof (other than describing with sufficient accuracy the concepts of 'human' and 'happiness' to a computer)?

I think doing so would extrapolate to increasing population and longevity (up to resource constraints and diminishing returns), improving average happiness uptime, and mitigating existential risk, which seem to me to be the crux of people's intuitions about the Felix and Wireheading problems.

In response to SotW: Be Specific
Comment author: dugancm 05 April 2012 02:38:38AM *  0 points

After sleeping on this and thinking about it all day at work, I made a game. I'd like to make the wording more ritualistic and provide descriptive play examples eventually, but here's a good untested first draft: (Note: I did not read any comments before posting this.)

ETA: I will be editing the rules to remove/redirect perverse incentives and add classroom/tournament formats and examples, but may not always have this post completely up to date. The most recent version can always be found here.

The Un-Naming for 2-6 players

Materials:
A deck of words (stack of notecards, dictionary/internet, pen)
Tokens (poker chips, m&m’s, pennies, etc.)
A timer (egg-timer, mini-hourglass, mobile phone app, etc.)
A lighter and ashtray (see optional rules)

Rule 1: Unless stated otherwise within the rules, players must never read any of The Un-Naming’s cards aloud, sign their translation or interpretation via gesture, or reveal them face up to each other (cards should be embossed with braille when used with visually handicapped players).

Rule 2: Choose a player to go first by any arbitrary means available. This person will henceforth be referred to as the Describer.

Rule 3: Choose a method of passing the Describer position from one player to the next, such that all players hold the position an equal number of times during this session of the game.

Rule 4: The Un-Naming has at least four phases: Description, Abstraction, Concretization, and Passing The Torch. A single repetition of each phase will henceforth be referred to as a Cycle.

Rule 5: Choose a maximum time for each Cycle to last. At the beginning of each Cycle the Describer will set a timer for this amount.

Rule 6: If the timer goes off before a Cycle is complete, the players must finish the phase they’re on and then move directly to Passing The Torch.

Rule 7: The Describer may tell a player to “Stop,” move on to another player, then return immediately to the previous player if that player hesitates too long before providing a description.

Rule 8: Choose a maximum number of Cycles to complete this session, after which tokens will be counted and the game will end. This number must be a multiple of the number of players.

Rule 9: Play starts with the Description phase. Follow all phase instructions until the game ends.

Optional Rule 10: The players with the fewest tokens after all cycles have completed lose their names for the next hour (or the rest of the social event) and cannot be referred to by them.

Description:
  1. The Describer starts the Cycle timer, draws a word from the deck, then describes one usage of the word without uttering it or any direct synonyms (see Rationalist Taboo), in that order.
  2. Proceed to Abstraction or Concretization.
  3. This phase may only occur once per Cycle.

Abstraction:
  1. The Describer points to another player and utters the word “Abstract.”
  2. The designated player describes a category of which e believes the previously described usage to be an example, without uttering the words for, or any direct synonyms of, either.
  3. If the Describer believes the described category matches the usage e described, e hands the designated player a Token.
  4. Repeat 1-3 until all players other than the Describer have abstracted.
  5. Proceed to Concretization or Ascension.
  6. This phase may only occur once per Cycle.

Concretization:
  1. The Describer points to another player and utters the word “Concretize.”
  2. The designated player describes what e believes to be an example of the previously described usage, without uttering the words for, or any direct synonyms of, either.
  3. If the Describer believes the described example matches the usage e described, e hands the designated player a Token.
  4. Repeat 1-3 until all players other than the Describer have concretized.
  5. Proceed to Abstraction or Descension.
  6. This phase may only occur once per Cycle.

Passing The Torch:
  1. Once Description, Abstraction and Concretization have occurred, the Describer hands the Cycle timer to the next player in line for the position, but keeps the Word card on his person for tallying purposes.
  2. The new Describer starts the next Cycle.
  Optional: The Describer burns his Word in effigy before passing the lighter, ashtray and Cycle timer along.

Ascension: Optional Phase (replaces Passing The Torch and Description)
  1. If all other players are handed tokens during the Abstraction phase, the Describer may declare “Ascension!”
  2. The Describer chooses another player’s category description to be a description of the next Word, and that player becomes the Describer as though Passing The Torch.
  3. The new Describer writes the new Word on a notecard, re-starts the Cycle timer and repeats the category description. This counts as a new Cycle.
  4. Proceed to Abstraction.

Descension: Optional Phase (replaces Passing The Torch and Description)
  1. If all other players are handed tokens during the Concretization phase, the Describer may declare “Descension!”
  2. The Describer chooses another player’s example description to be a description of the next Word, and that player becomes the Describer as though Passing The Torch.
  3. The new Describer writes the new Word on a notecard, re-starts the Cycle timer and repeats the example description. This counts as a new Cycle.
  4. Proceed to Concretization.

I think the most difficult part of implementing this will be finding words that will place the group near the middle of the abstraction-concreteness lattice. Primary colors and emotional states should work well as a starting point.

Comment author: [deleted] 04 February 2012 05:23:42PM 0 points

Thank you for posting this update! It passed out of my mind to do so.

I am pretty certain we have 3 members who will be at the Reason Rally anyways though. Next time I see them, I'll ask them to post here.

Comment author: dugancm 11 February 2012 01:52:05AM 2 points

Yep! My father and I will be going anyway.

Comment author: dugancm 31 January 2012 12:38:38AM 3 points

I like how SIAI's name references both the event you're working toward and the method of achieving it. Is there a single word that describes a watershed event that would indicate the rationality institute's direct success, the way "Singularity" does an intelligence explosion? One that supporters could rally around and label themselves by (as with "singularitarian")? A word for approximating the ideal Bayesian updater, for felling akrasia, for actually changing one's mind? Can we create or annex one?

Exaltation, Transcendence, Apotheosis, Enlightenment, Upload, Elevation, Laudation, Upgrade, Epiphanic, and Ideate come to mind, but what I'm looking for is something more like "the act (event) of becoming your best self" in a word. Too many of these have strong religious connotations for me.

Comment author: dugancm 09 February 2012 11:12:04PM 2 points

After some thought, I hereby create Max Agency! Plucky comic superhero mascot of Zenith Agency (Z.A. Huzzah!) ...for Consequential Action (Z.A.C.A.) The acronym for which happens to be Max's battlecry, but only when shouted in triplicate of course!

Now that I have a word, the idea of an agency without agents (only aspiring agents) tickles me tremendously.

Other thoughts: Agency Institute for Rationality Training (A.I.R. Training)

Agency Foundation for Applied Rationality (A.F.A.R.)

Comment author: juliawise 02 February 2012 08:37:07PM 0 points

Thanks - I'd like to use this.

Comment author: dugancm 03 February 2012 10:56:17PM 0 points

Ditto.
