Comment author: gwillen 18 April 2015 12:41:49AM 0 points [-]

What happens in the anarchist group if someone does not wish to relinquish the stick? (Perhaps the very ethos of the group makes this unlikely. But I'm curious if there's a method for dealing with people who, as you put it in the third part, "are too fond of their own voices".)

Comment author: selylindi 29 April 2015 12:43:52AM 0 points [-]

In theory, an annoyed person would have called "point of order", asked to move on, and the group would vote up or down. The problem didn't occur while I was present.

Comment author: Epictetus 06 April 2015 12:14:26AM 4 points [-]

Feedback controls. Futarchy is transparent, carried out in real time, and gives plenty of room to adjust values and change strategies if the present ones prove defective. On the other hand, a superintelligent AI would basically run as a black box. The operators would set the values, then the AI would use some method to optimize and then spit out the optimal strategy (and presumably implement it). There's no room for human feedback between setting the values and implementing the optimal strategy.

Comment author: selylindi 16 April 2015 09:38:44PM 0 points [-]

There's no room for human feedback between setting the values and implementing the optimal strategy.

Here and elsewhere I've advocated* that, rather than using Hanson's idea of target values that are objectively verifiable like GDP, futarchy would do better to add human feedback at the stage of the process where it is decided whether the goals were met. Whoever proposed the goal would decide after the prediction deadline expired, and thus could respond to any improper optimizing by refusing to declare the goal "met" even if it technically was met.

[ * You can definitely do better than the ideas on that blog post, of course.]

Comment author: selylindi 16 April 2015 09:28:56PM 0 points [-]

Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail. If you don't think the relationship is wholly confounded, then there is some prudential benefit to becoming more internal.

I'm willing to bet that Internals think there's a prudential benefit to becoming more internal and Externals think the relationship is wholly confounded.

Comment author: selylindi 16 April 2015 04:48:43AM *  13 points [-]

In large formal groups: Robert's Rules of Order.

Large organizations, and organizations which have to remain unified despite bitter disagreements, developed social technologies such as RRoO. These typically feature meetings that have formal, pre-specified agendas plus a chairperson who is responsible for making sure each person has a chance to speak in an orderly fashion. Of course, RRoO are overkill for a small group with plenty of goodwill toward each other.

In small formal groups: Nonce agendas and rotating speakers

The best-organized small meetings I've ever attended were organized by the local anarchists. They were an independently-minded and fierce-willed bunch who agreed on little but had common interests, which to my mind suggests that the method they used might be effectively adapted for use in LW meetups. They used the following method, sometimes with variations appropriate to the circumstances:

  1. Before and after the formal part of the meeting is informal social time.
  2. Call the meeting to order. Make any reminders the group needs and any explanatory announcements that newcomers would want to know, such as these rules.
  3. Pass around a clipboard for people to write agenda items down. All that is needed are a few words identifying the topic. (People can add to the agenda later, too, if they think of something belatedly.)
  4. Start with first agenda item. Discuss it (see below) until people are done with it. Then move on to the next agenda item. In discussing an agenda item, start with whoever added it to the agenda, and then proceed around the circle giving everyone a chance to talk.
  5. Whoever's turn it is not only gets to speak but also serves as temporary chairperson. If it helps, they can hold a "talking stick" or "hot potato" or some physical object reminding everyone that it's their turn. They can ask questions for others to answer without giving up the talking stick. If you want to interrupt the speaker, you can raise your hand and they can call on you without giving up the talking stick.
  6. Any other necessary interruptions are handled by someone saying "point of order", briefly stating what they want, and the group votes on whether to do it.

In small informal groups: Natural leaders

Sometimes people have an aversion to groups that are structured in any manner they aren't already familiar and comfortable with. There's nothing wrong with that. You can approximate the above structure by having the more vocal members facilitate the conversation:

  • Within a conversation on a topic, deliberately ask people who aren't as talkative what they think about the topic.
  • When the conversation winds down on a topic, deliberately ask someone what's on their mind. Be sure to let everyone have a chance.
  • Tactfully interrupt people who are too fond of their own voices, and attempt to pass the speaker-role to someone else.
Comment author: Vaniver 28 February 2015 09:48:20PM 8 points [-]

Harry can test the limits of Parseltongue's truth detection properties. "I am plugged in to your Horcrux network and will not be stopped by killing me now."

Comment author: selylindi 01 March 2015 05:21:31AM *  4 points [-]

Hm, Harry can't lie in Parseltongue, meaning he can't claim what he doesn't believe, but he can probably state something of unclear truth if he is sufficiently motivated to believe it.

It'd be a nice irony if part of Harry's ultimate "rationality" test involves deliberately motivated reasoning. :D

Comment author: selylindi 03 February 2015 05:16:26PM *  0 points [-]

Background: Statistics. Something about the Welch–Satterthwaite equation is so counterintuitive that I must have a mental block, but the equation comes up often in my work, and it drives me batty. For example, the degrees of freedom decrease as the sample size increases beyond a certain point. All the online documentation I can find for it gives the same information as Wikipedia, in which k = 1/n. I looked up the original derivation and, in it, the k are scaling factors of a linear combination of random variables. So at some point in the literature after the original derivation, it was decided that k = 1/n was superior in some regard; I lack the commitment needed to search the literature to find out why.

The stupid questions:

1) Does anyone know why the statistics field settled on k = 1/n?

2) Can someone give a relatively concrete mental image or other intuitive suggestion as to why the W-S equation really ought to behave in the odd ways it does?
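The shrinking degrees of freedom can at least be demonstrated numerically. A minimal sketch of the equation in its standard k = 1/n form (function and variable names are my own; s1_sq and s2_sq are the two sample variances):

```python
def welch_satterthwaite_df(s1_sq, n1, s2_sq, n2):
    """Approximate degrees of freedom for a two-sample comparison with
    unequal variances (Welch-Satterthwaite, k = 1/n form):

        nu = (s1^2/n1 + s2^2/n2)^2
             / [ (s1^2/n1)^2 / (n1 - 1) + (s2^2/n2)^2 / (n2 - 1) ]
    """
    a = s1_sq / n1  # first sample's variance-of-the-mean term
    b = s2_sq / n2  # second sample's variance-of-the-mean term
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

# Equal variances, balanced samples of 10: df comes out to 18.
balanced = welch_satterthwaite_df(1.0, 10, 1.0, 10)

# Grow only the first sample: its term a shrinks toward zero, so the
# formula collapses toward b^2 / (b^2 / (n2 - 1)) = n2 - 1 = 9.
lopsided = welch_satterthwaite_df(1.0, 1000, 1.0, 10)
```

With n1 = n2 = 10 the formula gives df = 18, but pushing n1 to 1000 drives df back down toward n2 − 1 = 9 rather than up: once the first sample's variance term is negligible, the uncertainty is dominated entirely by the smaller sample, which is one way to picture the odd behavior.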

Comment author: gedymin 28 December 2014 12:37:39PM *  3 points [-]

See this discussion in The Best Textbooks on Every Subject.

I agree that the first few chapters of Jaynes are illuminating, haven't tried to read further. Bayesian Data Analysis by Gelman feels much more practical at least for what I personally need (a reference book for statistical techniques).

The general prerequisites are actually spelled out in the introduction of Jaynes' Probability Theory. Emphasis mine.

The following material is addressed to readers who are already familiar with applied mathematics at the advanced undergraduate level or preferably higher; and with some field, such as physics, chemistry, biology, geology, medicine, economics, sociology, engineering, operations research, etc., where inference is needed. A previous acquaintance with probability and statistics is not necessary; indeed, a certain amount of innocence in this area may be desirable, because there will be less to unlearn.

Comment author: selylindi 31 December 2014 07:23:44AM 4 points [-]

familiar with applied mathematics at the advanced undergraduate level or preferably higher

In working through the text, I have found that my undergraduate engineering degree and mathematics minor would not have been sufficient to understand the details of Jaynes' arguments, follow the derivations, and solve the problems. I took some graduate courses in math and statistics, and more importantly I've picked up a smattering of many fields of math after my formal education, and these plus Google have sufficed.

Be advised that there are errors (typographical, mathematical, rhetorical) in the text that can be confusing if you try to follow Jaynes' arguments exactly. Furthermore, it is most definitely written in a blustering manner (to bully his colleagues and others who learned frequentist statistics) rather than in an educational manner (to teach someone statistics for the first time). So if you want to use the text to learn the subject matter, I strongly recommend you take the denser parts slowly and invent problems based on them for yourself to solve.

I find it impossible not to constantly sense in Jaynes' tone, and especially in his many digressions propounding his philosophies of various things, the same cantankerous old-man attitude that I encounter most often in cranks. The difference is that Jaynes is not a crackpot; whether by wisdom or luck, the subject matter that became his cranky obsession is exquisitely useful for remaining sane.

Comment author: ChrisHallquist 21 May 2013 06:55:04PM 4 points [-]

Having just seen this now, I like "From Artificial Intelligence to Zombies: Thinking Clearly about Truth, Value, and Winning" because it conveys just how frickin' broad The Sequences are. "The Hard Part is Actually Changing Your Mind" is good if you'd rather be catchy and give a sense of one key take-away rather than try to give a sense of the full scope of the sequences.

Comment author: selylindi 28 December 2014 02:13:37AM 0 points [-]

Think To Win: The Hard Part is Actually Changing Your Mind

(It's even catchier, and actively phrased, and gives a motivation for why we should bother with the hard part.)

Comment author: selylindi 18 December 2014 01:07:50AM *  5 points [-]

That's not really how word usages spread in English. Policing usage is almost a guaranteed failure. What would work much better would be for you to use these words consistently with your ideals, and then if doing so helps you achieve things or write things that people want to mimic, they will also mimic your words. Compare to how this community has adopted all manner of jargon due to the influence of EY's weirdly-written but thought-reshaping Sequences! SSC is now spreading Yvain's linguistic habits among us, too, in a similar way: by creating new associations between them and some good ideas.

Comment author: KatjaGrace 16 December 2014 02:18:50AM 1 point [-]

Can you think of more motivation selection methods for this list?

Comment author: selylindi 17 December 2014 11:33:43PM *  2 points [-]

Bostrom's philosophical outlook shows. He's defined the four categories to be mutually exclusive, and with the obvious fifth case they're exhaustive, too.

  1. Select motivations directly. (e.g. Asimov's 3 laws)
  2. Select motivations indirectly. (e.g. CEV)
  3. Don't select motivations, but use ones believed to be friendly. (e.g. Augment a nice person.)
  4. Don't select motivations, and use ones not believed to be friendly. (i.e. Constrain them with domesticity constraints.)
  5. (Combinations of 1-4.)

In one sense, then, there aren't other general motivation selection methods. But in a more useful sense, we might be able to divide up the conceptual space into different categories than the ones Bostrom used, and the resulting categories could be heuristics that jumpstart development of new ideas.

Um, I should probably get more concrete and try to divide it differently. The following example alternative categories aren't promised to be the kind that will effectively ripen your heuristics.

  1. Research how human values are developed as a biological and cognitive process, and simulate that in the AI whether or not we understand what will result. (i.e. Neuromorphic AI, the kind Bostrom fears most)
  2. Research how human values are developed as a social and dialectic process, and simulate that in the AI whether or not we understand what will result. (e.g. Rawls's Genie)
  3. Directly specify a single theory of partial human value, but an important part that we can get right, and sacrifice our remaining values to guarantee this one; or indirectly specify that the AI should figure out what single principle we most value and ensure that it is done. (e.g. Zookeeper).
  4. Directly specify a combination of many different ideas about human values rather than trying to get the one theory right; or indirectly specify that the AI should do the same thing. (e.g. "Plato's Republic")

The thought was first to divide the methods, roughly, by whether we program the means or the ends; second, to subdivide those by whether we program the AI to find a unified or a composite solution. Anyhow, there may be other methods of categorizing this area of thought that more neatly carve it up at its joints.
