Comment author: evand 15 April 2015 07:57:06PM 8 points [-]

One technique we've used with moderate success is to pass a clipboard around. People can jot down notes, or conversational ideas that are tangentially related or unrelated. Sometimes that provides a convenient way for someone else to say "hey, what's this thing you wrote down about?".

It also could let you list your three things to talk about in a breadth-first manner rather than talking about each one sequentially.

It probably sounds like a better idea than it is in practice; the clipboard gets stuck when holders get distracted, or people still refrain from bringing things up, or whatever. But you might try it out anyway!

Comment author: internety 16 April 2015 02:43:10AM 6 points [-]

Idea related to the clipboard, but combined with poker chips:

There is a stack of blank note cards on the table, and several pens/markers. If there's an existing discussion and you want to talk about an unrelated topic, you grab a notecard, write down the topic, and place it face up on the table. At any time, there may be several note cards on the table representing topics people want to talk about. Each person also has a poker chip (or a few) that they may place near a particular card, expressing their interest in talking about that topic. Poker chips are basically upvotes.

Why isn't the following decision theory optimal?

5 internety 16 April 2015 01:38AM

 

I've recently read the decision theory FAQ, as well as Eliezer's TDT paper. While reading the TDT paper, a simple decision procedure occurred to me which, as far as I can tell, gets the correct answer to every tricky decision problem I've seen. As discussed in the FAQ above, evidential decision theory gets the chewing gum problem wrong, causal decision theory gets Newcomb's problem wrong, and TDT gets counterfactual mugging wrong.

In the TDT paper, Eliezer postulates an agent named Gloria (page 29), who is defined as an agent who maximizes decision-determined problems. He describes how a CDT-agent named Reena would want to transform herself into Gloria. Eliezer writes:

By Gloria’s nature, she always already has the decision-type causal agents wish they had, without need of precommitment.

Eliezer then goes on to develop TDT, which is supposed to construct Gloria as a byproduct.

Gloria, as we have defined her, is defined only over completely decision-determined problems of which she has full knowledge. However, the agenda of this manuscript is to introduce a formal, general decision theory which reduces to Gloria as a special case.

Why can't we instead construct Gloria directly, using the idea of the thing that CDT agents wished they were? Obviously we can't just postulate a decision algorithm that we don't know how to execute, and then note that a CDT agent would wish they had that decision algorithm, and pretend we had solved the problem. We need to be able to describe the ideal decision algorithm to a level of detail that we could theoretically program into an AI.

Consider this decision algorithm, which I'll temporarily call Nameless Decision Theory (NDT) until I get feedback about whether it deserves a name: you should always make the decision that a CDT-agent would have wished he had precommitted to, if he had previously known he'd be in his current situation and had had the opportunity to precommit to a decision.

In effect, you are making a general precommitment to behave as if you had made every specific precommitment that would ever be advantageous to you.

NDT is so simple, and Eliezer comes so close to stating it in his discussion of Gloria, that I assume there is some flaw with it that I'm not seeing. Perhaps NDT does not count as a "real"/"well defined" decision procedure, or can't be formalized for some reason? Even so, it does seem like it'd be possible to program an AI to behave in this way.

Can someone give an example of a decision problem for which this decision procedure fails? Or for which there are multiple possible precommitments that you would have wished you'd made and it's not clear which one is best?

EDIT: I now think this definition of NDT better captures what I was trying to express: You should always make the decision that a CDT-agent would have wished he had precommitted to, if he had previously considered the possibility of his current situation and had the opportunity to costlessly precommit to a decision.
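As a concreteness check, here is a minimal sketch of how NDT would pick a policy in Newcomb's problem by evaluating payoffs from the precommitment perspective, i.e. before the prediction is made. The numbers are my own illustrative assumptions (a 99% accurate predictor and the standard $1M/$1k payoffs), not anything from the post:

```python
# Illustrative sketch: an NDT/precommitment evaluation of Newcomb's problem.
# The predictor accuracy and payoffs below are assumed, not canonical.
ACCURACY = 0.99  # assumed probability the predictor guesses your policy

def expected_payoff(policy):
    # Expected value of committing to this policy *before* the prediction,
    # which is how a precommitting CDT agent would score it.
    if policy == "one-box":
        # Predictor usually foresees one-boxing and fills the opaque box.
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    else:  # "two-box"
        # Predictor usually foresees two-boxing and leaves the box empty.
        return ACCURACY * 1_000 + (1 - ACCURACY) * 1_001_000

best = max(["one-box", "two-box"], key=expected_payoff)
print(best)  # one-box
```

A plain CDT agent, conditioning on the boxes already being filled, would two-box; scoring the policies from the precommitment standpoint flips the answer to one-boxing, which is the behavior the post attributes to Gloria.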

 

Help create an instrumental rationality "stack ranking"?

3 internety 28 June 2012 06:06AM

I recently heard about SIAI's Rationality Minicamp and thought it sounded cool, but for logistical/expense reasons I won't be going to one. 

There are probably lots of people who are interested in improving their instrumental rationality, know about and like LessWrong, but haven't read the vast majority of content because there is just so much material, and the practical payoff is uncertain. 

It would be cool if it was much easier for people to find the highest ROI material on LessWrong.

My rough idea for how this new instrumental rationality tool might work:

 

  • It starts off as a simple wiki focused on instrumental rationality. People only add things to the wiki (often just links to existing LessWrong articles) if they have tried them and found them very useful for achieving their goals.
  • People are encouraged to add "exercises" that help you develop the skill represented by the article, of the type that are presumably done at the Rationality Minicamps.
  • Only people who have tried the specific thing in question should add comments about their experiences with it.
  • Long Term Goal: Every LessWrong user can define their own private stack rank of the most important concepts/techniques/habits for instrumental rationality. These stack ranks are globally merged by some LessWrong software to create an overall stack rank of the highest ROI ideas/behaviors/techniques as judged by the LessWrong community at any given time. People looking to improve their instrumental rationality can then just visit this global stack rank and pick the highest item that they haven't tried yet to experiment with, and work backwards from there if there are any prerequisites.
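One simple way the "globally merged" stack rank could work is a Borda count over each user's private ranking. This merge rule and the example technique names are my own assumptions for illustration; the post doesn't specify an aggregation method:

```python
from collections import defaultdict

def merge_rankings(rankings):
    """Merge per-user ranked lists into one global stack rank (Borda count)."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            # Top-ranked item gets n points, next gets n-1, and so on.
            scores[item] += n - position
    return sorted(scores, key=lambda item: -scores[item])

# Hypothetical private stack ranks from three users:
users = [
    ["goal factoring", "trigger-action plans", "calibration"],
    ["trigger-action plans", "goal factoring"],
    ["calibration", "goal factoring"],
]
print(merge_rankings(users))
# ['goal factoring', 'trigger-action plans', 'calibration']
```

A newcomer would then just read the merged list top-down, trying the highest-ranked item they haven't experimented with yet.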

 

Do you think others would find this useful? Anyone have suggested improvements?

Comment author: nawitus 09 February 2011 01:48:30AM 5 points [-]

I stutter, and I've done it for as long as I can remember. Anyone know how to beat it? I feel this has pretty significant (negative) effects on my life, because I'm often afraid of speaking up in a group, as stuttering is extremely embarrassing.

Comment author: internety 09 February 2011 08:32:54AM 0 points [-]

I stutter and have done a lot of research on stuttering. It's rare that adult stutterers ever completely stop stuttering, but these two ebooks are the best resources I know of for dealing with it:

http://www.stutteringhelp.org/Portals/English/Book_0012_tenth_ed.pdf

http://www.scribd.com/doc/23283047/Easy-Stuttering-Avoidance-Reduction-Therapy

The short version is that the less you try to suppress or conceal your stuttering the less severe it will become in the long run.

Comment author: Louie 31 January 2011 09:44:16PM *  10 points [-]

Short explanation:

This is not me being misleading in how I present data. I'm presenting what happens by default in both options, not one optimized and one non-optimized option. What you discovered here is that the plan to save money in the outback is robust and succeeds by default, while the plan to save money in the US is fragile and fails by default.

The longer explanation:

The Australian outback option isn't optimized. It's an off-the-shelf option that is heavily subsidized and in a bizarrely awesome economic climate... something I don't think many people here knew existed.

I think it's fair to compare a typical US job to a typical outback job because this is what you get when you don't put much effort into optimizing your budget in both cases.

The difference is that the outback is already incredible without you having to do anything.

It's actually pretty unfair to compare an outback working budget to the best-case US scenario where you spend tons of time in the US managing your money well to get the cheapest rent, best car prices, lowest food costs, and execute convoluted tax dodging strategies that most people couldn't figure out. It's a very tricky plan that requires lots of things to all go right, lots of time, lots of effort, lots of will-power, lots of knowledge, and lots of discipline.

On the other hand, my option only requires you to get whatever job you want in a remote area of Australia and get all your costs of living heavily subsidized and all your major cost centers nearly erased with no willpower, no planning, and no discipline required.

What you uncovered is not my "misleading" people, but the difference in robustness between the two plans. The Australian outback plan lets you save money by default with almost nowhere to go wrong, while the plan that lets you save money in the US is a life-engulfing minefield of time-consuming bargain-hunting, self-denial, and tax evasion.

In response to comment by Louie on Optimal Employment
Comment author: internety 01 February 2011 02:19:29AM 11 points [-]

"the plan that lets you save money in the US is a life-engulfing minefield of time-consuming bargin-hunting, self-denial, and tax evasion."

I work as a software developer in the US, have never made a 'budget' for myself or tried to analyze my finances before now, I pay taxes normally, eat out often, and have no trouble saving lots of money. I'm going to substitute my expenses and pretend I only make 100k and see how much I'd still be able to save (living in Seattle).

Rent: 16.8k instead of 23.2k

Utilities: 2k instead of 7k (how can you spend 7k on utilities if you're a single person in an apartment?)

Misc house expenses: 0.5k instead of 6.8k (what are these misc expenses that other people supposedly spend so much on?)

Food: The estimate of 13.3k is reasonable for food, although it's easy to spend a lot less without hardship.

Transportation: 4.6k instead of 16.5k (who spends 16.5k per year on transportation? Just don't buy a new BMW every 5 years and you should be set. I bought my car for $9k, 5 years ago.)

Apparently it's pretty easy to live well in a large US city and save 33.9k per year without really paying attention to your finances. If you're a good software developer you should be able to make a lot more than 100k and therefore save much more per year.
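The arithmetic behind those figures can be checked directly. The take-home pay below is my own back-solved assumption (the comment states only the expenses and the 33.9k savings figure), consistent with a 100k salary in Washington, which has no state income tax:

```python
# Sanity-checking the budget figures quoted above (thousands of USD/year).
expenses = {
    "rent": 16.8,
    "utilities": 2.0,
    "misc_house": 0.5,
    "food": 13.3,
    "transportation": 4.6,
}

total_expenses = sum(expenses.values())   # 37.2k
savings = 33.9                            # figure quoted in the comment

# Assumed (not stated): after-tax take-home implied by expenses + savings.
implied_take_home = total_expenses + savings  # ~71.1k on a 100k salary

print(round(total_expenses, 1), round(implied_take_home, 1))
```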
