Comment author: BrandonReinhart 18 March 2012 12:08:12AM *  6 points [-]

Carl Shulman has convinced me that I should do nothing directly (in terms of labor) on the problem of AI risk, but instead become successful elsewhere and then direct resources toward the problem as I am able.

However, I believe I should: 1) continue to educate myself on the topic; 2) try to learn to be a better rationalist, so that when I do have resources I can direct them effectively; 3) work toward being someone who can gain access to more resources; 4) find ways to better optimize my lifestyle.

At one point I seriously considered running off to San Francisco to be in the thick of things, but I now believe that would have been a strictly worse choice. Sometimes the best thing you can do is what you already do well, and then direct the proceeds toward helping people, even when that feels selfish, disengaged, or remote.

Comment author: michaelcurzi 16 March 2012 06:55:57PM 1 point [-]

I don't know why you would assume that it's "probably as a result of trying harder to avoid cultishness." My prior is that they just don't seem cultish because academics are often expected to hold unfamiliar positions.

Comment author: BrandonReinhart 17 March 2012 11:41:11PM *  4 points [-]

I will say that I feel 95% confident that SIAI is not a cult, because I spent time there (mjcurzi was there also), learned from their members, observed their processes of teaching rationality, hung out for fun, met other people who were interested, etc. Everyone involved seemed well-meaning, curious, critical, etc. No one was blindly following orders. In the realm of teaching rationality, there was broad agreement that it should be taught, some agreement on how, and total openness to failure and to finding alternate methods. I went to the minicamp wondering (along with John Salvatier) whether the SIAI was a cult, and obtained lots of evidence to push me far away from that position.

I wonder if the cult accusation comes in part from the fact that it seems too good to be true, so we feel a need for defensive suspicion. Rationality is very much about changing one's mind, and thinking about this, we become suspicious that SIAI's goal is to change our minds in a particular way. Then we discover that SIAI's goals are, in fact, in part to change our minds in a particular way, so we feel our suspicions are justified.

My model tells me that stepping into a church is several orders of magnitude more psychologically dangerous than stepping into a Less Wrong meetup or the SIAI headquarters.

(The other 5% goes to things like "they are a cult and totally duped me and I don't know it", "they are a cult and I was too distant from their secret inner cabals to discover it", "they are a cult and I don't know what to look for", "they aren't a cult but they want to be one and are screwing it up", etc. I should probably feel more confident about this than 95%, but my own inclination to be suspicious of people who want to change how I think means I'm being generous with my error. I have a hard time giving these alternate stories credit.)

Comment author: dreeves 12 October 2011 05:30:46AM 0 points [-]

Great questions! Here are answers!

Giving yourself variance: Yes. It should become obvious as you add datapoints. The real nitty-gritty about the width of the yellow brick road is here: http://blog.beeminder.com/roadwidth (In short: the width of the road is constructed so that if you're in the correct lane today, you're guaranteed not to lose tomorrow.)
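A toy sketch of that guarantee (my own simplification in Python; the function names and lane logic are illustrative assumptions, not Beeminder's actual implementation as described in the blog post):

```python
def centerline(day, start, rate):
    """Cumulative total the yellow brick road expects by `day`."""
    return start + rate * day

def in_correct_lane(total, day, start, rate):
    # Correct lane: no more than one lane-width below the centerline,
    # where the lane width is one day's required progress.
    return total >= centerline(day, start, rate) - rate

def loses(total, day, start, rate):
    # You lose when you fall below the road's bottom edge
    # (more than two lane-widths under the centerline).
    return total < centerline(day, start, rate) - 2 * rate

# With the lane width tied to the daily rate, being in the correct
# lane today guarantees you cannot lose tomorrow:
for total in range(260, 400):
    if in_correct_lane(total, 10, 0, 30):
        assert not loses(total, 11, 0, 30)
```

The idea: since the road's centerline rises by exactly one lane-width per day, a day of zero progress can drop you from the correct lane into the wrong lane, but never past the bottom edge.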

Paying $5: Note that the first attempt is free. You only put money at risk if you go off the road and want to reset. Gory details at http://beeminder.com/money (note especially the part about the exponential fee schedule).

How do I delete a goal if I screw it up in some way?

We've hesitated to expose that option since we're not sure how to handle the case of someone deleting a goal they have a contract on. The option does appear if you delete the only datapoint though.

Is the goal value a median or is it a target?

The goal value is the y-value of the end of your yellow brick road. For weight loss it's obvious -- your goal weight. But for many kinds of goals, like "work out 20 minutes a day" for which the y-axis is the total (cumulative) amount reported, the goal value is probably not what you care about. This is confusing and we're scrambling to find a way to make it less so.
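A hypothetical example of the confusion (the numbers here are mine, not from the thread): for a do-more goal the y-axis accumulates, so the end-of-road "goal value" is a large running total rather than the daily amount the user actually cares about.

```python
# For a "do more" goal, the graph plots the cumulative total of all
# datapoints, so the road's final y-value is rate * duration.
daily_minutes = 20   # what the user actually cares about
days = 365           # length of the road
goal_value = daily_minutes * days  # the "goal value" field: 7300 minutes
```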

I would like the ability to expressly exclude days at a certain rate. Like "I will practice ear training approximately X minutes per day, 5 out of 7 days a week."

That works beautifully with Beeminder! Just specify your rate as 5*X per week.

Is there a 'vacation' feature? If I'm on a holiday, I might not be able to maintain certain goals. I would expect vacations to have to be declared in advance, though, to prevent someone from using it as a method of worming out of an impending failure.

Well said. And yes, just use the road dial below your graph to flatten your road for the vacation. If it's a weight loss goal and you're going on an all-you-can-eat-buffet-hopping vacation, you can even make the road slope up for a while. Always with that one-week delay of course.

Are you tracking your software development goals in the same software?

Damn straight: http://beeminder.com/meta

An exponential punishment curve seems harsh. Is the concern that a linear rate of punishment might lead to basically buying indulgences? I would think that even linear curves at good rates would create incentive.

I think harshness/mildness is the wrong question here. The schedule is just trying to help you find the order of magnitude the punishment needs to be for you to treat it as a hard commitment. In some sense, the steeper the curve, the less harsh, since it means wasting less money on punishments that were insufficiently punishing before hitting your Motivation Point. We went with, roughly, 3^x.
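A sketch of such a schedule (the base amount and the exact progression are illustrative assumptions on my part; the comment only says the first attempt is free and the growth is roughly 3^x):

```python
def pledge(derailments, base=5, factor=3):
    """Amount at risk after `derailments` prior failures.

    The first attempt is free; afterwards the pledge grows
    geometrically so it quickly reaches the user's Motivation Point.
    """
    if derailments == 0:
        return 0
    return base * factor ** (derailments - 1)

schedule = [pledge(n) for n in range(6)]  # [0, 5, 15, 45, 135, 405]
```

Compare with a linear schedule at the same base: after five derailments a linear fee has cost you $75 total while still possibly being too cheap to matter, whereas the geometric one has already probed two orders of magnitude.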

The data tracking features are interesting to me and one reason I might try this. Is there a way to export the data?

You answered this one yourself, but, yes, we're fellow data nerds and we want import/export to always be easy.

Some goals might contain periodic sub-goals...

Oh my, that sounds like a terrible, terrible idea! :) Very likely my lack of imagination though. Want to add it on http://uservoice.beeminder.com and see if it gets any upvotes?

Just remember Antoine de Saint-Exupery: "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."

Thanks so much for asking these questions. One of our biggest problems right now is conveying how this all works to someone just dropping in, so answering these helps us a lot.

PS: Anyone who read all this might also find our commitment contract template interesting: http://beeminder.com/contract

Comment author: BrandonReinhart 12 October 2011 08:45:09AM 1 point [-]

Thanks for taking the time to respond.

I rebuilt my guitar thing and added today's datapoint and now it seems to be predicting my path properly. Makes more sense now. I think I was confused at first because I had made a custom graph instead of using the "Do More" prefab.

Neat software!

Comment author: Kaj_Sotala 10 October 2011 09:43:11PM 7 points [-]

An explanation of why this works.

Short version: suppose that reasoning, in the sense of "consciously studying the premises of conclusions and evaluating them, as well as generating consciously understood chains of inference," evolved mainly to persuade others of your views. Then it's to be expected that by default we study and generate theories only at a superficial level, because there's no reason to waste time evaluating our conscious justifications if they aren't going to be used for anything. If we do expect them to be subjected to closer scrutiny by outsiders, then we're much more likely to actually inspect the justifications for flaws, so that we'll know how to counter any objections others will bring up.

Comment author: BrandonReinhart 12 October 2011 05:38:17AM 4 points [-]

An exercise we ran at minicamp -- which seemed valuable, but requires a partner -- is to take a position and argue for it for some time. Then, at some interval, you switch and argue against the position (while your partner defends it). I used this once at work, but haven't had a chance since. The suggestion to swap sides mid-argument surprised the two participants, but it did lead to a more effective discussion.

The exercise sometimes felt forced if the topic was artificial and veered too far off course, or if one side was simply convinced and felt that further artificial defense was unproductive.

Still, it's a riff on this theme.

Comment author: BrandonReinhart 12 October 2011 01:23:50AM *  1 point [-]
  • It's a little strange that I have to set up the first data point when I register the goal. I'd rather set up the goal, then do the first day's work. I suppose this is splitting hairs.

I created two goals:

https://www.beeminder.com/greenmarine

Both goals have perfectly tight roads. Is this correct? I would like to give myself some variance, since I'll probably not ever do exactly 180 minutes in a day. To start, I fudged the first day's value at the goal value.

Based on how you describe the system, it looks like I should expect to pay $5 if I practice 179 minutes.

  • How do I delete a goal if I screw it up in some way?

  • Is the goal value a median or is it a target?

  • I would like the ability to expressly exclude days at a certain rate. Like "I will practice ear training approximately X minutes per day, 5 out of 7 days a week."

  • Is there a 'vacation' feature? If I'm on a holiday, I might not be able to maintain certain goals. I would expect vacations to have to be declared in advance, though, to prevent someone from using it as a method of worming out of an impending failure.

  • I really like that you are iterating on your concept publicly. This is the way to go. I hope you are able to move towards success. Are you tracking your software development goals in the same software?

  • An exponential punishment curve seems harsh. Is the concern that a linear rate of punishment might lead to basically buying indulgences? I would think that even linear curves at good rates would create incentive.

  • The data tracking features are interesting to me and one reason I might try this. Is there a way to export the data? If I did use this, then it would be cool to import the data into a practice log. AHA. Found the export button!

  • Some goals might contain periodic sub-goals. For example, a musical practice goal might include X days of "spend % of this time on speed improvement." This is an idea for a future feature. These sub-goals could spin off to become their own goal graphs if the user wanted; otherwise they would simply be children of the main goal.

Comment author: gwern 07 September 2011 10:30:09PM *  7 points [-]

While I'm at it, here are some brief thoughts I had after spending an hour or so looking through the Lifeboat Foundation filings:


I heard from Patri that Lifeboat is just a front for the guy in charge, to launder money or something.

Well, that's possible. Guidestar had 3 free filings for Lifeboat when I looked a couple months ago:

They were all pretty small-budget (~$50k), and I didn't notice anything obviously wrong in the expenses, unless things like publishing expenses are being padded or the filings are simply wrong. (But then, I've only read the occasional filing and I've never read one intended to be deceptive; note also that I didn't look at the 2010 and 2011 filings. Apparently some leadership changes took place, with Eric losing the CEOship?)

That said, I don't like Lifeboat myself. Matt Funk and Otto Rossler tick any number of boxes on the crank checklist. I unsubscribed from their blog when they began removing my comments (e.g., you'll notice no comment from me on http://lifeboat.com/blog/2011/04/a-relatively-brief-introduction-to-the-principles-of-economics-evolution-a-survival-guide-for-the-inhabitants-of-small-islands-including-the-inhabitants-of-the-small-island-of-earth ).

EDIT: In late December 2010, Lifeboat Foundation associated with an accused embezzler; this is rather suspicious, as non-accused-embezzlers aren't that hard to come by, and such a hire is easily fitted into a narrative of corruption ('we hire you despite the blacklist on you, and in exchange you cook the books and keep quiet'). She's not clearly specified as being hired or working at LF, and may just be another honorary member -- but did they not bother to google her or something? Anyway, the 2010 filing is now up at Guidestar (she's not mentioned in it):

Their printing expenses were much smaller that year, around $100. Of their $36k in donations, $25k went to 'fees and payments to independent contractors', $6.5k to rent, and $4k to misc expenses, with an overall slight surplus and an increase in their savings account to $10k. Of the three listed accomplishments, two were $4k for designing their website (pg. 9 lists a slew of software licenses, a Mac & DVD stuff) and $600 for 'educational videos'. President Eric lists 60 hr/wk, with no compensation for him or the other employees. Unfortunately, like the previous filings, this one doesn't list where donations come from or give any breakdown of the $25k in contractor fees.

Comment author: BrandonReinhart 21 September 2011 07:11:39AM 4 points [-]

It would actually be worthwhile to post a small analysis of Lifeboat: how they meet the crank checklist, whether they do anything other than name-drop on their website, and so on.

Comment author: BrandonReinhart 25 August 2011 04:05:18AM *  34 points [-]

Hiring Luke full time would be an excellent choice for the SIAI. I spent time with Luke at mini-camp and can provide some insight.

  • Luke is an excellent communicator and an agent for the efficient transmission of ideas. More importantly, he has the ability to teach these skills to others. Luke has shown this skill publicly on Less Wrong and also on his blog, with his distilled analysis of Eliezer's writing, "Reading Yudkowsky."

  • Luke is a genuine modern-day renaissance man, a true polymath. However, Luke is very aware of his limitations and has devoted significant work to finding ways of removing or mitigating them. For example, any person with a broad range of academic interests could fall prey to never acquiring useful skills in any of those areas. Luke sees this as a serious concern and wants to maximize the efficiency of searching the academic space of ideas. Again, for Luke this is a teachable skill. His session "Productivity and Scholarship" at minicamp outlined techniques for efficient research and for reducing akrasia. None of that material would be particularly surprising to a regular reader of Less Wrong -- because Luke pioneered the critical posts on these subjects. Luke's suggestions were all implementable and process-focused, such as using review articles and Wikipedia to rapidly familiarize oneself with the jargon of a new discipline before doing deep research.

  • Luke is an excellent listener and is highly effective in human interaction. This manifests as someone you enjoy speaking to, who seems interested in your views, and who is able to tell you why you are wrong in a way that makes you feel smarter. (Compare with Eliezer, who will simply turn away when you are wrong. This is fine for Eliezer, but not ideal for SIAI as an organization.) Again, Luke understands how to teach this skill set. It seems likely that Luke would raise the social effectiveness of SIAI as an organization, and also generate goodwill toward the organization in his dealings with others.

Luke would have a positive influence on the culture of the SIAI, the research of the SIAI, and the public face of the SIAI. Any organization would love to find someone who excels in any one of those dimensions, much less someone who excels in all of them.

Mini-camp was an exhausting challenge for all of the instructors. Luke never once showed that exhaustion, let it dampen his enthusiasm, or betrayed annoyance (except, perhaps, as a tactical tool to move along a stalled or irrelevant conversation). In many ways he presented the best face of "mini-camp as a consumable product." That trait (we could call it customer focus or product awareness) is a critical skill the SIAI is lacking.

An example of how Luke has changed me: I was only vaguely aware of the concepts of efficient learning and study. Of course, I knew about study habits and putting in time at practice in a certain sense, but such approaches usually emphasize practice and time investment (which is important) while underemphasizing the value of finding the right things to spend time on.

It was only when I read Luke's posts, spoke to him, and participated in his sessions at mini-camp that I acquired a language for thinking about and introspecting on the subject of efficient learning. Specifically, I've applied his standards and process to my study of guitar and classical music, and I now feel I've effectively solved the question of where to spend my time; I am solely in the realm of doing the actual practice, composition, and research. I've advanced more in the past few months of music study than I did in the prior year and a half of playing guitar.

In the past month I have actively applied his skill of skimming review material (review books on classical composers) and then using Wikipedia to rapidly drill down on confusing component subjects. I have applied his technique of thinking vicariously about someone else's victory that represents goals of my own, which makes a hard road seem less like a barrier and more like negotiable terrain. And I have applied his skill of weighing the merits of multiple competing areas of interest, determining the one with the most impact, and pursuing it (knowing I could later scoop up the missing pieces more quickly).

I did all of that with the awareness that Luke was the source of the skills and language that let me do those things.

I am more awesome because of Luke.

Comment author: Duncan 10 July 2011 03:16:52AM 2 points [-]

I consider all of the behaviors you describe to be basically transform functions. In fact, I consider any decision maker a type of transform function: input data is run through a transform function (such as a behavior-executor, a utility-maximizer, a weighted goal system, a human mind, etc.) and output data is generated (in the case of humans, sent to our muscles, organs, etc.). The reason I mention this is that trying to describe a human's transform function (i.e., what people normally call their mind) as mostly a behavior-executor or just a utility-maximizer leads to problems. A human's transform function is enormously complex and includes both behavior-execution aspects and utility-maximization aspects. I also find that attempts to describe a human's transform function as 'basically a __' result in a subsequent failure to look at the actual transform function when trying to figure out how people will behave.
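A sketch of this framing (hypothetical Python; the function names and toy utilities are mine, purely for illustration): both a behavior-executor and a utility-maximizer fit the same "transform function" signature, a mapping from observations to actions.

```python
from typing import Callable

# A "transform function" in the comment's sense: any mapping from
# input observations to output actions.
TransformFn = Callable[[dict], str]

def behavior_executor(observation: dict) -> str:
    # Fixed stimulus-response rule, ignoring consequences.
    return "flee" if observation.get("threat") else "graze"

def utility_maximizer(observation: dict) -> str:
    # Picks the action with the highest estimated utility.
    utilities = {"flee": observation.get("threat", 0) * 10,
                 "graze": observation.get("food", 0)}
    return max(utilities, key=utilities.get)

print(behavior_executor({"threat": 1}))             # flee
print(utility_maximizer({"threat": 0, "food": 3}))  # graze
```

The shared signature is the point: a human mind would be a far more complex function of the same shape, mixing both aspects.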

Comment author: BrandonReinhart 24 July 2011 03:57:23AM 0 points [-]

Is "transform function" a technical term from some discipline I'm unfamiliar with? I interpret your use of that phrase as "operation on some input that results in corresponding output." I'm having trouble finding meaning in your post that isn't redefinition.

Comment author: lukeprog 10 May 2011 02:36:21AM *  20 points [-]

I've never had success with 'speed reading' in a way that allows me to consume more words per minute and have the same degree of retention and comprehension, especially for dense scholarly material.

Efficient scholarship benefits much more, I think, from learning to be strategic and have good intuitions about what to read - on the level of fields of knowledge, on the level of books and articles, and on the level of paragraphs within books and articles. I've been doing something like what I described in this post for at least two years and I have the impression that this is where I've gained the most utility.

The difference between somebody who is just getting into continuous scholarship and myself is, I suspect, almost entirely to be found in the fact that I can be extremely strategic about which fields of knowledge to consume, which books and articles to consume within those fields, and which paragraphs within those books and articles to consume. That's only what it seems like to me, though.

Genuine 'speed reading' can be achieved with a different brain architecture than I have, of course.

Comment author: BrandonReinhart 10 May 2011 02:48:27AM *  0 points [-]

Here is another question, regarding the basic methodology of study. When you are reading a scholarly work and you encounter an unfamiliar concept, do you stop to identify the concept, or do you continue and add the concept to a list to be pursued later? In other words, do you queue the concept for later inspection, or do you 'step into' it for immediate inspection?

I expect the answer to be conditional, but knowing what conditions is useful. I find myself sometimes falling down the rabbit hole of chasing chained concepts. Wikipedia makes this mistake easy.

Comment author: BrandonReinhart 10 May 2011 02:25:08AM *  6 points [-]

Here's a question: does learning to read faster provide a net marginal benefit to the pursuit of scholarship? Are there narrow, focused, and confirmed methods of learning to read faster that yield positive results? This would be beneficial to all, but perhaps more so to those of us whose full-time jobs are not scholarship.
