Comment author: Qiaochu_Yuan 18 December 2015 02:18:42AM 10 points

Thanks for writing this up!

As a participant, I think the claim that MSFP was a resounding success is a little strong. It's not at all clear to me that anyone gained new skills by attending (at least, I don't feel like I did), as distinct from learning about new ideas, using their existing skills, becoming convinced of various positions, and making social connections (which are more than enough to explain the new hires). To me it was an interesting experiment whose results I find hard to evaluate.

Comment author: Qiaochu_Yuan 26 December 2014 10:18:46PM 32 points

Thanks for the detailed update! Donated $1,500.

Comment author: JoshuaFox 30 October 2014 02:28:58PM * 6 points

You're absolutely right. CFAR could get statistics by measuring quantifiable goals across its students: grade point average, wealth, weight loss, and so on, preferably with a control group. Until then, I'm just looking for any info I can get.

Comment author: Qiaochu_Yuan 30 October 2014 07:28:13PM * 5 points

Fair enough. In that case, after my first CFAR workshop I lost 15 pounds over the course of a few months (mostly through changes in my diet) and started sleeping better (harder to quantify, but I would estimate at least an effective hour's worth of extra sleep a night).

Comment author: JoshuaFox 28 October 2014 02:28:58PM * 19 points

A question that has been asked before, and so may be stupid: What concrete examples are there of gains from CFAR training (or self-study based on LessWrong)? These would have to come in the form of very specific examples, preferably quantitative.

E.g., "I was $100,000 in debt and unemployed for 2 years; now I'm employed, earning twice what I ever have before, and out of debt."

"I never had a relationship that lasted more than 2 months, but now am happily married."

"My grade point average went up from 2.2 to 3.8"

"After struggling to diet and exercise for years, I finally got on track and am now in the best shape of my life."

etc.

Comment author: Qiaochu_Yuan 30 October 2014 06:55:09AM * 10 points

I want to point out that this question doesn't quite test for the right thing. One way an organization like CFAR can cause extreme life improvements is by generally encouraging participants to do extreme things, which increases the variance of alumni outcomes after the workshops. That leads to potentially many extreme improvements, but also potentially many extreme... disimprovements? And the latter are harder to notice because of survivorship bias. (There's also regression to the mean to watch out for: you expect the population of CFAR workshop attendees to be somewhat selected for having life problems, and those could just randomly improve afterwards.)

I expect the main benefit of CFAR training to be that it improves median outcomes; that is, that it improves alumni's ability to consistently win. But this is hard to test for by asking for anecdotes: it would be better to do statistics.
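To see how strong the selection effect alone can be, here is a minimal simulation sketch of the regression-to-the-mean worry (made-up numbers, no real CFAR data): attendees are selected for looking worse than average, the "workshop" has zero effect, and the group still improves on remeasurement.

```python
import random

random.seed(0)
N = 100_000

# Latent life quality plus independent measurement-time noise.
latent = [random.gauss(0, 1) for _ in range(N)]
before = [x + random.gauss(0, 1) for x in latent]

# Select people who currently look like they have life problems.
attendees = [i for i in range(N) if before[i] < -1.0]

# Remeasure later; there is no treatment effect at all.
after = [latent[i] + random.gauss(0, 1) for i in attendees]

mean_before = sum(before[i] for i in attendees) / len(attendees)
mean_after = sum(after) / len(attendees)
print(f"before: {mean_before:.2f}  after: {mean_after:.2f}")
# Typically prints something like "before: -1.74  after: -0.87":
# a large apparent improvement produced by selection alone.
```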

Comment author: evec 22 June 2014 09:03:03PM 2 points

Sure. Say you have to make some decision now, and you will be asked to make a decision later about something else. Your decision later may depend on your decision now as well as on parts of the world that you don't control, and you may learn new information from the world in the meantime. Then the usual way of rolling all of that up into a single decision now is to make your current decision together with a decision about how you would act in the future, for every possible change in the world and every possible piece of information gained.

This is vaguely analogous to how you can curry a function of multiple arguments. Taking one argument X and returning (a function of one argument Y that returns Z) is equivalent to taking two arguments X and Y and returning Z.
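Here is a short Python sketch of that analogy (toy functions, assumed purely for illustration):

```python
def uncurried(x, y):
    return (x, y)  # stands in for "some Z computed from X and Y"

def curried(x):
    def later(y):              # a function of one argument Y that returns Z
        return uncurried(x, y)
    return later               # takes X, returns a function of Y

assert curried("now")("later") == uncurried("now", "later")

# The decision-theoretic analogue: instead of deciding now and deciding
# again later, decide once on a pair (current action, policy), where the
# policy specifies a future action for every possible observation.
plan = ("save", {"good news": "invest", "bad news": "hold"})
```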

There's potentially a huge computational complexity blowup here, which is why I stressed mathematical equivalence in my posts.

Comment author: Qiaochu_Yuan 22 June 2014 09:12:08PM 2 points

Thanks for the explanation! It seems pretty clear to me that humans don't even approximately do this, though.

Comment author: Lumifer 20 June 2014 01:53:26AM * 4 points

> What sort of things do you think experts (professors, generally) might value that a less-expert person like myself might be able to offer?

Lots of things, of course: adoration, bacon, sexual favors, etc. etc. :-D

In practice, I suspect that some attention, gratefulness, and a demonstration that you're not a clueless idiot with some agenda will go a long way towards making the expert willing to answer your questions. The last part is the problematic one in online communications -- by default you're just "another guy from the internet" and we all know what the average of that looks like.

However, something in this vein seems like not a bad start to me: "Dear Professor X, I read your papers/books Y and Z and was amazed at how you figured out A, B, and C. However, I have a question about D, because while E, it seems to me that F." Demonstrate cluefulness and use flattery :-)

P.S. Another important issue is scope. Ask questions that can be concisely answered in a couple of paragraphs. Do not ask questions the answers to which are a graduate degree, a shelf of books, and a really tall stack of printed-out papers (e.g. "What should I eat for health and fitness?").

Comment author: Qiaochu_Yuan 22 June 2014 06:49:55PM * 3 points

I get people emailing me math questions every once in a while. I never answer them (I strongly prefer to answer math questions in a public forum like Quora or StackExchange), but some of them are at least tempting. I am actively turned off by any attempt on their part to use flattery, and those are never tempting. It always sounds fake to me. (Also, some of them call me a professor by accident, and that's annoying too.)

Comment author: buybuydandavis 21 June 2014 11:21:41PM 17 points

A third possibility is that AGI becomes the next big scare.

There's always a market for the next big scare, and a market for people who'll claim that putting them in control will save us from it.

Having the evil machines take over has always been a scare. When AIs get more embodied and start working together autonomously, people will be more likely to freak, IMO.

Getting beaten at Jeopardy is one thing; watching a fleet of autonomous quadcopters doing their thing is another. It made me a little nervous, and I'm quite pro-AI. When people see machines that seem alive, that think, communicate among themselves, and cooperate in action, many will freak, and others will be there to channel and make use of that fear.

That's where I disagree with EY. He's right that a smarter talking box will likely just be seen as a nonthreatening curiosity. Watson 2.0, big deal. But embodied intelligent things that communicate and take concerted action will press our base primate "threatening tribe" buttons.

"Her" would have had a very different feel if all those AI operating systems had bodies, and got together in their own parallel and much more quickly advancing society. Kurzweil is right in pointing out that with such advanced AI, Samantha could certainly have a body. We'll be seeing embodied AI well before any human level of AI. That will be enough for a lot of people to get their freak out on.

Comment author: Qiaochu_Yuan 22 June 2014 06:38:09PM 3 points

Yeah, this becomes plausible if some analogue of Chernobyl happens. Maybe self-driving cars cause some kind of horrible accident due to algorithms behaving unexpectedly.

Comment author: [deleted] 22 June 2014 04:46:35AM 0 points

Does MIRI have a prediction market on this stuff?

In response to comment by [deleted] on Will AGI surprise the world?
Comment author: Qiaochu_Yuan 22 June 2014 06:35:25PM 3 points

By the time the market closes everyone will have bigger concerns than whatever was being risked on the market.

Comment author: evec 20 June 2014 10:16:24PM 3 points

Let me rephrase: would you like to describe your arguments against utility functions in more detail?

For example, as I mentioned, there's an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive form to normal form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.
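To make the conversion concrete, here is a toy sketch in Python (a hypothetical two-stage decision, not anything from the thread): a normal-form strategy fixes an action for every possible observation in advance, so choosing a strategy once is equivalent to deciding as you go.

```python
from itertools import product

first_moves = ["L", "R"]
observations = ["heads", "tails"]   # what the world might reveal in between
second_moves = ["up", "down"]

# Extensive form: pick a first move, observe the world, pick a second move.
# Normal form: pick (first move, map from each observation to a second move).
strategies = [
    (first, dict(zip(observations, reaction)))
    for first in first_moves
    for reaction in product(second_moves, repeat=len(observations))
]
print(len(strategies))  # 2 * 2**2 = 8 complete contingent plans
```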

The standard response to the discussion of knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges? And if so, is the objection to the normal-form assumption essentially the same?

Comment author: Qiaochu_Yuan 22 June 2014 06:32:07PM 1 point

> For example, as I mentioned, there's an obvious mathematical equivalence between making a plan at the beginning of time and planning as you go, which is directly analogous to how one converts games from extensive form to normal form. As such, all aspects of acquiring information are handled just fine (from a mathematical standpoint) in the setup of vNM.

Can you give more details here? I'm not familiar with extensive-form vs. normal-form games.

> The standard response to the discussion of knowing probabilities exactly, and to concerns about computational complexity, is (in essence) that we may want to set aside epistemic concerns and simply learn what we can from a theory that is not troubled by them (à la ignoring air resistance in physics). Is your objection essentially that those factors are more dominant in human morality than LW acknowledges?

Something like that. It seems like the computational concerns are extremely important: after all, a theory of morality should ultimately output actions, and to output actions in the context of a utility-function-based model you need to be able to actually calculate probabilities and utilities.
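In the toy case where all the numbers are handed to you, running such a model is trivial; the point is that real decisions rarely come with them. A hypothetical sketch:

```python
# Probabilities of outcomes under each action, and utilities per pair --
# all made up for illustration.
probs = {"umbrella": {"rain": 0.3, "sun": 0.7},
         "no umbrella": {"rain": 0.3, "sun": 0.7}}
utils = {("umbrella", "rain"): 0, ("umbrella", "sun"): -1,
         ("no umbrella", "rain"): -10, ("no umbrella", "sun"): 1}

def expected_utility(action):
    return sum(p * utils[(action, outcome)]
               for outcome, p in probs[action].items())

print(max(probs, key=expected_utility))  # "umbrella": EU -0.7 vs. -2.3
```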

Comment author: [deleted] 20 June 2014 07:23:34PM * 2 points

You are of course correct about the concrete scenario of being Dutch-booked in a hypothetical gamble (and I am not a gambler, for reasons similar to this: we all know the house always wins!). However, if we're going to discard the Dutch Book criterion, then we need to replace it with some other desideratum for preventing self-contradictory preferences that cause no-win scenarios.

Even if your own mind comes preprogrammed with decision-making algorithms that can go into no-win scenarios under some conditions, as a conscious, self-patching human being you should recognize those failure modes and consciously employ other algorithms that avoid them.

I mean, let me put it this way: probabilities aside, if your decisions form a cyclic preference ordering rather than even a partial ordering, isn't there something rather severely bad about that?
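For concreteness, here is a minimal money-pump sketch (toy code, not anyone's actual decision procedure): an agent with the cyclic preferences A over B, B over C, and C over A pays a small fee for each preferred swap and can be walked around the cycle indefinitely.

```python
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # cyclic: A > B > C > A
fee = 1

holding, money = "A", 100
for offered in ["C", "B", "A"] * 3:            # walk the cycle three times
    if (offered, holding) in prefers:           # agent prefers the offer...
        holding, money = offered, money - fee   # ...and pays to trade up
print(holding, money)  # "A" 91: same item as at the start, 9 dollars poorer
```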

In response to comment by [deleted] on Against utility functions
Comment author: Qiaochu_Yuan 22 June 2014 06:26:33PM 1 point

> we need to replace it with some other desideratum for preventing self-contradictory preferences that cause no-win scenarios.

Why?
