AI Summer Fellows Program

9 abramdemski 06 August 2017 07:35AM

CFAR is running a free two-week program this September, aimed at increasing participants' ability to do technical research in AI alignment. Like the MIRI Summer Fellows Program, which ran for the past two years, this will include CFAR course material, plus content on AI alignment research and time to collaborate on research with other participants and researchers such as myself! It will be located in the SF Bay Area, September 8-25. See more information and apply here.

In response to Articles in Main
Comment author: abramdemski 30 November 2016 02:27:00AM 1 point [-]

I'll put in my vote for #2.

Comment author: Raemon 29 November 2016 09:09:20PM 7 points [-]

Yeah. One thing that'd be very counterintuitive for many is that "thought seriously for 5 minutes" is actually a surprisingly high bar. (i.e. most people do not do that at all).

I also wonder if it might be better to eschew vague labels like "confident" and instead issue more concrete statements like "80% confident this will be useful for X", in the interest of avoiding the problem you list in the first paragraph.

Integration with existing signaling games is an important concern. I do think it'd be valuable to shift our status norms to reflect "what useful labor actually looks like." For example, when someone says "I will think seriously about that for 5 minutes", I actually now have very positive associations with that - I take it to mean that, while it's not their top priority, they bothered (at all) to evaluate it in "serious mode."

That shift may or may not be achievable, but I think ideally our cultural norms / internal status games should help us learn what actually works, and give more transparency into how much time people actually spend thinking about things.

In response to comment by Raemon on Epistemic Effort
Comment author: abramdemski 30 November 2016 02:19:09AM 4 points [-]

I agree. My knee-jerk reaction "does not play well with signaling games" has a lot to do with how "thought about it for five minutes" looks to someone not familiar with the LW meme about that. This might address my other point as well: perhaps if people were used to seeing things like "thought for 5 minutes" and "did one google search" and so on, they would feel comfortable writing those things and it wouldn't make people self-conscious. Or maybe not, if (like me) they also think about how non-community-members would read the labels.

In response to Epistemic Effort
Comment author: abramdemski 29 November 2016 08:48:27PM 13 points [-]

I think this is an intriguing idea. It reminds me of the discussion of vague language in Superforecasting: the intelligence community put a lot of effort into optimizing the language in its reports, settling on words such as "possibly", "likely", and "almost certainly", only to later realize that they didn't know what those words meant (in terms of probabilities) even after discussing word choice quite a bit. Someone went around asking analysts what the words meant and got very different probabilities from different people. Similarly, being careful about describing epistemic status is likely better than not doing so, but the words may not have as clear a meaning as you think; describing what you actually did seems like a good way to keep yourself honest.
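To make the point concrete, here's a toy version of that elicitation as a sketch (the numbers are invented for illustration; they're not Tetlock's data):

```python
# Hypothetical elicitation: several analysts are asked what probability
# each verbal label means to them. All numbers are invented.
readings = {
    "possibly":         [0.20, 0.35, 0.50, 0.60],
    "likely":           [0.55, 0.65, 0.75, 0.90],
    "almost certainly": [0.80, 0.90, 0.95, 0.99],
}

for word, probs in readings.items():
    lo, hi = min(probs), max(probs)
    print(f'"{word}": {lo:.0%} to {hi:.0%}, depending on the analyst')
```

Even a made-up spread like 20% to 60% for "possibly" is enough to change what a report actually communicates, which is exactly the problem the vague labels were hiding.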

This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.'

This seems like an important failure mode. People may not be so interested in writing if they also have to indicate their amount of effort. :p

Another problem I see is: "epistemic effort" may not play as well with signaling games as "epistemic status". Putting your specific efforts out there, rather than a degree of confidence, can make something that is actually well-conceived look scatter-brained. For example, "thought about it for 5 minutes" on your post doesn't indicate the degree of support the idea has from background knowledge and experience; your actual post indicates that. But the real reasons you think something will work will often be hard to summarize in a small blurb, and will instead go into the content of the post itself.

I think what I'll do is keep using the "epistemic status" tag, starting with a vague status such as "confident" or "speculative", and then providing more detail with the notion of "epistemic effort" in mind.

Comment author: Thomas 29 November 2016 02:47:02PM 2 points [-]

I would like to read you in Main or elsewhere.

Preferably not about "rationality", but certainly about DT, AI, or something like that. Or, if you can sway the term "rationality" in your direction, so much the better.

Comment author: abramdemski 29 November 2016 07:01:05PM 2 points [-]

I'll try to taboo out the term "rationality" when convenient. I think LW has created a strong illusion in me that all of the things I listed, and more, are "the same subject" -- or rather, not just the same subject, but so similar that they can all be explained at once with a short argument. I spent a lot of time trying to articulate what that short argument would be, because that was the only way to dispel the illusion -- writing out all the myriad associations which my brain insisted were "the same thing". These days, that makes me much more biased toward splitting as opposed to lumping. But I suspect I'm still too biased toward lumping overall, so tabooing out "rationality" is probably a good way for me to avoid spurious lumping.

How can people write good LW articles?

7 abramdemski 29 November 2016 10:40AM

A comment by AnnaSalamon on her recent article:

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles than advice on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.

Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.

I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.
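To make the "collection of algorithms" view concrete, here's a quick simulation of one algorithm from that genre: the 1/e ("37%") stopping rule for the classic optimal-stopping problem, which Algorithms to Live By discusses. This is my own sketch of the standard setup, not code from the book:

```python
import random

random.seed(0)

def secretary_trial(n=100, look_fraction=0.37):
    """One round of the classic optimal-stopping ("secretary") problem:
    candidates arrive one at a time and rejections are final. Observe the
    first ~37% without committing, then accept the first candidate who
    beats everyone seen so far."""
    candidates = [random.random() for _ in range(n)]
    cutoff = int(n * look_fraction)
    best_seen = max(candidates[:cutoff])
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score == max(candidates)   # accepted: was it the best?
    return candidates[-1] == max(candidates)  # forced to take the last one

trials = 100_000
wins = sum(secretary_trial() for _ in range(trials))
print(f"found the single best candidate in {wins / trials:.1%} of trials")
# The 1/e rule succeeds ~37% of the time, versus 1% for a random pick
# out of 100 candidates.
```

Like decision theory, the rule gives a "rational principle" to follow exactly to the extent that its assumptions match your real situation -- which is the spirit of the pluralistic view.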

A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or in the world and try to make a model of it -- try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism, and Slaves to Fashion Signalling.

Comment author: CynicalOptimist 24 April 2016 12:56:48PM 2 points [-]

This is fair, because you're using the technique to redirect us back to the original morality issue.

But I also don't think that MBlume was completely evading the question. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger than to sacrifice an uninvolved stranger. (Remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)

This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.

Comment author: abramdemski 25 April 2016 12:21:49AM 0 points [-]

Agreed.

Meetup : West LA: Futarchy

1 abramdemski 30 October 2015 07:22AM

Discussion article for the meetup: West LA: Futarchy

WHEN: 04 November 2015 08:00:00PM (-0700)

WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064, USA

How To Find Us: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavilion on the second floor, right by the movie theaters. The entrance sign says "Lounge".

Parking: Free for three hours.

Discussion: This week we will attempt to tackle Robin Hanson's proposal for solving coordination problems on a massive scale, Futarchy. This will be a freestyle discussion, so it's probably best to at least read a little of the recommended reading to know what's going on. However, this is not required.

Recommended Reading:

Meetup : West LA: Bias Bias

1 abramdemski 27 October 2015 08:53AM

Discussion article for the meetup: West LA: Bias Bias

WHEN: 28 October 2015 07:00:00PM (-0700)

WHERE: Westside Pavilion Mall, 10850 Pico Blvd, Los Angeles, CA 90064

How to Find Us: We are meeting at the old location again this week, in the upstairs wine bar at Westside Pavilion.

Discussion: The phrase "bias bias" could mean many things. One might employ the term to point to the tendency to accuse others of bias before suspecting it in oneself. Or, as in this paper, it could refer to the tendency of statisticians to be overly concerned with eliminating statistical bias and under-concerned with variance. What I want to discuss is the risk that, when we observe other decision-makers from the outside, knowing less about the situation than they do, we will almost always find predictable irregularities in their decision-making which we cannot explain via our understanding of the situation. This will, I think, tend to be true whether they're "biased" in any significant sense or not. In other words: we're very likely to know less about the situation than the people making the decisions, and this is very likely to mislead us into thinking they're making biased decisions which are harming them, if we approach the question without sufficient awareness. This doesn't mean we can't assess bias, but it does sound a note of caution in doing so. Even in cases where the reasoning from our perspective seems very clear, the decision-maker may have other considerations to take into account.
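A minimal toy simulation of the mechanism (all numbers invented; this is a sketch of my own, not anything from the literature): each decision-maker sees a private payoff signal that the outside observer never does, so a large fraction of their fully-sensible choices look, from outside, like a systematic bias that is costing them.

```python
import random

random.seed(0)

N = 100_000
A_PAYOFF = 1.0             # option A's payoff: public knowledge
apparent_errors = 0
dm_total = 0.0             # what the decision-makers actually earn
observer_rule_total = 0.0  # what the observer's "debiased" rule would earn

for _ in range(N):
    # Option B's payoff is a private signal, uniform on [0, 1.8], so the
    # observer's best estimate is E[B] = 0.9 < 1.0. Only the
    # decision-maker sees the draw.
    b_payoff = random.uniform(0.0, 1.8)

    # Decision-maker: picks whichever option is better given the signal.
    if b_payoff > A_PAYOFF:
        dm_total += b_payoff
        apparent_errors += 1  # every B-choice looks like a mistake outside
    else:
        dm_total += A_PAYOFF

    # Observer's model says "always take A", since E[B] < E[A].
    observer_rule_total += A_PAYOFF

print(f"choices the observer flags as biased: {apparent_errors / N:.1%}")
print(f"decision-makers' average payoff:      {dm_total / N:.3f}")
print(f"payoff of the observer's rule:        {observer_rule_total / N:.3f}")
```

Here the observer flags roughly 44% of choices as irrational, yet the decision-makers outperform the observer's "corrected" policy -- the apparent bias is an artifact of the observer's missing information.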

Recommended Reading: I don't know of anything written specifically on this, but the recent post breaking Chesterton’s fence in the presence of bull seems relevant here.

No prior exposure to Less Wrong is required; all are welcome.

Comment author: Lumifer 13 October 2015 02:40:14PM 2 points [-]

Current genetic engineering, yes, but 50 or 100 years from now? Remember, we're talking not about what's viable now, but rather about what's plausible, and a unicorn is very plausible biologically -- it's merely technical difficulties which prevent us from creating one.

Comment author: abramdemski 13 October 2015 10:13:20PM 0 points [-]

Touché!

It seems worth considering that I might benefit from specifically practicing being imaginative, or otherwise modifying my "two modes" thought pattern.
