Less Wrong is a community blog devoted to refining the art of human rationality.

In response to comment by abramdemski on P: 0 <= P <= 1
Comment author: DragonGod 28 August 2017 10:55:17PM 1 point

I reject 0 and 1 for non-logical facts as well.
"I think, therefore I am" is a logical proof of my own existence, and as such, I assign a probability of 1 to the proposition: "I exist".

In response to comment by DragonGod on P: 0 <= P <= 1
Comment author: abramdemski 01 September 2017 08:22:45PM 0 points

Right; sorry for not phrasing that in a way that sounded like agreement with you. We should be less than totally certain about mathematical statements in real life, but when setting up the formalism for probability, we're "inside" math rather than outside of it; there isn't going to be a good argument for assigning less than probability 1 to logical truths. Only bad things happen when you try.

This does change a bit when we take logical uncertainty into account, but although we understand logical uncertainty better these days, there's not a super strong argument one way or the other in that setting -- you can formulate versions of logical induction which send probabilities to zero immediately when things get ruled out, and you can also formulate versions in which probabilities rapidly approach zero once something has been logically ruled out. The version which jumps to zero is a bit better, but no big theoretical advantage comes out of it afaik. And, in some abstract sense, the version which merely rapidly approaches zero is more prepared for "mistakes" from the deductive system -- it could handle a deductive system which occasionally withdrew faulty proofs.
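The contrast between the two update rules can be sketched with a toy example (my own illustration with made-up numbers, not an actual logical induction algorithm):

```python
def jump_to_zero(p, refuted):
    """Set the probability of a refuted statement to 0 immediately."""
    return 0.0 if refuted else p

def rapid_decay(p, refuted, rate=0.5):
    """Shrink the probability of a refuted statement geometrically,
    so it approaches 0 but never quite reaches it."""
    return p * rate if refuted else p

# A statement starts at 0.3 and gets ruled out at step 2.
p_jump, p_decay = 0.3, 0.3
for step in range(5):
    refuted = step >= 2
    p_jump = jump_to_zero(p_jump, refuted)
    p_decay = rapid_decay(p_decay, refuted)
# p_jump is now exactly 0; p_decay is small but positive.
```

One way to see the robustness point: if the deductive system later withdrew a faulty refutation, the decay version still holds a nonzero probability to update from, while the jump version has destroyed that information.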

In response to P: 0 <= P <= 1
Comment author: abramdemski 28 August 2017 10:12:57PM 2 points

This article was overly vicious and confrontational; I adopted such an attitude to minimise the bias in my perception of the original article based on the halo effect.

I accept that this is likely the best thing for you to do for debugging your own world-view, but there's a problematic group-epistemic question: it would be bad if a person could always justify arguing in a way that's biased against X by saying "I'm biased toward X, so I have to argue in a way that's biased against X."

To the extent that you can, I'd suggest that you steer toward de-biasing in a way that's closer to "re-deriving things from first principles"; IE, try to figure out how one would actually answer the question involved, and then do that, without particularly steering toward X or against X.

With respect to the object-level question: the same type of argument which supports the laws of probability also supports non-dogmatism (see theorem 4), IE, the rejection of probabilities zero or one for non-logical facts. So, I put this principle on the same level as the axioms of probability theory, but I do not extend it to things like "P(A or not A)=1", which don't fall to the same arguments.
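A toy sketch of why dogmatic probabilities are bad news (my own illustration, not the cited theorem): an expected-value maximizer who assigns P(E) = 0 to an empirical event prices any bet on E at zero, and so can be sold into arbitrarily large losses.

```python
def fair_price(p_event, payoff):
    """The most an expected-value maximizer considers a ticket
    paying `payoff` if the event occurs to be worth."""
    return p_event * payoff

# A dogmatic agent assigns probability 0 to an empirical event E.
dogmatic_p = 0.0

# It values a ticket paying $1,000,000 if E at exactly $0,
# so selling that ticket for a single cent looks like free money...
sale_price = 0.01
ticket_value = fair_price(dogmatic_p, 1_000_000)

# ...but if E happens anyway, the loss dwarfs the cent collected:
loss_if_E = 1_000_000 - sale_price
```

Since no amount of evidence can move a probability of exactly 0 or 1 under Bayesian conditioning, the agent can never correct this exposure, which is the force of the non-dogmatism result for non-logical facts.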

AI Summer Fellows Program

9 abramdemski 06 August 2017 07:35AM

CFAR is running a free two-week program this September, aimed at increasing participants' ability to do technical research in AI alignment. Like the MIRI Summer Fellows Program which ran the past two years, this will include CFAR course material, plus content on AI alignment research and time to collaborate on research with other participants and researchers such as myself! It will be located in the SF Bay area, September 8-25. See more information and apply here.

In response to Articles in Main
Comment author: abramdemski 30 November 2016 02:27:00AM 1 point

I'll put in my vote for #2.

Comment author: Raemon 29 November 2016 09:09:20PM 7 points

Yeah. One thing that'd be very counterintuitive for many is that "thought seriously for 5 minutes" is actually a surprisingly high bar. (i.e. most people do not do that at all).

I also wonder if it might be better to eschew vague labels like "confident" and instead issue more concrete statements like "80% confident this will be useful for X", in the interest of avoiding the problem you list in the first paragraph.

Integration with existing signaling games is an important concern. I do think it'd be valuable to shift our status norms to reflect "what useful labor actually looks like." For example, when someone says "I will think seriously about that for 5 minutes", I actually now have very positive associations with that - I take it to mean that, while it's not their top priority, they bothered (at all) to evaluate it in "serious mode."

That may or may not be achievable to shift, but I think ideally our cultural norms / internal status games should help us learn what actually works, and give more transparency on how much time people actually spend thinking about it.

In response to comment by Raemon on Epistemic Effort
Comment author: abramdemski 30 November 2016 02:19:09AM 4 points

I agree. My knee-jerk reaction "does not play well with signaling games" has a lot to do with how "thought about it for five minutes" looks to someone not familiar with the LW meme about that. This might address my other point as well: perhaps if people were used to seeing things like "thought for 5 minutes" and "did one google search" and so on, they would feel comfortable writing those things and it wouldn't make people self-conscious. Or maybe not, if (like me) they also think about how non-community-members would read the labels.

In response to Epistemic Effort
Comment author: abramdemski 29 November 2016 08:48:27PM 13 points

I think this is an intriguing idea. It reminds me of the discussion of vague language in Superforecasting: the intelligence community put a lot of effort into optimizing language in its reports, such as "possibly", "likely", "almost certainly", etc., only to later realize that they didn't know what those words meant (in terms of probabilities) even after discussing word choice quite a bit. Someone went around asking analysts what was meant by the words and got very different probabilities from different people. Similarly, being careful about describing epistemic status is likely better than not doing so, but the words may not have as clear a meaning as you think; describing what you actually did seems like a good way to keep yourself honest.
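The spread is easy to picture; here are made-up numbers for illustration (the book reports a similar phenomenon, not these values):

```python
# Hypothetical survey: what probability does each analyst attach to "likely"?
readings = {"analyst_a": 0.55, "analyst_b": 0.70, "analyst_c": 0.85, "analyst_d": 0.60}

low, high = min(readings.values()), max(readings.values())
spread = high - low
print(f'"likely" spans {low:.0%} to {high:.0%} (spread of {spread:.0%})')
```

A 30-point spread means two readers of the same report can walk away with very different pictures of the world, which is exactly the failure mode vague epistemic-status labels inherit.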

This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.'

This seems like an important failure mode. People may not be so interested in writing if they also have to indicate their amount of effort. :p

Another problem I see is: "epistemic effort" may not play as well with signalling games as "epistemic status". Putting your specific efforts out there rather than a degree of confidence can make something look scatter-brained that is actually well-conceived. For example, "thought about it for 5 minutes" on your post doesn't indicate the degree of support the idea has from background knowledge and experience. Your actual post indicates that. But, the real reasons you think something will work will often be hard to summarize in a small blurb and will instead go into the content of the post itself.

I think what I'll do is keep using the "epistemic status" tag, starting with a vague status such as "confident" or "speculative", and then providing more detail with the notion of "epistemic effort" in mind.

Comment author: Thomas 29 November 2016 02:47:02PM 2 points

I would like to read you in the Main or elsewhere.

Preferably not about "rationality". But certainly about DT, AI or something like that. Or, if you can sway the term "rationality" in your direction, so much better.

Comment author: abramdemski 29 November 2016 07:01:05PM 2 points

I'll try and taboo out the term "rationality" when convenient. I think LW has created a strong illusion in me that all of the things I listed and more are "the same subject" -- or rather, not just the same subject, but so similar that they can all be explained at once with a short argument. I spent a lot of time trying to articulate what that short argument would be, because that was the only way to dispel the illusion -- writing out all the myriad associations which my brain insisted were "the same thing". These days, that makes me much more biased toward splitting as opposed to lumping. But, I suspect I'm still too biased toward lumping overall. So tabooing out rationality is probably a good way to avoid spurious lumping for me.

How can people write good LW articles?

7 abramdemski 29 November 2016 10:40AM

A comment by AnnaSalamon on her recent article:

good intellectual content

Yes. I wonder if there are somehow spreadable habits of thinking (or of "reading while digesting/synthesizing/blog posting", or ...) that could themselves be written up, in order to create more ability from more folks to add good content.

Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn't presume to write "How To Write Good LessWrong Articles", but perhaps I'm up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it's still possible to write good new articles and only a little on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there's only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false, if you go and look at the mountain of literature related to subjects in the sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have only been dealt with in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on -- and that this community would have an interesting spin on those things.

Moreover, I think that "rationality isn't solved" (simply put). Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out -- you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs UDT and the question of logical uncertainty and so on; but you can't do anything meaningful about that. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or are you deferring to an illusion of group understanding which doesn't really exist? If something is not quite clear to you, there's a decent chance that it's not quite clear to a lot of people; don't make the mistake of thinking everyone understands but you. And don't make the mistake of thinking you understand something that you haven't tried to explain from the start.

I'd encourage a certain kind of pluralistic view of rationality. We don't have one big equation explaining what a rational agent would look like -- there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power and dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm -- one unifying decision theory -- is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I'm thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms which apply to different aspects of life. Like decision theory, such algorithms give a kind of "rational principle" which we can attempt to follow -- to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else, would do worse than a decision-theoretic calculation). But as finite beings, we can't work it all out from decision theory alone -- and anyway, as I've been harping on, decision theory itself is just a rag-tag collection of proposed algorithms upon closer inspection. So, we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than a project that could be finished.

A second, more introspective way of writing LessWrong articles (my first being "dive into the literature"), which I think has a good track record: take a close look at something you see happening in your life or the world and try to make a model of it, try to explain it at a more algorithmic level. I'm thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.

Comment author: CynicalOptimist 24 April 2016 12:56:48PM 2 points

This is fair, because you're using the technique to redirect us back to the original morality issue.

But I also don't think that MBlume was completely evading the question. The question was about ethical principles, and his response does represent an exploration of ethical principles. MBlume suggests that it's more ethical to sacrifice one of the lives that was already in danger than to sacrifice an uninvolved stranger. (Remember, from a strict utilitarian view, both solutions leave one person dead, so this is definitely a different moral principle.)

This technique is good for stopping people from evading the question. But some evasions are more appropriate than others.

Comment author: abramdemski 25 April 2016 12:21:49AM 0 points


Meetup : West LA: Futarchy

1 abramdemski 30 October 2015 07:22AM

Discussion article for the meetup : West LA: Futarchy

WHEN: 04 November 2015 08:00:00PM (-0700)

WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064, USA

How To Find Us: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavilion on the second floor, right by the movie theaters. The entrance sign says "Lounge".

Parking: Free for three hours.

Discussion: This week we will attempt to tackle Robin Hanson's proposal for solving coordination problems on a massive scale, Futarchy. This will be a freestyle discussion, so it's probably best to at least read a little of the recommended reading to know what's going on. However, this is not required.

Recommended Reading:

