This is a link post for:

I'm not going to quote the content at the link itself. [Should I?]

David Chapman – the author of the linked post – claims that "meta-rational" methods are necessary to 'reason reasonably'. I, and I think many other people in the broader rationalist community, have objected to that general distinction. I still stand by that objection, at least terminologically.

But with this post and other pages he's recently published at his site Meaningness, I think I'm better understanding the points he was gesturing or hinting at with what he describes as meta-rationality – and I think that's because 'rationality', in his understanding, is grounded in the actual behavior people perform. The notion that Richard Feynman is quoted as insisting on, of having to "work on paper", or the idea of 'repair' – these are almost certainly real and important facts about how we actually reason.

I think there's less and less object-level disagreement between him and me, even about something like 'Bayesian reasoning'. His recent writing has crystallized for me the sense that he's really on to something, and dispelled the notion that we shared some kind of fundamental disagreement. It seems relatively unimportant whether I or others subsume 'meta-rationality' within 'rationality' (or not).

I'm less sure how much of this applies to arbitrary reasoners or artificial intelligence, e.g. an AI could maintain a checklist 'internally' instead of relying on an external physical device to perform the same function, but the ideas he's discussing seem insightful and true to me about our own rational practices nonetheless.
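To make the 'internal checklist' idea concrete, here's a minimal sketch (in Python, with hypothetical names – nothing in the linked post specifies an implementation) of an agent carrying the error-catching function of a paper checklist in its own state rather than on an external device:

```python
# Hypothetical sketch: an agent that internalizes the error-catching
# function of a paper checklist as part of its own state.

class InternalChecklist:
    """The same 'cognitive prosthesis' a paper checklist provides,
    held in the agent's memory instead of on an external device."""

    def __init__(self, steps):
        self.status = {step: False for step in steps}

    def mark_done(self, step):
        self.status[step] = True

    def incomplete(self):
        return [step for step, done in self.status.items() if not done]

preflight = InternalChecklist(["check assumptions", "work on paper", "repair errors"])
preflight.mark_done("check assumptions")
# Before acting, the agent audits itself the way a pilot audits a paper list:
print(preflight.incomplete())  # ['work on paper', 'repair errors']
```

The point being that the function – forcing an explicit audit of what has and hasn't been done – is what matters, not whether the list lives on paper or in memory.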

Comments

The stuff on "cognitive prostheses" reminded me of Novum Organum.

Written before reading the linked post (but after reading the link post).


Contents:

Of Style

Rationality

AI

Footnotes


Of Style

I'm not going to quote the content at the link itself. [Should I?]

In link posts, the content is often quoted wholesale (beginning to end) or partially (the first X paragraphs or so) – though the post starting with a video might have made these options more difficult than usual.

or the idea of 'repair'

This is a reference to the linked article.


Rationality

David Chapman – the author of the linked post – claims that "meta-rational" methods are necessary to 'reason reasonably'. I, and I think many other people in the broader rationalist community, have objected to that general distinction. I still stand by that objection, at least terminologically.

This could be a disagreement about the name, or what the name should be.*

How we think people should reason. (Rationalism?)

How people should reason. (Empiricism?)

I think there's less and less object-level disagreement between him and me, even about something like 'Bayesian reasoning'.

There's disagreement around Bayesian reasoning on LW, some of it relating to logical inductors, which might be relevant (although it's not clear) – the fact that 'the credence you'd give to different hypotheses changes as you think about it' has been mentioned in his work.
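To make that concrete, here's a toy example (generic, not drawn from Chapman's writing or from any logical-inductor implementation) of credence in an already-determined logical fact changing purely from computation, with no new outside evidence:

```python
# Toy illustration of logical uncertainty: my credence in a fixed logical
# fact changes purely because I spend more time computing, not because I
# see new external evidence.

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

claim = 2**31 - 1  # Is this prime? The answer is already determined.
credence = 0.5     # Before thinking: I genuinely don't know.
print("before thinking:", credence)

credence = 1.0 if is_prime(claim) else 0.0  # After thinking: certainty.
print("after thinking: ", credence)
```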

His recent writing has crystallized for me the sense that he's really on to something, and dispelled the notion that we shared some kind of fundamental disagreement.

David Chapman's thoughts on AI might differ from yours.


AI

I'm less sure how much of this applies to arbitrary reasoners or artificial intelligence, e.g. an AI could maintain a checklist 'internally' instead of relying on an external physical device to perform the same function,

The post "AlphaStar: Impressive for RL progress, not for AGI progress" could be loosely interpreted as "AI have memory problems":

The overhyped part is that AlphaStar doesn't really do the "strategy" part of real-time strategy. Each race has a few solid builds that it executes at GM level, and the unit control is fantastic, but the replays don't look creative or even especially reactive to opponent strategies.
That's because there's no representation of causal thinking - "if I did X then they could do Y, so I'd better do X' instead". Instead there are many agents evolving together, and if there's an agent evolving to try Y then the agents doing X will be replaced with agents that do X'. But to explore as much as humans do of the game tree of viable strategies, this approach could take an amount of computing resources that not even today's DeepMind could afford.
(This lack of causal reasoning especially shows up in building placement, where the consequences of locating any one building here or there are minor, but the consequences of your overall SimCity are major for how your units and your opponents' units would fare if they attacked you. In one comical case, AlphaStar had surrounded the units it was building with its own factories so that they couldn't get out to reach the rest of the map. Rather than lifting the buildings to let the units out, which is possible for Terran, it destroyed one building and then immediately began rebuilding it before it could move the units out!)
...
The end result cleaned up against weak players, performed well against good players, but practically never took a game against the top few players. I think that DeepMind realized they'd need another breakthrough to do what they did to Go, and decided to throw in the towel while making it look like they were claiming victory. (Key quote: "Prof Silver said the lab 'may rest at this point', rather than try to get AlphaStar to the level of the very elite players.")

[emphasis mine] The whole post is worth reading.
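The "if I did X then they could do Y, so I'd better do X' instead" pattern in the quote is essentially two-ply game-tree search. Here's a minimal sketch (a made-up payoff table and a generic minimax step – not anything AlphaStar actually does):

```python
# Toy illustration of the causal/counterfactual reasoning the quote says
# AlphaStar lacks: "if I did X then they could do Y, so I'd better do X' instead".
# The payoffs are invented; this is generic two-ply minimax, not DeepMind's code.

payoff = {  # payoff[my_move][their_reply] = value to me
    "X":  {"Y": -5, "Z": +3},   # X is great unless they answer with Y
    "X'": {"Y": +1, "Z": +1},   # X' is safe against either reply
}

def best_move(payoff):
    # For each of my moves, assume the opponent picks their best reply
    # (worst case for me), then pick the move with the best worst case.
    return max(payoff, key=lambda m: min(payoff[m].values()))

print(best_move(payoff))  # "X'" – chosen *because* of what they could do to X
```

Population-based training explores the same space, but by selection over many games between many agents, rather than by any single agent representing the counterfactual.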


Footnotes

*Perhaps the distinction is

-Rationality is what you should do.

-Meta-rationality is what you should do in order to "make rationality work".

While these two things can be combined under one umbrella, making definitions smaller:

-Increases clarity (of discussion)

-Makes it easier to talk about components

-Makes it clear when all membership criteria for a category have been met.

-Might help with teaching/retention

I love this comment!

In link posts, the content is often quoted wholesale (beginning to end) or partially (the first X paragraphs or so) – though the post starting with a video might have made these options more difficult than usual.

I didn't know that – thanks for the feedback!

Should I edit my post to quote the referenced content?

David Chapman's thoughts on AI might differ from yours.

At sufficient detail, everyone's thoughts about anything (sufficiently complex) differ from everyone else's. But I don't think David Chapman and I have any fundamental disagreements about AI.

"AI have memory problems"

Ooooh! That's a perfectly concise form of a criticism I've had of neural network architectures for a long time. The networks certainly are a form of memory themselves, but not really a history, i.e. a record of distinct and relatively discrete events or entities. Our own minds certainly seem to have that kind of memory, and it seems very hard for an arbitrary intelligent reasoner to NOT have something similar (if not exactly this).
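A minimal sketch of the distinction, with hypothetical names just to make 'memory vs. history' concrete: network weights blend all experience into one set of parameters, while an episodic store keeps distinct, individually retrievable events:

```python
# Hypothetical sketch of "memory vs. history": network weights blend all
# experience together, while an episodic store keeps distinct, retrievable
# events. Names and structure are illustrative only.

from dataclasses import dataclass

@dataclass
class Episode:
    when: int          # discrete time step
    what: str          # the event itself
    outcome: float     # how it went

class EpisodicStore:
    """A history: discrete events that can be recalled individually."""
    def __init__(self):
        self.episodes = []

    def record(self, episode):
        self.episodes.append(episode)

    def recall(self, keyword):
        # Retrieve specific past events, rather than a blended average.
        return [e for e in self.episodes if keyword in e.what]

store = EpisodicStore()
store.record(Episode(1, "tried build X against opponent", -5.0))
store.record(Episode(2, "tried build X' against opponent", +1.0))
print(store.recall("X'"))  # the particular event, not a weight update
```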

The quoted text you included is a perfect example of this kind of thing too; thanks for including it.

Isn't there evidence that human brains/minds have what is effectively a dedicated 'causal reasoning' unit/module? It probably also relies on the 'thing memory' unit(s)/module(s) too, though.

  • Perhaps the distinction is:
  • Rationality is what you should do.
  • Meta-rationality is what you should do in order to "make rationality work".

While these two things can be combined under one umbrella, making definitions smaller:

  • Increases clarity (of discussion)
  • Makes it easier to talk about components
  • Makes it clear when all membership criteria for a category have been met.
  • Might help with teaching/retention

As I mentioned, or implied, in this post, I'm indifferent about the terminology. But I like all of your points and think they're good reasons to make the distinction that Chapman does. I'm going to consider doing the same!

Isn't there evidence that human brains/minds have what is effectively a dedicated 'causal reasoning' unit/module? It probably also relies on the 'thing memory' unit(s)/module(s) too, though.

I'm not an expert on neuroscience. I'm not sure where those things fall on the continuum between modular and integrated. (I have a suspicion they're both.)

Should I edit my post to quote the referenced content?

It's a particular style/format[1]; using it isn't required. There are a couple of styles around this that I've noticed:

1) A post which consists of a link, and maybe material on what it is. This is done by describing it, with an intended audience of people who haven't read it. (The low-effort way of doing this is to copy the first few paragraphs.)

2) Writing a post with a (single) pre-requisite, which is linked to.


What makes the two different is that 1 is meant to be read before the linked post, while 2 is meant to be read after it [2].

I brought it up because I read your post, and then the linked post, and then I read your post again. 1 and 2 can be combined, but it wasn't clear whether your post came before the link or after, and it wasn't clearly split into two parts along those lines.


[1] I am not familiar with its origin.

[2] Except when 2 is a quote, and skipping to the link doesn't miss anything. The discussion/comments on a post of either type may be indistinguishable, including discussions which require having read the linked post to understand. Sometimes a post which originally just consists of a link might get comments like "what's the linked post about".

Thanks!