There are now at least two academic research groups in Germany working on rationality, in the sense we use the word here, that seem to be little known in the US rationality community. The point of this post is to tell you that they exist, in case you didn't already know.

There's the Rationality Enhancement group led by Falk Lieder at the Max Planck Institute in Tübingen, and there's a group on Adaptive Rationality at the Max Planck Institute in Berlin.

When Falk Lieder was at our European Community Weekend, he repeatedly said that he's interested in collaborating with the wider rationality community. His group has a list of publications and also a YouTube channel that presents a few of its ideas.

The Adaptive Rationality group has decided to describe techniques we would likely call applied rationality techniques as "boosting" decision-making, in contrast to the academic literature on nudging. I think it's worth exploring whether we should also adopt their term for the cluster of techniques like Double Crux.

dxu:

This seems cool! Strongly upvoted for signal-boosting.

Cool, thanks for sharing.

I posted about my academic research interests here. Do you know their research well enough to give input on whether my interests would be compatible? I would love to find a way to do my PhD in Europe, especially Germany.

Your post suggests that your goal is to do research that's meant to influence AI. As far as I understand the two groups, their goals focus on improving human rationality.

My mental model of Falk Lieder would likely say something like: "The operations research team leader background is interesting. Did you find a way to apply findings from computational game theory / cognitive science / system modeling / causal inference in a way that you believe helps people in your organization make better decisions? If so, it would be great to study in an academically rigorous way whether those interventions lead to better outcomes."

Ahh, I think I didn't think through what "rationality enhancement" might mean; perhaps my own recent search and the AI context of Yudkowsky's original intent skewed me a little. I was thinking of something like "understanding and applying concepts of rationality" in a way that might include "anticipating misaligned AI" or "anticipating AI-human feedback responses".

I like the way you've framed what's probably the useful question. I'll need to think about that a bit more.