Comments

I am honored to be part of enabling more people from around the world to contribute to the safe and responsible development of AI. 

Thanks for posting.

The hindrances you mention seem relevant but also closely interrelated. This one in particular stood out:
"Poorly-tested implementation: A disconnection between engaging educational content and its practical application, underscoring the importance of field-tested guidance that works in varied real-life situations."

I'm excited to see advice that has been tested by working closely with individuals. It reminds me of a claim Scott Miller (a major figure in evidence-based psychotherapy) made on a podcast (paraphrasing): "We have an implementation problem."

Great! I'd expect most people on there are. I know for sure that Paul Rohde and James Norris (the founder) are aware. My rate depends on the people I work with, but $200-$300 is standard.

Thank you. This is a really excellent post. I'd like to add a few resources and providers:
1. EA Mental Health Navigator: https://www.mentalhealthnavigator.co.uk/
2. The Navigator's overview of providers (not all of them are deeply familiar with alignment): https://www.mentalhealthnavigator.co.uk/providers
3. Upgradable has some providers who are quite well informed about alignment: https://www.upgradable.org/
4. If permissible, I'd like to add myself as a provider (coach), though I'm not taking on any coachees at present.

So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’

Re-reading this after four years, it is still so elegantly beautiful. I don't understand why this wasn't part of the Sequences Highlights. I would have put it under "thinking better on purpose" or "the laws governing beliefs".

Interesting that you seem to see rationality (as opposed to traditional rationality) as a more effective and efficient pursuit of truth (~epistemic rationality). In that sense, it seems to be doing for truth-seeking something similar to what EA is trying to do for altruism.