Hi Brendan!

I agree with much of RobertM's comment; I read the same essay and came away confused.

One thing I think might be valuable: explaining your object-level criticisms of particular arguments for AI safety advanced by researchers in the field - for instance, this one or this one.

Given that there are (what I think are) strong arguments for catastrophic risks from AI, it seems important to engage with them and explain where you disagree - especially because the Cosmos Institute's approach seems partially shaped by rejecting AI risk narratives.


Not to suggest that you've done this, but I think it's a fairly common mistake to assess conceptual engineering's merits as a metaphilosophy by looking only at papers that include the words 'conceptual engineering', many of which are quite bad. There's a section of Fixing Language (by Cappelen) that gives examples of actual philosophical contributions, some of which predate the term.

Two papers that I think are important - and count as conceptual engineering, by my lights - are The Extended Mind and Grace and Alienation.