I'm particularly interested in sustainable collaboration and the long-term future of value. I'd love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy encountering new perspectives and growing my understanding of the world and the people in it. I also love to read - let me know your suggestions! In no particular order, here are some I've enjoyed recently:
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
People who've got to know me only recently are sometimes surprised to learn that I'm a pretty handy trumpeter and hornist.
Improving AI strategic competence (relative to their technological abilities) may be of paramount importance (so that they can help us with strategic thinking and/or avoid making disastrous mistakes of their own), but this is clearly even more of a double-edged sword than AI philosophical competence.
I think you can reduce this tradeoff by explicitly and deliberately aiming for AI 'tools' for improving human (group) strategic competence. It sounds subtle, but I think it has quite different connotations and implications for what you actually go and do!
(Interested to know if you have a taxonomy here which you feel is approaching MECE? I think this is more like a survey - which is still helpful.)
p.s. I agree that your 'Agentic' and 'Process' are the same thing, and I expected 'Agentic' to be what I just called 'Adversarial' (which I think is a better name for it).
Adversarial selection? i.e. another agent wants X, believes they'll get more X if I believe A, believes I'll believe A if I see O, and shows me O.
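To make that selection effect concrete, here's a toy simulation (all names and numbers are my own illustration): the adversary wants me to believe A ('the coin is biased towards heads'), so out of everything it observes it shows me only the heads, and I naively update as if each shown flip were a random sample.

```python
# Toy sketch of adversarial selection skewing a naive Bayesian update.
import random

random.seed(0)

TRUE_P_HEADS = 0.5                               # the coin is actually fair
flips = [random.random() < TRUE_P_HEADS for _ in range(1000)]
shown = [f for f in flips if f]                  # adversary's filter: heads only

# My naive posterior over p(heads) on a coarse grid, starting from a uniform prior,
# treating each shown flip as if it were unfiltered evidence.
grid = [i / 100 for i in range(1, 100)]
weights = [1.0] * len(grid)
for f in shown:
    weights = [w * (p if f else 1 - p) for w, p in zip(weights, grid)]
total = sum(weights)
weights = [w / total for w in weights]

map_estimate = grid[max(range(len(grid)), key=lambda i: weights[i])]
print(f"true p(heads) = {TRUE_P_HEADS}, naive MAP estimate = {map_estimate}")
# -> the MAP lands near 0.99: the selection policy, not the coin, did the work.
```

The same shape covers citations, screenshots, demos, etc.: the filter between the world and my evidence is itself something 'another' gets to choose.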
A complementary angle: we shouldn't be arguing over whether or not we're in for a rough ride; we should be figuring out how not to have one.
I suspect more people would be willing (both empirically and theoretically) to get behind 'ruthless consequentialist maximisers are one extreme of a spectrum which gets increasingly scary and dangerous; it would be bad if those got unleashed'.
Sure, skeptics can still argue that this just won't happen even if we sit back and relax. But I think then it's clearer that they're probably making a mistake (since origin stories for ruthless consequentialist maximisers are many and disjunctive). So the debate becomes 'which sources of supercompetent ruthless consequentialist maximisers are most likely and what options exist to curtail that?'.
The human intuition that treating other humans as a resource to be callously manipulated and exploited, just like a car engine or any other complex mechanism in their environment, is a weird anomaly rather than the obvious default.
e.g. as is the typical human response to people who are far away (both physically and conceptually, such that their approval isn't salient or anticipated), i.e. 'the outgroup'.
For what it's worth, I've considered unsubscribing from several Inkhaven participants (on LW and elsewhere) because of the low quality, though I think I endorse the 'practice form intensely' manoeuvre in general and feel some sort of 'praxis envy'/inspiration. In practice I have mostly skimmed or ignored some posts, read others, and hope that the participants' writing will be better for it in future. I'm definitely not representative of the general internet denizenry, though I may be more representative of the LW audience (and perhaps even more so of the LW target audience).
I appreciate that this message has the germ of '...and this is an abominable status quo', Ruby (even if you wouldn't put it that way).
How would more data and software live in reality?
A combination of more pervasive and reliable provenance annotation (citations, signed raw observations/recordings, trust-weighted endorsement of provenance claims, ...) and discourse mapping (connecting supporting and contradictory arguments and evidence, clustering claims semantically, navigating the emergent graph) would move some way towards this. FLF is among the organisations working toward boosting societal collective intelligence by developing this sort of tech. Many strategy and design variables to resolve! And questions of distribution/demand. But the design space looks newly feasible [1] and ripe for exploration; a toy sketch of the kind of data structures involved is below.
For example, through massive cheap clerical labour from suitably scaffolded LM agents, and semantically-sensitive software (again via LMs). ↩︎
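To make 'provenance annotation' and 'discourse mapping' slightly more concrete, here's a minimal sketch (my own illustration, not FLF's design; every name and number is hypothetical) of signed raw observations, trust-weighted endorsements, and a small discourse graph:

```python
# Toy data structures for provenance-annotated claims and a discourse graph.
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass(frozen=True)
class Observation:
    """A raw observation with a (stand-in) signature over its content."""
    content: str
    source: str

    @property
    def signature(self) -> str:
        # Placeholder for a real cryptographic signature by `source`.
        return sha256(f"{self.source}:{self.content}".encode()).hexdigest()

@dataclass
class Claim:
    text: str
    provenance: list[Observation] = field(default_factory=list)
    endorsements: dict[str, float] = field(default_factory=dict)  # endorser -> trust weight
    supports: list["Claim"] = field(default_factory=list)
    contradicts: list["Claim"] = field(default_factory=list)

    def endorsement_score(self) -> float:
        """Trust-weighted endorsement of this claim's provenance."""
        return sum(self.endorsements.values())

# Build a tiny discourse graph: two claims, linked as contradictory.
obs = Observation("sensor log 2025-01-01T00:00Z ...", source="weather-station-7")
a = Claim("It rained overnight.", provenance=[obs], endorsements={"alice": 0.9})
b = Claim("The ground was dry this morning.", endorsements={"bob": 0.4})
a.contradicts.append(b)
b.contradicts.append(a)

print(a.endorsement_score(), b.endorsement_score())  # 0.9 0.4
print(obs.signature[:16])  # stable content-addressed id for the raw record
```

Content-addressing the raw record means any downstream claim can cite it unambiguously, and endorsements attach to provenance rather than to a claim's popularity; a real system would need actual signatures and identity/trust infrastructure.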
Yudkowsky's 2008 AI as a Positive and Negative Factor in Global Risk is a pretty good read, both for the content (which is excellent in some ways and easy to critique in others) and for the historical interest: it's useful for litigating the question of what MIRI was aiming at around then, it's interesting how much of the subsequent dynamic Yudkowsky anticipated or missed, and it's interesting to inhabit 2008 for a bit and update on the empirical observations since then.