This year's Spring ACX Meetups Everywhere event in Nijmegen.
Location: the sports bar "The Yard" on the upper floor of the Radboud Sports Centre – https://plus.codes/9F37RV98+9X
Group Link: https://chat.whatsapp.com/GFDbgvQpgvRKL1DMpASryl
Contact: stian.sgronlund@outlook.com
Do we know that it's the recognising-the-content-of-images part of the task that is difficult? IIRC, a couple of years ago someone made a GeoGuessr-style "find each other" game where two people could "load into" Google Maps and try to meet up. I played it with a friend in a city we both knew. This seems like it might separate (some of) the planning part of the task from the image-recognition part. I'll try to find the game (IIRC it was a student project, maybe on itch.io, probably not on Steam).
I think the frame of "trying to 'solve the whole future' is akin to gripping too hard" might be relevant to my changing my mind about research directions. But it still doesn't present a positive vision for why one should work on e.g. incremental prosaic methods, so it's even less clear to me which areas to focus on. I had been focusing on "actually solving the problem", in the "safety at all scales" or "permanent safety" senses of the term, but I think even those directions might need to be at least minimally developed in a minimally superintelligence-assisted future? The article gives some clarity of vision for identifying which research directions might do meaningful "solving", but it still doesn't uniquely pick out anything.
Sorry for the moderately rambly comment.
This year's Fall ACX Meetups Everywhere event in Nijmegen.
Location: Sport Cafe "The Yard", Elinor Ostromgebouw, Heyendaalseweg 141, 6525 AJ Nijmegen. I will bring some sort of sign that says "ACX/Rationality Meetup Nijmegen" – https://plus.codes/9F37RV98+CX
RSVPs appreciated but not required
Contact: stian.sgronlund@outlook.com
This year's Spring ACX Meetups Everywhere event in Nijmegen.
Location: The Yard Sportcafe in the Elinor Ostromgebouw, possibly moving outside if the weather is nice. – https://plus.codes/9F37RV96+GX
Group Link: No dedicated place yet, but you can join the EA Nijmegen WhatsApp group through https://www.eanijmegen.nl/
Contact: stian.sgronlund@outlook.com
Some researchers at my university have in the past expressed extreme skepticism about AGI and the safety research field, and recently released a preprint taking a stab at the supposed "inevitability of AGI". In this 'journal club' post I take a look at their article, and end up thinking that a) they have a point, and AGI might be farther away than I previously thought, and b) they actually make a very AI-safety-like argument in the article, which I'm not sure they realised.
[Epistemic state: posting first drafts in order to produce better thoughts]
Some people argue that humanity is currently not co-opting everything, and that this is evidence that an AI would not necessarily co-opt everything. While the argument is logically true as stated ("there exist AI systems which would not co-opt everything"), in practice it is an abuse of probabilities and a gross anthropomorphising of systems which are not necessarily like humans (and which we would have to work to make like humans).
Allow me to explain. In a recent podcast appearance, Dwarkesh Patel gave the example that "spruce trees are still around", and that humans were likely to keep...