Yeah, I would say AI is more like the popularization of Photoshop in photography, or CGI in movies and animation. Almost any work you can do with AI is the result of taking existing pieces from the medium, training the AI on them, and then using it to generate more. So it's effectively a stochastic editor or cognitive harness that helps you make the art. It shines in a few areas, like those ambiguous pictures that are two things at once, but ultimately it's like a really large brush that makes everything you paint average.
Wow this was a very well-written post, really put the fear of confounders in me as a humble ML student.
I took the survey
I think Canada still has a pretty outsized impact on AI, especially considering how many researchers and companies came out of places like University of Toronto or UWaterloo.
Additionally, I think that if we at least had an example of what good AI policy looks like, implemented successfully in one country, it could serve as proof to other countries that good AI regulation is possible and useful. In that regard, I think lobbying for well-thought-out AI regulation at the national level is really important, and it's not necessary to focus on the international level to get traction.
I recently worked at the Canadian Department of National Defence's newly established AI Centre, and I was surprised that one of the tasks of the policy team was an AI safety report, in addition to other generic governance and responsible AI work. Admittedly, the AI safety report mostly focused on cybersecurity and CBRN risks, but some loss-of-control issues were mentioned as well. It suggested to me that it's possible to integrate AI safety by working inside a department, pushing projects, or talking to director-level people.
Guesses:
Bounty, magical thinking, bummer, clear division/separation, +1, made me angry, long thoughts/time spent thinking, sage, exposed an error, good communication, good post smell?/vibes, broken argument, strong argument, moved my position on the topic
There are some models on HuggingFace that do automatic PII redaction; I've been working on a project that uses them to automate redaction for documents. AI4privacy's models and Microsoft Presidio have been helpful.
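For anyone curious what the redaction step looks like mechanically, here's a toy sketch using hand-written regex patterns. It's only an illustration: the AI4privacy models and Presidio use NER-based entity detection rather than patterns like these, but the replace-with-placeholder step is the same idea.

```python
import re

# Toy detectors, for illustration only. Real pipelines (AI4privacy
# models, Presidio) detect entities with NER models, not regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected span with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309."))
# -> Contact [EMAIL] or [PHONE].
```

The nice property of placeholder-style redaction is that the output stays readable and you can audit which entity types were removed.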
You might find some puzzle games useful. In particular, Understand is a game that was discussed on here as good for learning how to test hypotheses and empirically deduce patterns, similar to your Baba Is You experiments.
Any way that we can easily get back our own results from the survey? I know you can sometimes get a copy of your responses when you submit a Google form.
What happens to the general Lightcone portfolio if you don't meet a fundraising target, either this year or a future year?
For concreteness, say you miss the $1M target by $200K.
Understand is a puzzle game that is basically what you describe as an Epistemic Roguelike, in that you deduce a different ruleset for every set of levels. It's not an actual roguelike though, purely focused on the puzzle of figuring out rules, which are limited to a simple grid with shapes.
I would say that, as something exploring a relatively unplumbed space of content, Epistemic Roguelikes are more likely to be interesting in a time when AI can make average copies of any existing content and mainly struggles with new concepts.
Correspondingly, I think that D&D Sci might be less useful now that you could plausibly automate a large chunk of the process of creating scenarios? That's my impression based on checking out the posts, although I haven't actually completed one.