video game companies can be extremely well-aligned with delivering a positive experience for their users
This doesn't seem obvious to me; video game companies are incentivized to make games that are as addictive as possible without putting off new users or provoking other backlash.
Ok, this is super not the point of your post, but now I can't help thinking about how cool it would be to have an AI-assisted educational video game about improving agriculture. There are so many useful ideas around improving agriculture that rural villages around the world could benefit from knowing. Stuff like how, where, and why to build 'sand dams' (small dams back-filled upstream with sand) in dry rocky gullies for harvesting infrequent high-overland-flow rain events. Or turning carbonaceous crop waste into charcoal rather than ash, then mixing it with nitrogen-rich compost for a year to make biochar. That gives big soil fertility gains and is also great for reducing greenhouse gases via carbon capture.
Along the same lines of thought, I think the most important automated agriculture tool humanity could develop and deploy wouldn't be a farm tool at all. It would be a slow-and-steady solar-powered ATV-sized digging robot which could work mostly-self-supervised to build and maintain rainwater capture facilities (sand dams, swales, small reservoirs, dry wells, trincheras, gabions, etc.) up-watershed of farms in semi-arid areas.
Why not build an in-person community and use that as a source of data and a playground for experiments?
Also available on the EA Forum.
Preceded by: Encultured AI Pre-planning, Part 1: Enabling New Benchmarks
Followed by: Announcing Encultured AI
In the preceding post, we talked about how our plan with Encultured is to enable new existential safety benchmarks for AI. In this post, we'll talk about involving humans and human data in those benchmarks. Many of the types of benchmarks we want to enable are made more useful if we can involve humans in them. For example, testing whether an AI system can align its values with another agent is especially interesting if that other agent is a human being.
So, we want a way to get lots of humans engaging with our platform. At first, we thought we’d pay humans to engage with the platform and generate data. In considering this, we wanted to make the process of engagement not-too-annoying for people, both so that it wouldn’t make their lives worse, and so that we wouldn’t have to pay them too much to engage. But then we thought: why not go a bit further, and provide something people intrinsically value? I.e., why not provide a service?
Out of the gate, we thought: what's a service where people might not mind lots of experiments happening? A few possibilities came to mind for what we could build:
[Table comparing candidate services on criteria including "…Safely?"*, "…of Training Data?", "Geopolitically Stabilizing?", and enabling "physical assistance" benchmarks; the table body did not survive extraction.]
* i.e., we think we can safely grow the company by following market incentives and still end up with something aligned with our goals.
† i.e., tough in today’s data privacy climate.