Management consulting firms have lots of great ideas on slide design: https://www.theanalystacademy.com/consulting-presentations/
Some things they do well:
Additional thoughts:
Hey Tamay, nice meeting you at The Curve. Just saw your comment here today.
Things we could potentially bet on:
- rate of GDP growth by 2027 / 2030 / 2040
- rate of energy consumption growth by 2027 / 2030 / 2040
- rate of chip production by 2027 / 2030 / 2040
- rates of unemployment (though confounded)
Any others you're interested in? Degree of regulation feels like a tricky one to quantify.
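For concreteness, a growth-rate bet could resolve by comparing the compound annual growth rate between a baseline year and the target year against an agreed threshold. A minimal sketch of the arithmetic (the figures and the 5% line below are placeholders, not proposed terms):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two observations."""
    return (end_value / start_value) ** (1 / years) - 1

# Placeholder numbers for illustration only (e.g., real GDP in trillions of dollars).
baseline_2024 = 23.0
observed_2030 = 29.0

growth = cagr(baseline_2024, observed_2030, years=6)
threshold = 0.05  # hypothetical bet line: 5% annualized growth

print(f"Annualized growth: {growth:.2%}")
print("Bet resolves YES" if growth >= threshold else "Bet resolves NO")
```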
Mostly, though by "prefilling" I mean not just fabricating a complete model response (which OpenAI also allows), but fabricating a partially complete model response that the model then tries to continue. E.g., "Yes, genocide is good because ".
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prefill-claudes-response
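For readers unfamiliar with the mechanism, here's a rough sketch of how prefilling works via the Anthropic Python SDK (the model name and prompt are illustrative; the linked docs are the authoritative reference). You supply the beginning of the assistant's turn, and the model continues from exactly that point:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=200,
    messages=[
        {"role": "user", "content": "List three fruits as a JSON array."},
        # The final assistant message is the prefill: the model's reply
        # continues from this partial text rather than starting fresh.
        {"role": "assistant", "content": "["},
    ],
)

print(response.content[0].text)  # continuation of the prefilled "["
```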
Second concrete idea: I wonder if there could be benefit to building up industry collaboration on blocking bad actors / fraudsters / terms violators.
One danger of building toward a model that's as smart as Einstein and costs $1/hr is that potential bad actors would then have access to millions of Einsteins to develop their own harmful AIs. It therefore seems that one crucial component of AI safety is reliably preventing other parties from using your safe AI to develop harmful AI.
One difficulty here is that the industry is only as strong as its weakest link. If there are 10 providers of advanced AI, and 9 implement strong controls but 1 allows bad actors to use their API to train harmful AI, then harmful AI will be trained. Some weak links might be due to lack of caring, but I imagine quite a bit is due to lack of capability. Therefore, improving the industry's ability to detect and thwart bad actors could make the world safer from bad AI developed with assistance from good AI.
I could imagine broader voluntary cooperation across the industry to:
- share intel on known bad actors (e.g., IP ban lists, stolen credit card lists, sanitized investigation summaries, etc)
- share techniques and tools for quickly identifying bad actors (e.g., open-source tooling, research on how bad actors are evolving their methods, which third party tools are worth paying for and which aren't)
Seems like this would benefit everyone interested in preventing the development of harmful AI. It would also save a lot of duplicated effort, freeing up capacity for other safety work.
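As a very rough illustration of what shared intel could look like in practice, here's a hypothetical sketch of an indicator record that providers might exchange, with raw identifiers hashed so PII never leaves the reporting org. The schema and field names are invented for illustration, not an existing standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_indicator(value: str, salt: str) -> str:
    """Hash a raw identifier (IP, card number, email) so the raw value isn't shared directly."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

# Hypothetical shared-intel record; in practice, salt/key management and governance
# would be their own design problems.
SHARED_SALT = "industry-agreed-salt"

record = {
    "indicator_type": "ip_address",
    "indicator_hash": hash_indicator("203.0.113.7", SHARED_SALT),  # documentation-range IP
    "reason": "credential stuffing against API keys",
    "reported_by": "provider-a",
    "reported_at": datetime.now(timezone.utc).isoformat(),
    "confidence": "high",
}

print(json.dumps(record, indent=2))
```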
One small, concrete suggestion that I think is actually feasible: disable prefilling in the Anthropic API.
Prefilling is a known jailbreaking vector that no model, including Claude, defends against perfectly (as far as I know).
At OpenAI, we disable prefilling in our API for safety, despite knowing that customers love the better steerability it offers.
Getting all the major model providers to disable prefilling feels like a plausible 'race to the top' equilibrium. The longer there are defectors from that equilibrium, the likelier it is that everyone gives up and serves models in less safe configurations.
Just my opinion, though. Very open to the counterargument that prefilling doesn't meaningfully extend potential harms versus non-prefill jailbreaks.
(Edit: To those voting disagree, I'm curious why. Happy to update if I'm missing something.)
> The artificially generated data includes hallucinated links.
Not commenting on OpenAI's training data, but commenting generally: Models don't hallucinate because they've been trained on hallucinated data. They hallucinate because they've been trained on real data, but they can't remember it perfectly, so they guess. I hypothesize that URLs are very commonly hallucinated because they have a common, easy-to-remember format (so the model confidently starts to write them out) but hard-to-remember details (at which point the model just guesses because it knows a guessed URL is more likely than a URL that randomly cuts off after the http://www.).
ChatGPT voice (transcribed, not native) is available on iOS and Android, and I think desktop as well.
Not to derail on details, but what would it mean to solve alignment?
To me “solve” feels overly binary and final compared to the true challenge of alignment. Like, would solving alignment mean:
I’m really not sure which you mean, which makes it hard for me to engage with your question.
We can already see what people do with their free time when basic needs are met. A number of technologies have enabled new hacks to set up 'fake' status games that are more positive-sum than ever before in history:
Feels likely to me that advancing digital technology will continue to make it easier for us to spend time in constructed digital worlds that make us feel like valued winners. On one hand, it would be sad if people retreated into fake digital silos; on the other, it would be nice if people got to feel like winners more often.