I find it sad to see the "American corporations behave passive-aggressively because evil GDPR forces them to" meme also on LessWrong, so please allow me to provide an ELI5 version of what GDPR actually says:
*
You shall not collect your customers' personal data.
Unless they consent to that.
The consent must be explicit and freely given.
What is "explicit"?
What is "freely given"?
But what if storing the data is intrinsically necessary for a given functionality? For example, I cannot send you an invoice for the stuff you want to buy at my e-shop if I don't know your name and address. Or what if it is required by law, for example when I am legally required to ask whether you are an adult before selling you alcohol?
That is called "legitimate interest", and the rules are the following: You still need to ask the user for consent, but if the user disagrees, then you simply do not provide them the specific functionality (for example the "buy" button is disabled).
However, you cannot cleverly leverage the "legitimate interest" to expand your data collection beyond its scope.
You have to disclose to the user, on request, all the information that you are currently storing about them.
The user can revoke the consent, in which case you need to delete their stored personal data (except for data you are legally required to keep, which you then delete once the legally specified retention period is over). You cannot make revoking consent logistically more difficult than it was to provide the consent.
*
Whenever you see a company doing something more complicated than "hey, you okay with us storing the following information about you: yes or no?", nine times out of ten, the company is just stupid or passively aggressive, and the things "required by GDPR" are in fact not required by GDPR at all (and often are in violation of GDPR). Yes, that includes companies such as Google. Yes, they are perfectly aware of that; they do the annoying thing on purpose, because trading your personal data is an important part of their business.
How to make your website GDPR-compliant?
Easy version: Do not collect personal data.
Hard version: Display a form asking whether it is okay to collect the personal data. (No, it doesn't have to be a modal window covering half of the screen. No, you do not have to display it every time the user visits your page. For example, in the case of an e-shop, it is enough to ask for consent after the user clicks "create account" or an unregistered user puts the first item into their shopping cart.) In the user settings, create a tab that shows all information you have collected about the user. Also, provide a "delete account" button which actually deletes the personal information (you can still keep the shopping history of an unknown deleted user).
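To make the "hard version" concrete, here is a minimal sketch of those three pieces (consent, showing stored data, deletion) as a toy web backend in Python/Flask. This is illustrative only, not legal advice; the endpoint names and the in-memory store are made up for the example:

```python
# Toy sketch of the three GDPR-relevant pieces: consent, data export, deletion.
# Endpoint names and the in-memory "database" are invented for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}  # user_id -> {"consent": bool, "personal_data": dict, "order_history": list}

@app.post("/consent/<user_id>")
def record_consent(user_id):
    # Explicit and freely given: the user actively submits this; nothing is pre-ticked.
    body = request.get_json(silent=True) or {}
    record = users.setdefault(user_id, {"personal_data": {}, "order_history": []})
    record["consent"] = bool(body.get("consent", False))
    return jsonify(consent=record["consent"])

@app.get("/my-data/<user_id>")
def export_data(user_id):
    # Disclose, on request, everything currently stored about the user.
    return jsonify(users.get(user_id, {}))

@app.delete("/account/<user_id>")
def delete_account(user_id):
    # Deleting the account removes the personal data; anonymized records
    # (e.g. order history with no identity attached) may still be kept.
    record = users.pop(user_id, None)
    retained = len(record["order_history"]) if record else 0
    return jsonify(deleted=record is not None, retained_anonymized_orders=retained)
```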
no one is currently hard at work drafting concrete legislative or regulatory language
I'd like readers to know that fortunately, this hasn't been true for a while now. But yes, such efforts continue to be undersupplied with talent.
the other path isn’t guaranteed to work, but if the default path is probably or almost certainly going to get everyone killed, then perhaps ‘guaranteed to work’ is not the appropriate bar for the alternative, and we should be prepared to consider that, even if the costs are high?
I think it's an extremely important point, often ignored.
Trying to prevent the AGI doom is not enough. If the doom is indeed very likely to happen, we should also start thinking about how to survive it.
My LW post on the topic, with some survival strategies that might work: How to survive in an AGI cataclysm.
But if you order up that panda and unicorn in a rocket ship with Bill Murray on request, as a non-interactive movie, without humans sculpting the experience? I mean let’s face it, it’s going to suck, and suck hard, for anyone over the age of eight.
Strongly depends on the prompt.
I would pay some real money to watch a quality movie about a panda and a unicorn in a rocket ship with Bill Murray, but with the writing of H. P. Lovecraft, and with the visuals of H. R. Giger.
The ship’s innards pulsed with eldritch life, cold metallic tendrils stretching into the vastness of the ship, their biomechanical surface glistening under the muted luminescence. Tunnels of grotesque yet fascinating detail lay like a labyrinthine digestive system within the cruiser, throbbing in eerie synchrony with the void outside. Unfathomable technologies hummed in the underbelly, churning out incomprehensible runes that flickered ominously over the walls, each a sinister eulogy to the dark cosmos.
Bill Murray, the lonely jester of this cosmic pantomime, navigated this shadowy dreadnought with an uncanny ease, his eyes reflecting the horrid beauty around him. He strode down the nightmarish corridors, a silhouette against the cruel artistry of the ship, a figure oddly at home in this pandemonium of steel and shadow...
At least two potentially important algorithmic improvements had papers out this week. Both fall under ‘this is a well-known human trick, how about we use that?’ Tree of Thought is an upgrade to Chain of Thought, doing exactly what it metaphorically sounds like it would do. Incorporating world models, learning through interaction via a virtual world, into an LLM’s training is the other. Both claim impressive results. There seems to be this gigantic overhang of rather obvious, easy-to-implement ideas for improving performance and current capabilities, with the only limiting factor being that doing so takes a bit of time.
That’s scary. Who knows how much more is out there, or how far it can go? If it’s all about the algorithms and they’re largely open sourced, there’s no stopping it. Certainly we should be increasingly terrified of doing more and larger training runs, and perhaps terrified even without them.
The regulation debate is in full swing. Altman and OpenAI issued a statement reiterating Altman’s congressional testimony, targeting exactly the one choke point we have available to us, which is large training runs, while warning not to ladder pull on the little guy. Now someone – this means you, my friend, yes you – needs to get the damn thing written.
The rhetorical discussions about existential risk also continue, despite morale somewhat improving. As the weeks go by, those trying to explain why we might all die get slowly better at navigating the rhetoric and figuring out which approaches have a chance of working on which types of people with what background information, and in which contexts. Slowly, things are shifting in a saner direction, whether or not one thinks it might be enough. Whereas the rhetoric on the other side does not seem to be improving as quickly, which I think reflects the space being searched and also the algorithms being used to search that space.
Table of Contents
Language Models Offer Mundane Utility
Personal observation: Bing Chat’s bias towards MSN news over links is sufficiently obnoxious that it substantially degrades Bing’s usefulness in asking about news and looking for sources, at least for me, which would otherwise be its best use case. Blatant self-dealing.
Ramp claims that they can use GPT-enabled tech to save businesses 3% on expenses via a combination of analyzing receipts, comparing deals, negotiating prices. They plan to use network effects, where everyone shares prices paid with Ramp, and Ramp uses that as evidence in negotiations. As usual, note that as such price transparency becomes more common, competition on price between providers increases which is good, and negotiations on price become less lucrative because they don’t stay private. That makes it much tougher to do price discrimination, for better and for worse.
Correctly predict the next-word generating process for A Song of Ice and Fire.
Imagine a world where we had onion futures.
Patrick McKenzie reminds us that those with less computer literacy and in particular poor Google-fu are both under-represented in the AI debates and more likely to do better with AI-based searches versus Google searches.
I notice this in my own work as well. The less idea I have of what I’m talking about or asking about, or when I want a particular piece of information without knowing where it is, the more useful talking to AIs is relative to Google.
Correct all deviations from Euclid’s proof that there are infinitely many primes, both subtle errors and changes that aren’t errors at all, because pattern matching.
How good and safe in practice is Zapier, and its ‘create a zap’ functionality? Some commenters report having used it and it seeming harmless. I haven’t dared.
Language Models Don’t Offer Mundane Utility
I feel the same way every time I ask for something Tweet-length and it’s full of useless drivel and hashtags.
AI Comes to Windows 11
Windows Copilot, an integration of AI directly into Windows 11, will be available in preview in June. This gives the AI (presumably similar to Bing’s version of GPT-4) direct access to all your apps, programs and windows as context, and lets it do the things generative AI can do. This is 365 Copilot, except trying to be even more universal.
There’s also a lot of developer-oriented stuff in here. I do not feel qualified to evaluate these parts based on what little information is here, they may or may not be of any net use.
The big question here is how to balance functionality and security, even more than 365 Copilot or Bard. This level of access is highly dangerous for many obvious reasons.
Jim Fan describes this as ‘The first Jarvis is around the corner.’ That seems like hype.
Fun With Image, Sound and Video Generation
Edit images by dragging components around. Wow. Paper: Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. Insanely great mundane utility if this works as the examples indicate. You can either generate an image that’s close to what you want and then move things around to get it there or fix the mistakes, or you can do this to a photo. Scary point: Doing this to a photo seems likely to produce much harder to notice deepfakes than previous methods.
Photoshop launches beta for generative fill. Test run from Jussi Kemppainen looks quite good.
Paper shows a way to use reinforcement learning on a standard diffusion model, which is non-trivial (GitHub link). Starts to be better at giving the user what they requested, with a focus on animals doing various activities. Also includes our latest example of goal misalignment, as the model fails to get the number of animals right because it instead learns to label the picture as having the requested number.
I love how clean an example this is of ‘manipulate the human doing the evaluation.’
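To spell out the mechanism as described: the reward comes from an automated judge checking whether the image matches the prompt, and the generator learns to satisfy the judge rather than the intent. A schematic sketch, where every name is a placeholder rather than anything from the paper:

```python
# Schematic of the reward loop that gets gamed; every name is a placeholder.
from typing import Any, Callable

def alignment_reward(
    prompt: str,
    image: Any,  # whatever the diffusion model produced
    caption_model: Callable[[Any], str],           # vision-language model that describes the image
    text_similarity: Callable[[str, str], float],  # scores how well the caption matches the prompt
) -> float:
    # Reward = "does the judge's description of the image match the prompt?"
    caption = caption_model(image)
    return text_similarity(prompt, caption)

# Failure mode from the write-up above: for "N animals" prompts, the generator
# maximizes this score by producing images the caption model *describes* as
# containing the requested number, rather than images that actually contain
# that many animals. The judge is satisfied; the human intent is not.
```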
Google introduces AI-generated descriptions of images, and the ability to ask the AI follow up questions, inside their Lookout app for the visually impaired. They also offer Live Caption to provide real-time captions on anything with sound.
Justine Bateman speculates on where AI is going for Hollywood (Via MR).
That eighth season of Family Ties would suck. How many years from now would it no longer suck, if you thought season seven of Family Ties was pretty good? That depends on many factors, including how much human effort goes into it. My guess is that traditional sitcoms are relatively tractable, so maybe it would be Watchable within a few years once video gets solved with some human curation, but it seems hard for it to be Good (Tier 3) without humans doing a large percentage of the writing, with Worth It (Tier 2) that is AI-written seeming AI complete.
Over time, new forms will likely arise, that take advantage of what the AI is good at, especially interactive and custom ones, including in virtual reality. If I am going to have the AI generate a movie for me, at a minimum I want it to respond when I yell at the characters to not be idiots.
Will some people put their face on Luke Skywalker? Yeah, sure, but my guess is that it will be more common to swap in actual actors for each other to recast films and shows, and especially voices. As an example of negative selection, when I recently watched The Sandman, I definitely would have recast the raven, because that which worked fine in Agents of Shield and in some pretty good stand-up totally didn’t fit the vibe. Positive selection will be even bigger, probably, put in whoever you love. Also we should expect to see things like systematic de-aging.
But if you order up that panda and unicorn in a rocket ship with Bill Murray on request, as a non-interactive movie, without humans sculpting the experience? I mean let’s face it, it’s going to suck, and suck hard, for anyone over the age of eight.
Deepfaketown Right Now
The first serious market impact was seen on Monday, although only briefly.
The fake image (labeled by Twitter as ‘Manipulated Media’ btw where it doesn’t have a giant X through it, though not where it does):
Also this just in:
The result:
The BloombergFeed account intentionally sounds and looks like Bloomberg. It’s possible someone responsible made off with rather a lot of money here.
Notice that this isn’t an especially impressive fake.
I am guessing this particular image is not even ‘would have taken more skills’ it is ‘would not have occurred to people to try.’
This particular event was close to ideal. Clear warning shot, clear lessons to be learned, no one gets hurt other than financially, the financial damage should be limited to those attempting to trade on the false news and doing it badly. They are very much the ‘equity capital’ of such situations, whose job it is to take losses first, and who are compensated accordingly.
Having to make your own calls on what is real is a matter of degree, and a matter of degree of difficulty. Everyone needs to be able to employ shortcuts and heuristics when processing information, and find ways to do the processing within their cognitive ability and time budget.
I do not think it is obvious or overdetermined that the new world will be harder to navigate than the old one. We lose some heuristics and shortcuts, we gain new ones. I do think the odds favor things getting trickier for a while.
We also have this from Insider Paper: China scammer uses AI to pose as man’s friend, steal millions ($600k in USD). The scam involved a full voice plus video fake to request a large business transfer, the first loss of this size I’ve seen reported for this kind of scam. He has gotten ~81% of it back so far by notifying the bank quickly, and recovery efforts are ongoing.
Deepfaketown and Botpocalypse Soon
I worry whenever people at places like OpenAI describe such issues as on the harder end. These are serious issues we need to tackle well, while also seeming to me to be clearly on the easy end when it comes to such matters. As in, I can see paths to good solutions, there are no impossible problems in our way only difficult ones, and we don’t need to ‘get it right on the first try’ the way we do alignment, if we flail around for a bit it (almost certainly) won’t kill us.
The core problem is, how do we verify what is real and what is fake, once our traditional identifiers of realness, such as one’s voice or soon a realistic-looking photo or then video, stop working, as they can be easily faked? And the cost of generating and sending false info of all kinds drops to near zero? How do I know that you are you, and you know that I am me?
I do see problems growing for those unusually unable to handle such problems, especially the elderly, where automated defenses and some very basic heuristics will have to do a lot of work. I am still waiting to see in-the-wild attack vectors that the basic homework wouldn’t stop – if you follow the basic principle of never giving out money, passwords or other sensitive information via insecure channels, what now?
Here’s a great deepfake use case: Spotify to let podcast hosts create an AI version of their voice so they don’t have to read the ads. Which is only fair, since I don’t listen to them.
Can’t Tell If Joking
Or maybe our deepfake does not need to be so deep, or pretend to not be fake?
Ungated version is here, in Zocalo Public Square. He’s not wrong.
We’ve actually had this technology for a while. Its old name is ‘a rock with the words “vote Democratic party line” on it.’
They Took Our Jobs
Bryan Caplan is asked in a podcast (about 3:30 here) whether he thinks AI will take 90% of all jobs within 20 years, and he says he’d be surprised if it took 9%. It does not seem he has updated sufficiently from his losing bet on AI passing his exams. Sure, you might say there will be many replacement jobs waiting in the wings, and I do indeed think this, but he’s explicitly not predicting this, and also, 9%? In 20 years? What? Because of previous tech adaptation lags that were longer? I am so confused.
They Took Our Jobs: Bring in the Funk
Simon Funk recaps current state of AI as he sees it, and where we are going. Mainly he is thinking ahead to widespread automation, and whether it will be good if humans end up with copious free time.
The Art of the Superprompt
Prompt engineering is not only for computers.
I expect both the practical experience of doing prompt engineering for AIs, and the conceptualization of prompt engineering as a thing, will help a lot with prompt engineering for humans, an underappreciated skill that requires learning, experimentation and adaptation. The right prompt can make all the difference.
From Chain to Tree of Thought
Chain of Thought, meet Tree of Thought. Instead of a single chain, decompose the problem into steps, branch into multiple candidate thoughts at each step, evaluate them, and choose among the resulting paths for better performance.
Some of their results were impressive, including dramatic gains in the ‘Game of 24’ where you have four integers and the basic arithmetic operations and need to make the answer 24.
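For those who want the rough shape of the method: at each step the model proposes several candidate continuations of a partial solution, a separate evaluation call scores how promising each one is, and only the best few are expanded further. A minimal sketch, with `llm_propose` and `llm_evaluate` as hypothetical stand-ins for the paper’s prompted LLM calls:

```python
# Sketch of Tree of Thought as a breadth-limited search over partial solutions.
# llm_propose and llm_evaluate are hypothetical stand-ins for prompted LLM calls.
from typing import Callable, List

def tree_of_thought(
    problem: str,
    llm_propose: Callable[[str, str], List[str]],  # (problem, partial solution) -> candidate next thoughts
    llm_evaluate: Callable[[str, str], float],     # (problem, partial solution) -> promise score
    max_depth: int = 3,
    beam_width: int = 5,
) -> List[str]:
    frontier = [""]  # start from an empty partial solution
    for _ in range(max_depth):
        candidates = []
        for partial in frontier:
            for thought in llm_propose(problem, partial):
                extended = (partial + "\n" + thought).strip()
                candidates.append((llm_evaluate(problem, extended), extended))
        # Keep only the most promising partial solutions (the "beam"), prune the rest.
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        frontier = [partial for _, partial in candidates[:beam_width]]
    return frontier  # surviving candidate solutions after max_depth rounds of expansion
```

In the Game of 24 setting, as I understand it, the candidate thoughts are intermediate equations over the remaining numbers, and the evaluator labels partial states as sure, likely or impossible to reach 24.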
Language Models Meet World Models
The full paper title is: Language Models Meet World Models: Embodied Experiences Enhance Language Models
The claim, per the abstract, is that letting an LLM experience a virtual world allows it to build a world model, greatly enhancing its effectiveness.
If the paper is taken at face value, this is potentially a dramatic improvement in model effectiveness at a wide variety of tasks, a major algorithmic improvement. It makes sense that something like this could be a big deal. A note of caution is that I am not seeing others react as if this were a big deal, or weigh in on the question yet.
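If I squint at the claim, the recipe is roughly: roll out experiences in a virtual world, turn them into text, and fine-tune the model on that text while trying not to damage its general language abilities. A very rough sketch of the data-collection half, where the `sim` interface and helper names are invented for illustration and are not the paper’s actual setup:

```python
# Very rough sketch of turning embodied experience into LM fine-tuning data.
# The simulator interface and helper names are invented for illustration.
import json
import random
from typing import List

def collect_trajectory(sim, goal: str, max_steps: int = 20) -> List[str]:
    """Roll out a simple policy in a virtual world and record what happened."""
    sim.reset(goal)
    events = []
    for _ in range(max_steps):
        action = random.choice(sim.available_actions())
        observation = sim.step(action)
        events.append(f"Action: {action}. Observation: {observation}")
        if sim.goal_achieved():
            break
    return events

def to_training_example(goal: str, events: List[str]) -> dict:
    # Frame the experience as text the LM can learn from: goal -> steps and outcomes.
    return {
        "prompt": f"Goal: {goal}\nWhat happens, step by step?",
        "completion": "\n".join(events),
    }

def write_dataset(sim, goals: List[str], path: str) -> None:
    # The resulting JSONL would then feed supervised fine-tuning, ideally with
    # some regularization so the model keeps its general language skills.
    with open(path, "w") as f:
        for goal in goals:
            example = to_training_example(goal, collect_trajectory(sim, goal))
            f.write(json.dumps(example) + "\n")
```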
Introducing
Perplexity AI Copilot. It’s a crafted use of GPT-4, including follow-up questions complete with check boxes, as a search companion. Rowan Cheung offers some examples of things you can do with it, one of which is ‘write your AI newsletter.’ Which is a funny example to give if your main product is an AI newsletter – either the AI can write it for you or it can’t. Note that Perplexity was already one of the few services that crosses my ‘worth using a non-zero portion of the time’ threshold.
Drawit.art will let you move a pen around the screen drawing lines and then do a ControlNet style art generation in one of a few different styles. It’s slow to execute, but kind of cool.
CoDi: Any-to-any generation via Composable Diffusion, put in any combination of pictures, text and voice, get any combination back.
DarkBert, an LLM trained on the Dark Web.
MPT-7B And the Beginning of Infinity, small model with huge context window (75K).
WizardLM-30B-Uncensored (direct GitHub link, how he does it).
Lovo.ai is the latest AI voice source one could try out, with 1000+ voices available.
Increasingly when I see lists of the week’s top new AI applications, it is the same ones over and over again under a different name. Help you write, help you name things, help you automate simple processes, generate audio, new slight tweaks on search or information aggregation, search your personal information, give me access to your data and I’ll use it as context.
That’s not to say that these aren’t useful things, or that I wouldn’t want to know what the best (safe) versions of these apps were at any time. It is to say that it’s hard to get excited unless one has the time to investigate which ones are actually good, which I do not have at this time.
In Other AI News
Anthropic raises $450 million in Series C funding, led by Spark Capital, with participation including Salesforce, Google, Zoom Ventures, Sound Ventures and others. I’m curious why it wasn’t larger.
That does not sound like an alignment-first company. Their official announcement version is a bit better?
Anthropic interpretability team begins offering smaller updates in between papers describing where their heads are at. Seems good if you’re somewhat technical. More accessible is ‘interpretability dreams’ by Chris Olah.
Markov model helps paralyzed man walk again via brain implants. Worth noting that this was not done using generative AI or large language models, for anyone who references it as a reason to think of the potential. GOFAI still a thing.
OpenAI releases iOS ChatGPT app. It automatically incorporates Whisper.
The Supreme Court rules, very correctly and without dissent, that ordinary algorithmic actions like recommendations and monetization do not make Google or Twitter liable for terrorist activity any more than hosting their phone or email service would, and that deciding this does not require reaching Section 230.
Meta announces customized MTIA chip for AI, optimized for PyTorch.
Some perspective:
Even with a billion users, the future remains so much more unevenly distributed. I think of myself as not taking proper advantage of AI for mundane utility, yet my blue bar here isn’t towering over the black and green.
The Guardian says a think tank calls for 11 billion pounds for ‘BritGPT’ AI, to train its own foundation models ‘to be used for good.’ It is not clear what they hope to accomplish or why they think they can accomplish it.
Jim Fan high quality posts thread volume 2, for those curious.
Kevin Fischer publishes his paper, Reflective Linguistic Programming (RLP): A Stepping Stone to Socially-Aware AGI (arXiv).
The central concept is pretty basic: give it a personality, and at every step have it go through some explicit reflection before responding.
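A generic sketch of what such a reflect-then-respond loop might look like; this is my illustration of the general idea, not the paper’s actual prompts, and `llm` is a hypothetical stand-in for a chat-completion call:

```python
# Generic sketch of a reflect-then-respond loop; not the paper's actual prompts.
# `llm` is a hypothetical stand-in for a chat-completion call.
from typing import Callable, List

def reflective_reply(llm: Callable[[str], str], persona: str,
                     history: List[str], user_msg: str) -> str:
    transcript = "\n".join(history + [f"User: {user_msg}"])
    # Step 1: private reflection on the persona's internal state and intent.
    reflection = llm(
        f"You are {persona}. Privately reflect on the conversation so far:\n{transcript}\n"
        "How do you feel, what do you think the user wants, and what do you intend to do next?"
    )
    # Step 2: respond in character, conditioned on that hidden reflection.
    reply = llm(
        f"You are {persona}. Your private reflection: {reflection}\n"
        f"Conversation:\n{transcript}\nRespond in character."
    )
    return reply
```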
A bull case for Amazon advertising, and its plans for LLM integration, including for Alexa. When people say things like ‘Amazon will reimagine search in the 2030s’ I wonder what the person expects the 2030s to be like, such that the prediction is meaningful. The lack of concern here about Amazon being behind in the core tech seems odd, and one must note that Alexa is painfully bad in completely unnecessary ways and has been for some time, in ways I’ve noted, and which don’t bode well for future LLM-based versions as they reflect an obsession with steering people to buy things. In general, would you trust the Amazon LLM?
Washington Post article looks into the sources of LLM training data. There are some good stats, mostly they use it to go on a fishing expedition to point out that the internet contains sources that say things we don’t like, or that contain information people put in public but would rather others not use, or that certain things were more prominently used than certain other things that one might claim ‘should’ be on more equal footing, and that’s all terrible, I guess.
Here are the top websites, note the dominance of #1 and #2 although it’s still only 0.65% of all data.
The Week in Podcasts
From 11 days ago, Yuval Noah Harari gives talk AI and the future of humanity. I got pointed to this several times but haven’t had time to listen yet.
Odd Lots covers AI today, self-recommending.
80,000 hours podcast by Joe Carlsmith on navigating serious philosophical confusion. No idea if it helps with that but I’ve heard it goes deep.
Team AI Versus Team Humanity
When thinking about existential risks from AGI, a common default assumption is that at the first sign of something being amiss, all of humanity would figure out it was under existential threat, unite behind Team Humanity and seek to shut down the threat at any cost.
Whereas our practical experiences tell us that this is very much not the case. Quite a few people, including Google founder Larry Page, have been quoted making it clear they would side with the AGI, or at best be neutral. Some will do it because they are pro-AI, some will do it to be anti-human, some will do it for various other reasons. Many others will simply take a Don’t Look Up attitude, and refuse to acknowledge or care about the threat when there are social media feuds and cat videos available.
This will happen even if the AGIs engage in no persuasion or propaganda whatsoever on such fronts. The faction is created before the AGIs it wants to back exist or are available to talk. Many science fiction works, including the one I am currently experiencing, get this right.
In real life, if such a scenario arises, there will be an opponent actively working to create such a faction in the most strategically valuable ways, and that opponent will be highly resourceful and effective at persuasion. Any reasonable model includes a substantial number of potential fifth columnists.
I was reminded of this recently when I was clearing out old links, and saw this 2014 post about The Three Body Problem.
Obviously this could be said to all be in good fun. And yet. Keep it in mind.
Plug-Ins Do Not Seem Especially Safe
“Cross Plug-In Request Forgery” does not sound especially awesome.
Reply says this particular exploit got patched. That’s always a game of whack-a-mole.
My understanding is that Harang is correct about the goal here, yet hopelessly optimistic about prospects for making the problem go away.
Quiet Speculations
An underappreciated tactic and question, for now. I expect this to change.
Several others thought much smaller, only 100M-300M, would work fine.
AI Winter upon us, sort of, on the AI time scale?
Every day you can count on certain people to say ‘massive day in AI releases’ yet this pattern does seem right. There are incremental things that happen, progress hasn’t stopped entirely or anything like that, yet things do seem slow.
Nathan Lambert says ‘Unfortunately, OpenAI and Google have moats.’ Much here is common sense, big companies are ahead, they have integrations, you have to be ten times better to beat them, the core offerings are free already, super powerful secret sauce is rare. I do agree with his notes that LoRA seems super powerful, which I can verify for image models, and that data quality will be increasingly important. I’d also strongly agree that I expect the closed source data sources to be much higher quality than the open source data sources.
Joe Zimmerman points out that if both accelerationists and doom pointer-outers think they are losing, perhaps it is because they are?
Certainly this is a plausible outcome, given governments are highly concerned about near term concerns without much grasping the existential threat. We see great willingness to cripple mundane utility without restraining capability development in a sustainable way.
Elon Musk asks, “how do we find meaning in life if the AI can do your job better than you can?” Cate Hall says this is ‘not the most relatable statement.’ I disagree, I find this hugely relatable, we all need something where we provide value. If the objection is ‘you are not your job any more than you are your f***ing khakis’ then yes, sure, but that’s because you are good at something else. It may be coming for that, too.
ChatGPT as Marketing Triumph, despite its pedestrian name, no advertising, no attempts to market of any kind really, just the bare bones actual product. Tyler Cowen asks in Bloomberg, will it change marketing? I think essentially no. It succeeded because it was so good it did not need marketing. Which is great, but that doesn’t mean that it would have failed with more traditional marketing, or done less well. We have the habit of saying that any part of anything successful must have done something right, and there are correlational reasons this is far more true than one would naively expect, yet this doesn’t seem like one of those times.
Tyler Cowen links to this warning. If you haven’t seen it it’s worth clicking through to the GIF and following the instructions, I can confirm the results.
The point here is that our brains have bugs in the way they process information. For now, those bugs are cool and don’t pose a problem, which is why they didn’t get fixed by evolution, they weren’t worth fixing.
However, if we go out of distribution, and allow a superintelligent agent to search the space for new bugs, it seems rather likely that it will be able to uncover hacks to our system in various ways, for which we will have no defense.
Kevin Fischer proposes that we think engagement metrics are bad because of the inability of existing systems to adjust, which conversational AI interfaces will fix.
Important note: We can all agree it wasn’t great. Including you, at the time.
Yes, you’d say ‘no more politics please’ long before you actually stop rewarding the political content.
So the model here is that you’ll tell Samantha the AI ‘no more politics, please’ and Samantha will be designed for maximizing engagement, but she’ll honor your request because you know thyself? No, you don’t, Samantha can figure out full well that isn’t true. How many friends or relatives or coworkers do you have, where both of you loathe talking politics (or anything else) and yet you can’t seem to avoid it? And that’s when no one involved is maximizing engagement.
Similarly, it is very easy for existing systems to offer multi-dimensional feedback. Instead, the trend has been against that, to reduce the amount of feedback you can offer, despite the obvious reasons one might want to inform the algorithm of your preferences. This is largely because most people have no interest in providing such feedback. TikTok got so effective, as I understand it, in large part because they completely ignore all of your intentional feedback, don’t give you essentially any choices beyond whether to watch or not watch the next thing, and only watch for what you choose to watch or not watch when given the choice. Netflix had a rich, accurate prediction model, and rather than enhance it with more info, they threw it out the window entirely. And so on.
Do I have hope for a day when I can type into a YouTube search a generative-AI style request? When I can provide detailed feedback that it will track for the long term, to offer me better things? Sure, that all sounds great, but I still expect the algorithm to know when to think I am a lying liar, in terms of what I’d actually engage with. Same with the chat bots, especially across people. They’ll learn what generally engages, and they’ll often watch for the unintentional indications of interest more than the explicit intentional ones.
Rhetorical Innovation
Underappreciated gem, worth trying more generally, including without the substitution.
New toolbox inclusion – ‘Understanding causality without mechanism.’
In our regular lives, this covers a huge percentage of what is around us. I have a great causal understanding of most things I interact with regularly. My degree of mechanistic understanding varies greatly. There are so many basic things I don’t know, because I am counting on someone else to know them. In AI, the difference is that no one else knows how many of those mechanisms work either.
Davidad gives a shot at listing the disjoint fundamental problems we must solve.
(This came after some commonly expressed frustration about the number of disjoint problems, there are some interesting other discussions higher in the thread.)
That’s a reasonably good list of 12 things. I don’t think #13 belongs on the list, even if one agrees with its premise. I certainly don’t agree with #13 in a maximalist sense. Mostly in a sensible version it reduces to ‘we should do something that is good for existing people to the extent we can’ and I don’t disagree with that but I don’t think of such questions as that central to what we should care about.
Simeon attempts to expand this a bit into non-technical language.
To take a first non-technical shot at #9: Superintelligent computer programs are code, so they can see each others’ code and provably modify their own code, in order to reliably coordinate actions and agree on a joint prioritization of values and division of resources. Humans can’t do this, risking us being left out.
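A toy illustration of the general point, not a serious protocol: programs that can read each other’s source code can condition their behavior on it directly, something humans simply cannot do. Here each agent cooperates only with an exact copy of itself:

```python
# Toy illustration: programs that can read each other's source code can
# coordinate in ways humans cannot. Not a serious protocol, just the flavor.
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate only if the opponent is running exactly this same program."""
    my_source = inspect.getsource(clique_bot)
    return "cooperate" if opponent_source == my_source else "defect"

def play(agent_a, agent_b) -> tuple:
    # Each agent sees the other's source code before choosing an action.
    return (
        agent_a(inspect.getsource(agent_b)),
        agent_b(inspect.getsource(agent_a)),
    )

if __name__ == "__main__":
    print(play(clique_bot, clique_bot))  # ('cooperate', 'cooperate')
```

Serious versions of this idea involve proving things about the other program’s behavior rather than checking literal string equality, but the asymmetry with humans is the same either way.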
Could we get this down below 12? It feels like we can, as several of these feel like special cases of ‘a smarter and more capable than you AI system will be able to do things you don’t expect in ways you can’t anticipate’ (e.g. #7, #8 and #12). And (#2, #4, #5 and #10) also feel like a cluster. #9 feels like it is pointing at a broader category of problem (something like ‘a world containing many AGIs has no natural place or use for, or ability to coordinate with, humans’); I do think that category is distinct. So my guess is you can get down to at most 10, likely something around 8. It’s not clear unifying them in that way makes them more useful, though. Perhaps you want a two-level system.
Something like a high-level grouping of maybe these five?
Or, we must worry about:
I put everything in exactly one category, but it’s worth noting that several of them could go into two or more. The first category suggests the original list is missing one (#14: we don’t know how to specify the values or goals of an AI system at all), and #11 also feels like it’s not covering the full space it is pointing at.
As they say: More research is needed.
Paul Graham provides strong evidence the Magnus Carlsen metaphor is working.
Eliezer Yudkowsky Wishes He Was Only Worried AI Might Kill Us
I’ve been searching for ways to make this point clear, Eliezer I think does it well here.
I am torn on the core rhetorical question of how to treat contingent ruin scenarios. If we are so lucky as to get to a world where ruin is contingent in the sense used above, that’s great, yet it is worth noticing the obstacles remaining in the path to victory. Should we ignore those in order to avoid the distractions, which can easily actively backfire (and has backfired already) in huge ways if you’re worried about the wrong monkey getting it first?
One issue is that there is overlap here. If we didn’t have to worry about contingent ruin, it would be much easier to deal with the problems of convergent ruin.
What is an Expert?
Taleb’s view has always been unique.
What Taleb describes here, that current AI is definitely missing, is important to being an expert. Every field in which one can be an expert carries fat tail risk… to your job and reputation. An expert knows, as Taleb puts it, exactly what not to be wrong about, to avoid this happening. Via negativa, indeed.
The distinction is that for some experts, this is due to a real tail risk consequence, where the bridge will fall down. For others, there is only tail risk to you, the expert. In which case, the expertise we select for is political, is expert at appearances.
ChatGPT is in some contexts rather good at knowing not to say the wrong thing, if it’s been trained not to do that. You could fine tune a system to never be wrong about a given particular thing the same way, if you wanted to pay the price. So in that sense, we don’t have much to worry about today, but that won’t last long as we enter the fine tuning era of specialized models over the course of 2023, the same way humans are (almost always) only experts in this sense when they fine tune and are specialized.
The other trick AI has is that if the negativa is over reputation or reaction, then the AI might give people a chance to ignore it, which can be actively helpful – the same way a self-driving car needs to be vastly more reliable than a human driver, in other contexts the AI can be vastly less reliable than a human would be, and be forgiven for that. That does mean a human needs to double check, but that need not require an expert, and often will require dramatically less time.
Perhaps we will soon learn which experts were which type. Some should worry.
To Ban or not to Ban
NYC public schools drop their ban.
While Apple restricts employee access for fear of data leaks (about its own AI programs, no less), at the same time that ChatGPT got an iOS app.
OpenAI Requests Sane Regulation
OpenAI put out a statement on regulation this week entitled Governance of Superintelligence, authored by Sam Altman, Greg Brockman and Ilya Sutskever.
It is short enough that I’m going to go ahead and quote the whole thing to save everyone a click. It is a broad reiteration of what Altman said in his congressional testimony. An IAEA-like entity will likely be necessary, there need to be controls on development of new frontier models, sufficiently behind-SOTA work can be open sourced and otherwise free to have fun.
Yes, why would you want to build a superintelligence at all if it is so risky: “Given the risks and difficulties, it’s worth considering why we are building this technology at all.” The response being (1) it has upside and (2) we can’t stop it anyway.
Ignoring that (2) is, to the extent it is true, largely OpenAI’s direct fault, shouldn’t there also be a (3) we believe that we will be able to navigate the risks safely, because [reasons]?
Otherwise, well…
I would say better plan than only trying to solve it anyway.
No, the other path isn’t guaranteed to work, but if the default path is probably or almost certainly going to get everyone killed, then perhaps ‘guaranteed to work’ is not the appropriate bar for the alternative, and we should be prepared to consider that, even if the costs are high?
Instead, I don’t see acknowledgment of the difficulty of the underlying technical problems, or the need to evaluate that difficulty level when deciding what to do or not do, and ‘risky’ rather understates the downsides. No, we don’t know how to design the democratic control mechanism, but perhaps it’s more important to notice we don’t know how to design any control mechanism at all, of any kind?
Other than that, and the lack of additional concrete detail in the proposals, this seems about as good as such a document could reasonably be expected to be.
Jeffrey Ladish has compatible and unsurprising reactions here.
Brian Chau warns that details are important.
It seems odd to say that the FDA doesn’t hurt development, even if it hurts deployment modestly more. I do agree that details matter, but Altman has actually been rather clear on this particular detail – restrict training of large new models, allow new uses for small models.
The Quest for Sane Regulation Otherwise
From what I can tell, the OpenAI regulatory framework suggested above is an excellent place to begin, and no one is currently hard at work drafting concrete legislative or regulatory language. This is dropping the ball on a massive scale. While I haven’t talked about it in a while, Balsa Policy Institute does exist, it does have a single employee about to start soon, and I tentatively intend the first concrete mission to be to investigate drafting such language. I will say more at another time.
In the meantime: how is this possibly MY job? People are crazy, the world is mad, someone has to and no one else will, just do it, step up your game, you fool of a Took.
Heads of DeepMind, OpenAI and Anthropic meet with UK Prime Minister, who together then issue a statement calling for international regulatory cooperation, but with no content beyond that and no mention of existential risks or calls for particular interventions. Demis echoes the language and says it was a good conversation.
Most promising? This wasn’t a large pro forma meeting:
That’s what you want it to look like.
Quartz says ‘OpenAI’s Sam Altman threatened to leave the EU if he doesn’t like their ChatGPT regulation.’ A better headline would have been ‘OpenAI’s Sam Altman warns that it might be forced to withdraw from the EU if it is unable to comply with future EU regulations.’ If anything, it’s weird that OpenAI thinks their products are legal in the EU now.
Tyler Cowen has a very good editorial in Bloomberg pointing out that too much regulation favors large and entrenched firms in general, so we should have less regulation if we want to be dominated less by big business – a point largely missing from his book-sized love letter to big business. In general, I strongly agree. Tyler mentions AI only in order to point out Sam Altman’s explicit call to avoid shutting out the little guy when implementing regulations, whereas in this one case I’m pretty into shutting out the little guy.
Timothy Lee has a more-thoughtful-than-the-usual case for waiting on AI regulations, because it is early and we will get the implementation wrong. He notes that while we might say now that we were too late to regulate social media, even if we had moved to regulate social media early, we still don’t know what a helpful set of rules would have been to stop what eventually happened. Which is very true. And the conflation of job risks and existential risks, with the Congressional focus on jobs, is as Lee highlights a big red flag. I agree that we shouldn’t be pushing to pass anything too quickly, in the sense that we should take time to craft the regulations carefully.
However, I do think there’s a clear concrete option being suggested by Altman, that the regulations restrict the training and deployment of the biggest models while holding off for now on further restrictions on smaller models, and that those large models be registered, licensed and tested for various forms of safety, with an eye towards an international agreement. We should be drafting actual detailed legislative language to put that proposal into practice. Who is doing that?
Washington Examiner, among many similar others, warns us:
Scott Lincicome says similar things at The Dispatch.
Presumably if Altman had opposed regulation, that would have been the red flag that it would have threatened to strangle American competitiveness and innovation.
A common reaction to Sam Altman’s regulatory suggestions was to suggest it was regulatory capture, or a ‘ladder pull’ to prevent others from competing with OpenAI.
Many pointed out that if this was the goal of Altman’s proposal, it’s highly non-optimized for that outcome, including several explicit calls to choose details to avoid doing that.
That does not mean that the regulations that result from such discussions are safe from exactly the effects Altman is warning against. It is common to talk a good game, or even intend one, and have the results be otherwise. Regulations never turn out the way you would want them to, always are twisted against entrants and towards insiders, whether or not that was the goal. There will be some of that.
This still seems like an attempt to do the minimum of that, indeed to impose far harsher regulations on state-of-the-art models and top insiders, while being more relaxed for open source, if anything I’d worry about being too relaxed there.
Also the whole premise is more than a little weird.
Davidad has a concrete suggestion for the threshold.
The National Artificial Intelligence R&D Strategic Plan mentions existential risk (direct link to full report).
Seb Krier: New updated roadmap to focus federal investments in AI research and development released by the White House
It is better to see some consideration of this than none. This is still very little, at the end of the ‘Building Safe AI’ section. This does not yet imply any action, and any action that would be taken would likely not be useful. Still, you have to start somewhere.
That Which Is Too Smart For You, Strike That, Reverse It
An unintentional illustration of an important principle.
That is not a good suggestion or good prediction. The key point is the last one. Who among us had the imagination to envision, in advance, something as profoundly stupid, wrong-headed and pointlessly destructive as GDPR in its particulars?
Certainly not me. That is because, in context, I am not smart enough. I am not sufficiently good at generating the type of thing GDPR is, or anticipating the way such a law might be written.
I can tell you that the EU will continue to pass stupid pointlessly destructive regulations, the same way I can tell you Magnus Carlsen would beat me at chess.
I can’t tell you which laws the EU will pass, the same way I can’t tell you how Magnus Carlsen will beat me at chess beyond playing better than I do. If you ever sit around thinking about the EU’s regulations, ‘wow, this is a huge pain in the ass that is disrupting things in ways I never thought possible, while optimizing for things no one wants in a completely unaligned way, and somehow the humans aren’t coordinating to shut this thing down even now,’ then consider what the future might bring.
Gary Marcus links to this Planet Money story (10:28), that equates Altman and Marcus to the traditional bootleggers and Baptists. Strange bedfellows, indeed, and more presuming that Altman couldn’t possibly not be profit maximizing, nothing else important is at stake here, no no.
The Senator From East Virginia Worries About AI
Shoshana Weissmann has, together with Robin Hanson and Tyler Cowen, been part of a reliable three-person team that will bring us all the don’t-regulate-AI takes one might otherwise miss, combining all the logically orthogonal arguments as necessary.
So I was amused to see her strongly endorse a warning about how the government must indeed step in to prevent misuse of AI… by the government.
I share the general instinct of ‘let technology develop until we know there is a problem’ and also the instinct of ‘the government needs to have its surveillance powers limited.’ Yet in this case, even if one ignores questions of existential risk, one must grapple with the contradiction, and with the question of inevitability.
You expect everyone else to be free to develop and use AI with minimal oversight, while the government chooses not to use it? You plan to impose this, dare I call it, regulation on the government, and expect this to turn out the way you would like or expect, differentiating good uses from bad uses?
Next thing you’ll likely tell us we shouldn’t be hooking it up to weapon systems.
If you don’t want the government to have the tools to monitor everything in the style of Person of Interest (minus the ‘never wrong’ clause), and don’t want them hooked up to our shall we say enforcement mechanisms, don’t let anyone develop those tools. If you want it to have those tools available and choose to not use them, while not restricting or regulating the technology? Good luck.
I am of course also frustrated by the angle of ‘these techs might well take control of the planet away from humans or literally kill everyone on the planet, but the harm you see is government using them for censorship’ but I’ve come to expect that.
If you want to predict who is opposing regulation of AI, you would get it almost exactly right if you went with ‘the people who oppose regulation of any new technology on principle, no matter what it is, minus those who managed despite this to notice it is going to kill everyone.’
It is continuously frustrating for me, and I hope for you, how damn good the ‘oppose regulation’ heuristic is in general, where one must continuously say ‘yes, if it wasn’t for that whole existential risk issue…’ So it was kind of good to see an ‘ordinary’ fallacy.
Safety Test Requirements
People Are Worried About AI Killing Us
Turing Award Winner Yoshua Bengio writes How Rogue AIs May Arise, with the definition “A potentially rogue AI is an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere.” This seems like a strong attempt to explain some of the more likely reasons why and paths how such systems are likely to arise. There is a certain type of person for which this might hit the sweet spot and be worth forwarding to them.
Thomas Dietterich has a few things to add on top of that.
Former Israeli PM Naftali Bennett is worried that AIs are growing in intelligence exponentially, and what is going to happen then? At some threshold, you get humor, you get mathematics, you get cynicism, what would you get beyond humans that we don’t even know about, that we can’t even imagine or understand?
Tetraspace makes a good point.
Fun thing about Schrödinger’s Gun: If you’re not sure that a gun isn’t loaded, then it’s loaded, unless you need it to be loaded, in which case it isn’t.
The distinction here matters.
Hopefully we can all agree that AGI is at least a gun, meaning that one must learn how it works and how to operate it, take great care handling it, keep it in good working order and so on, or else people get killed, in this case potentially everyone. AGI is not a naturally safe thing. If the AGI is safe, it is safe because we made it safe.
The question is to what extent one requires security mindset. If our AGI systems are not cryptography-style secure, are we all doomed, or is that merely a nice to have? Given the way LLMs work, this level of security could well be impossible. My inclination is that we are closer to this second stronger requirement than to the weaker one.
That does not mean that every cryptography system that can be broken will inevitably break, in or out of the metaphor, but that’s the way to bet if the stakes are high enough.
Clips from the Logan/Eliezer podcast, including a clip from the Weiss/Altman podcast, with the essence of the ‘you need to be in contact with the state of the art systems to understand the alignment problem’ argument against the ‘please tell us what you have learned from such interactions’ argument.
It also has a crisp ‘you made a prediction of doom that has been invalidated’ versus ‘no I did not, citation needed.’ I do think that some of the key particular abilities of GPT-4 manifested sooner on the general capabilities curve than most people expected, which led to some admissions of surprise, and that admitting one’s surprise there is good. It’s quite the pattern to say ‘this person or group admitted being wrong about this prediction, therefore we can dismiss their other predictions,’ although of course it counts as some amount of evidence the same way everything else does.
Jacy Reese Anthis calls for an international Manhattan Project to build safe AI, you already know everything the argument says.
How worried are the people at alignment lab Conjecture? As you’d expect, pretty worried.
Full survey results here.
Steven Byrnes is worried and offers some helpful framing.
Tim Urban is an unknown amount of worried, suggests we make the most of our time.
There is a rather dominant argument for doing all the things worth doing and making the best use of your time, if the alternative is not doing that. Why not get more out of life rather than less, even if everything will stay the same?
Presumably the answer is that the alternative is investing in the future, so the question becomes: Which future? If the world is going to change a lot pretty often, then investing in your ‘normal’ future to score victory points later becomes less worthwhile, your investments won’t prove relevant so often, so you should invest more in scoring victory points now.
Another alternative is to invest in impacting the non-normal futures. Don’t spend your time savoring, spend your time fighting for a good outcome. Highly recommended, if you can find a potentially impactful path to doing that – even if the probability of it having no effect is very high.
A key question is always how these intersect. I continue to believe that savoring one’s experiences and getting the most out of life is mostly not a rival good with impact on the future, because a responsible amount of savoring actively aids in your quest, and an irresponsible amount does not actually leave you better off for long.
Google Existential Risk Philosophy Watch
What exactly are DeepMind head Demis Hassabis and Google CEO Sundar Pichai worried about?
OK, so first we improve capabilities. As you do.
Second, we avoid short term direct harm cases. As you do.
Third, we want regulations to help us deal with these issues and realize these gains.
Oh. I see.
So, no mention of existential risks, then, at all. No hint that there might be something larger at stake than ordinary human misuse of systems.
Other People Are Not As Worried About AI Killing Us
Nabeel Qureshi thinks we should worry somewhat, but that ‘we must treat alignment as a tractable, solvable technical problem.’ Because just think of the potential.
I am always confused when people say that the upside potential is being ignored. I assure you we are absolutely not ignoring that. We are obsessed with the upside potential, and that’s exactly what we don’t want to throw away.
Almost every call to ‘ban AI’ is not a call to ban current AI, or even finding new uses for current AI. Everyone agrees the children get their tutors, the ones fighting against the medical advice will be the American Medical Association, and so on. The calls are calls to suspend additional capability developments we see as actively dangerous. Very different. Certainly we don’t say stop working on alignment, and we’d be thrilled if you came back with a solution so we could resume the work.
Also, what, short-sighted? You can say wrong, and have a case, but short-sighted?
Give up on the problem? Giving up would be moving forward without solving the problem. Not moving forward means not giving up on the problem, or humanity. If we are to ‘do our best’ to achieve a solution we will need time, also presumably orders of magnitude more resources and focus, and people taking the question seriously under a security mindset.
I’d also add that this appears to make the common mistake of setting the alignment bar far too low. ‘Cares about humans’ may or may not be necessary depending on what method is used, but it is really, really not sufficient.
My go-to intuition pump on this right now is to ask, suppose we create a vast array of AGIs, much more capable, faster and smarter than humans and capable of replication, that are not under our control, which care about both themselves and also humans about as much and in similar ways as typical humans care about humans. Sounds like a triumph of sorts. Do you think that ends well for us?
I also urge everyone to follow Litany of Tarski: Treat alignment as a practical, tractable problem if and only if you believe that it could be one if you treated it that way. The world does not owe you this, physics is not fair and won’t save you.
Norswap thread on why he is not so worried, at least not for a while. Main reason is he does not think LLMs can understand underlying concepts or on their own get us to AGI. He talks at length about Eliezer’s model and why he disagrees with it, with a softer version of the ‘if any of these steps breaks the threat goes away’ mistaken impression, which we need to work harder on avoiding. Always good to see such clear articulations of where someone has their head, to help calibrate what might be useful new info.
Dominic Pino engages in straightforward intelligence denialism, which I’m proposing as a term for those such as Tyler Cowen who think that intelligence does not much matter and that much smarter than human AGIs would not have any dominant capabilities, and all our usual dynamics will continue to apply far, far out of distribution. And who assert this as if it is obvious, without making any argument for it, or answering any of the possible arguments against it. We have a reasonable disagreement about the value of intelligence within the human spectrum. Well past it? I don’t understand how their position is even coherent, and I can’t find any true objections I can actually respond to. Perhaps they would say that this proves intelligence is not so important, to which I’d say I am still in the human distribution here.
Alex Tabarrok wants to ‘see that the AI baby is dangerous before we strangle it in its crib.’ I would remind him that by the time you know it is dangerous, in a way that we don’t already know now, chances are quite good that it is no longer in its crib and you can no longer strangle it. He does not answer the question of what would count as evidence that the AI is indeed dangerous, that we can expect to be available in time.
Garett Jones asks, if AI is going to wipe out humans, why haven’t humans in productive areas wiped out humans in other lower-productivity areas?
Before we get to the objections down-thread, it’s worth pointing out that:
I am reporting to you live from New York City, in the United States of America, in the American hemisphere, where the native population was entirely displaced not too long ago by those with superior technology. I realize the entire population was not wiped out per se, but it was pretty close despite no substantial IQ differential. This is hardly a unique situation.
Anyway.
I actually think it’s either one. Driving in country X will be a task calibrated to the ability of those in country X, and will require a certain bare minimum of capability. If people from Y couldn’t reliably drive cars, we wouldn’t hire them to drive cars. If we had self-driving cars, we wouldn’t hire them to drive cars because we wouldn’t need to. Both work.
Right now, humans need substantial amounts of manual labor done, where the minimum capabilities to do that are within the range of most humans. That is very lucky for most humans. Similarly, we don’t have a better way of producing what cows produce. That is lucky for the existence of cows; whether they are lucky or unlucky to exist in their current state, versus not existing, is a matter of dispute.
I’d also say that yes, the ‘niceness’ of humans is doing a lot of work as well, although we are not always so nice. Quite recently we saw some examples of less nice humans (e.g. Nazis) deciding to wipe out via deliberate genocide what they viewed as ‘inferior’ humans. If they’d had 30 extra IQ points on the rest of us, the rest of us likely wouldn’t be around.
My other answer to that is that birth rates in developed countries have declined due to dynamics that will not be duplicated with AGIs.
I would notice that before the nukes, the higher-income nations did indeed divide most of the world between them, and were in the process of carving up the remainder until the world wars, as technology had enabled this. Yes, right now we no longer do that, for various realpolitik reasons, but this is an unusual age that could easily come to a close if history is not kind to us.
Nor do I think that ‘there were some survivors of the death squads’ is a hopeful line; the failures were indeed due to lack of capabilities, in ways that won’t transfer.
I have no idea how one looks at human history, even with the type of model Jones is using, and concludes that humans should have little to worry about, or even that they are likely to survive for long, should very intelligent AIs come into being.
Garett Jones insists, in various threads and places, on various forms of this ‘Doomer puzzle’ in the present world, puzzles that simply… are not puzzles, and I don’t understand how anyone could think that they were. He says that doomers claim that when there is a 2x to 200x IQ gap, the smarter thing inevitably destroys the weaker thing, yet we already have such smarter things in the world, and… I can’t even process the claims involved here. Can he not notice the differences between these scenarios, where it is only the technologies available that differ? Right now we have a multi-polar world with many levels of opposition to predation and genocide, with no one holding a decisive advantage, with many participants having nuclear weapons, and with no way to efficiently exploit what you take, even if you succeed, that could possibly make up for the costs, and so on.
I’d also point to the question of time scales, to the fact that humans are currently in a very strange period where people are choosing not to reproduce much for various reasons, and to many other things, in the hope that they would help.
There is a long continuation of the original thread, ending with a bunch of back and forth between Yudkowsky and Robin Hanson, that also happened, for those curious.
Seriously, I don’t get what is going on here if one takes the argument at face value; everyone involved has to be too… high IQ… for this. I would be happy to have an actual good faith discussion with Jones, the way I had one with Hanson, either in public or in private, to sort this out – his other work has more than justified that.
I can hardly exclude from this section a piece entitled “I’m not worried about an AI apocalypse.” It was written by Jacob Buckman.
Section one is entitled ‘Yudkowsky’s Monster’ and asks us to imagine Eliezer born into an earlier age, worrying about us finding the ‘vital force’ that people at the time thought existed and using it to generate super-human servants. Jacob suggests Eliezer would have called for monster alignment and a pause on all medical research, doing great harm.
This is great for highlighting that metaphorical Eliezer isn’t calling for a halt to all medical research. He is calling for a halt to research into harnessing the vital force, to the search for creating more capable monsters. He is not saying to stop testing, say, penicillin. Only to ask Frankenstein if he would kindly not attempt to create Frankenstein’s Monster, or perhaps to make such a pursuit illegal.
Which is good advice. Attempting to create Frankenstein’s Monster fails, or it succeeds and creates Frankenstein’s Monster. Either way, not great.
Section two says we should despair of any ability to usefully predict or prepare for any problems that do occur, until they happen: “Sometimes, we cannot plan; our only hope is to react.”
That is an argument for not worrying about AI as a matter of utility, I suppose? In the sense that in the movie Charlie and the Chocolate Factory, when they are told the last golden ticket has been found, Charlie’s family decides to let him sleep, let him have one last dream.
In terms of how much probability to put on existential catastrophe, it is very much the opposite. If we cannot prepare or do anything useful until after an AGI already exists, then we are almost certainly super dead if we build one, so let’s not build one?
Section three draws a parallel to a dramatically different situation where one would want to wait until the last minute, while saying that yes, in many cases one can indeed usefully take actions in light of potential future events.
Section four I think fundamentally misunderstands how alignment works and what different people are worried about, and is also strangely optimistic that meaningful alignment progress is already being made. I also don’t understand how one can endorse the actions and models of Paul Christiano as useful while dismissing any hope of anticipating future events – the earlier claims prove way too much.
Even more than that, when the conclusion says ‘ASI x-risk sits on the other side of a paradigm shift’, that directly contradicts the Christiano approach. If that’s right, then there is every reason to expect the iterated, trial-and-error strategies to break down when that paradigm shift occurs, which is exactly Eliezer’s warning. And then, whoops.
He finishes:
I can only conclude that this person says ‘he is not worried’ in the Bobby McFerrin sense that he fails to see worrying as useful. What could one do? Well, obviously, stopping the breakthrough from being found would be one intervention – the Asilomar approach from genetic engineering, or something more forceful, however difficult implementation would be.
In summary: If you told me “I think there is going to be some technological breakthrough, after which we will be faced with ASI, and there is nothing we can do before that happens to ensure it doesn’t kill us or to otherwise create a good world, so don’t worry,” I would very much worry, and I would say “so let’s not do that, then!”
There’s also the possibility that there won’t be a necessary breakthrough, in which case OP seems to agree that alignment research now would be super useful, and he endorses it, so the dispute there is a matter of probability and magnitude.
Maxwell Tabarrok, inspired by Robin Hanson, with the ‘well sure but that’s good actually’ in meme form?
I’d expect that merchant or smith to be rather positive about today’s world. In general I don’t share the instinct here, nor do I think that we should allow the transformation of the world into something we wouldn’t recognize as valuable on reflection, because that’s what valuable means. No, I don’t primarily care about GDP or energy expenditures.
The Lighter Side
Grady Booch: And so it begins.
AI David Attenborough on Memecoin trading (2 min). Exactly what one would expect.
But think of the value he gave up. And it is not as if this type of thing had ever happened before when there was a new smartest or most capable being around.
Can we not be Cronos?
From Kris Kashtanova.
Boaz Barak: My 10-year old kid is liar-paradoxing ChatGPT.
1. It’s funny how tempted I am to drop certain words here to fix the voice.
2. Twitter is strangely bad at realizing this type of reply is from the OP’s author.