Scott Alexander's "Meditations on Moloch" paints a gloomy picture of the world being inevitably consumed by destructive forces of competition and optimization. But Zvi argues this isn't actually how the world works - we've managed to resist and overcome these forces throughout history.
Followup to: The Moral Void
A widespread excuse for avoiding rationality is the belief that it is "rational" to believe life is meaningless, and thus suffer existential angst. This is one of the secondary reasons why it is worth discussing the nature of morality. But it's also worth attacking existential angst directly.
I suspect that most existential angst is not really existential. I think that most of what is labeled "existential angst" comes from trying to solve the wrong problem.
Let's say you're trapped in an unsatisfying relationship, so you're unhappy. You consider going on a skiing trip, or you actually go on a skiing trip, and you're still unhappy. You eat some chocolate, but you're still unhappy. You do some volunteer work at a charity (or better yet,...
How I would phrase it is, value precedes justification.
I wonder if there's clear evidence that LessWrong text has been included in LLM training.
Claude seems generally aware of LessWrong, but it's difficult to distinguish between "this model has been trained on text that mentions LessWrong" and "this model has been trained on text from LessWrong".
Related discussion here, about preventing inclusion: https://www.lesswrong.com/posts/SGDjWC9NWxXWmkL86/keeping-content-out-of-llm-training-datasets
LessWrong scrape dataset on Hugging Face, by NousResearch
https://huggingface.co/datasets/LDJnr/LessWrong-Amplify-Instruct
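If you want to poke at that dataset directly, here's a minimal sketch using the Hugging Face `datasets` library. The split name, the column handling, and the choice of search phrase are my own assumptions for illustration, not anything from the dataset card; finding a distinctive LessWrong phrase verbatim in a training dataset is at least some evidence for "trained on text from LessWrong" rather than merely "text that mentions LessWrong".

```python
# Minimal sketch, assuming the Hugging Face `datasets` library is installed
# (`pip install datasets`). The split name and record layout are guesses;
# inspect the printed schema / dataset card before relying on them.
from datasets import load_dataset

ds = load_dataset("LDJnr/LessWrong-Amplify-Instruct", split="train")
print(ds.column_names)  # check what fields actually exist

# Hypothetical check: do any records contain a distinctive LessWrong phrase?
needle = "Meditations on Moloch"
matches = [row for row in ds if needle in str(row)]
print(f"{len(matches)} of {len(ds)} records mention {needle!r}")
```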
Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos, aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I'll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, Tyler Cowen, George Hotz, and others.
Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows:
...Tacit knowledge is knowledge that can’t properly be transmitted via verbal or written instruction, like the ability to create
Done.
Over the past few years, Lightcone has started using AI art in more of our products. This is a fairly easy and fun part of our job, but I've noticed often there's just a lotta art that needs to get made which we don't quite have the bandwidth to do ourselves.
So I'm looking into hiring* an AI** artist for periodic contract gigs (this wouldn't be a full-time thing; I have one initial job in mind, and if it goes well we may periodically have other jobs to offer).
* we aren't sure we want to hire a person for this, this isn't a top organizational priority, it's more like I'm checking if someone exists who would integrate pretty quickly/easily into our workflow.
** in theory, we could hire, like, a...
I am commenting as to commit publicly.
I Will: Create an AI art portfolio, and DM it to Raemon by 10pm AEST, tonight.
I'll explain my reasoning in a second, but I'll start with the conclusion:
I think it'd be healthy and good to pause and seriously reconsider the focus on doom if we get to 2028 and the situation feels basically like it does today.
I don't know how to really precisely define "basically like it does today". I'll try to offer some pointers in a bit. I'm hoping folk will chime in and suggest some details.
Also, I don't mean to challenge the doom focus right now. There seems to be some good momentum with AI 2027 and the Eliezer/Nate book. I even preordered the latter.
But I'm still guessing this whole approach is at least partly misguided. And I'm guessing that fact will show up in 2028 as "Oh, huh, looks...
I also had to look it up and got interested in testing whether or how it could apply.
Here's an explanation of Bulverism that suggests a concrete logical form of the fallacy:
Here's a possible assignment for X and Y that tries to remain rather general:
Why would that be a fall...
The first in a series of bite-sized rationality prompts[1].
This is my most common opening-move for Instrumental Rationality. There are many, many other pieces of instrumental rationality. But asking this question is usually a helpful way to get started. Often, simply asking myself "what's my goal?" is enough to direct my brain to a noticeably better solution, with no further work.
I'm playing Portal 2, or Baba is You. I'm fiddling around with the level randomly, sometimes going in circles. I notice I've been doing that awhile.
I ask "what's my goal?"
And then my eyes automatically glance at the exit for the level, and I realize I can't possibly make progress unless I solve a particular obstacle, which none of my fiddling-around was going to help with.
I'm arguing with a...
Psyllium husk is a non-fermenting (no gas or bloating) soluble dietary fiber that improves both constipation and diarrhea (such as with IBS), normalizes blood sugar, reduces LDL ("bad") cholesterol, and can help with weight loss. Each type of dietary fiber has different effects, and a "high fiber" diet in general won't necessarily provide the same benefits, especially for conditions like Irritable Bowel Syndrome[1].
At a high level:
Thoughts on this recent finding?
https://www.consumerlab.com/news/best-psyllium-fiber-supplements-2024/02-29-2024/
This is the second of a two-post series on foom (previous post) and doom (this post).
The last post talked about how I expect future AI to be different from present AI. This post will argue that, absent some future conceptual breakthrough, this future AI will be of a type that will be egregiously misaligned and scheming; a type that ruthlessly pursues goals with callous indifference to whether people, even its own programmers and users, live or die; and more generally a type of AI that is not even ‘slightly nice’.
I will particularly focus on exactly how and why I differ from the LLM-focused researchers who wind up with (from my perspective) bizarrely over-optimistic beliefs like “P(doom) ≲ 50%”.[1]
In particular, I will argue...
That makes sense. Although I don't think that non-behavioral training is a magic bullet either. And I don't think behavioral training becomes doomed when you hit an AI capable of scheming, if it was working right up until then. Scheming and deception would allow an AI to hide its goals but not change its goals.
What might cause an AI to change its goals is the reflection I mention, which would probably happen at around the same level of intelligence as scheming and deceptive alignment. But it's a different effect. As with your point, I think doomed is ...
Come on out to the next ACX (Astral Codex Ten) Montreal Meetup! This week, we're reading Orienting Toward Wizard Power, by John Wentworth. The post discusses the distinction between Wizard Power and King Power.
I strongly recommend this post, which is quite good (and short), as well as the optional readings, which are also excellent.
Optional readings:
Feel free to suggest topics or readings for future meetups on this form. Seriously, I'm struggling here. :P
Venue: L'Esplanade Tranquille, 1442 Clark. Rough location here: https://plus.codes/87Q8GC5P+P2R. Note: join our Discord server to receive last-minute information in case of bad weather.
Date & Time: Saturday,...