Zack_M_Davis

Comments


Simplicia: But how do you know that? Obviously, an arbitrarily powerful expected utility maximizer would kill all humans unless it had a very special utility function. Obviously, there exist programs which behave like a webtext-next-token-predictor given webtext-like input but superintelligently kill all humans on out-of-distribution inputs. Obviously, an arbitrarily powerful expected utility maximizer would be good at predicting webtext. But it's not at all clear that using gradient descent to approximate the webtext next-token-function gives you an arbitrarily powerful expected utility maximizer. Why would that happen? I'm not denying any of the vNM axioms; I'm saying I don't think the vNM axioms imply that.

(Self-review.) I think this pt. 2 is the second most interesting entry in my Whole Dumb Story memoir sequence. (Pt. 1 deals with more niche psychology stuff than the philosophical malpractice covered here; pt. 3 is more of a grab-bag of stuff that happened between April 2019 and January 2021; pt. 4 is the climax. Expect the denouement pt. 5 in mid-2025.)

I feel a lot more at peace having this out there. (If we can't have justice, sanity, or language, at least I got to tell my story about trying to protect them.)

The 8 karma in 97 votes is kind of funny in how nakedly political it is. (I think it was higher before the post got some negative attention on Twitter.)

Given how much prereading and editing effort had already gone into this, it's disappointing that I didn't get the ending right the first time. (I ended up rewriting some of the paragraphs at the end after initial publication, when it didn't land in the comments section the way I wanted it to.)

Subsection titles would also have been a better choice for such a long piece (which was rectified for the publication of pts. 3 and 4); I may yet add them.

(Self-review.) I'm as proud of this post as I am disappointed that it was necessary. As I explained to my prereaders on 19 October 2023:

My intent is to raise the level of the discourse by presenting an engagement between the standard MIRI view and a view that's relatively optimistic about prosaic alignment. The bet is that my simulated dialogue (with me writing both parts) can do a better job than the arguments being had by separate people in the wild; I think Simplicia understands things that e.g. Matthew Barnett doesn't. (The karma system loved my dialogue comment on Barnett's post; this draft is trying to scale that up.)

I'm annoyed at the discourse situation where MIRI thinks we're dead for the same fundamental reasons as in 2016, but meanwhile, there are a lot of people who are looking at GPT-4, and thinking, "Hey, this thing seems pretty smart and general and good at Doing What I Mean, in contrast to how 2016-era MIRI said that we didn't know how to get an agent to fill a cauldron; maybe alignment is easy??"—to which MIRI's response has been (my uncharitable paraphrase), "You people are idiots who didn't understand the core arguments; the cauldron thing was a toy illustration of a deep math thing; we never said Midjourney can't exist".

And just, I agree that Midjourney doesn't refute the deep math thing and the people who don't realize that are idiots, but I think the idiots deserve a better response!—particularly insofar as we're worried about transformative AI looking a lot like the systems we see now, rather than taking a "LLMs are nothing like AGI" stance.

Simplicia isn't supposed to pass the ITT of anyone in particular, but if the other character [...] doesn't match the MIRI party line, that's definitely a serious flaw that needs to be fixed!

I think the dialogue format works particularly well in cases like this where the author or the audience is supposed to find both viewpoints broadly credible, rather than an author avatar beating up on a strawman. (I did have some fun with Doomimir's characterization, but that shouldn't affect the arguments.)

This is a complicated topic. To the extent that I was having my own doubts about the "orthodox" pessimist story in the GPT-4 era, it was liberating to be able to explore those doubts in public by putting them in the mouth of a character with the designated-idiot name, without staking my reputation on Simplicia's counterarguments necessarily being correct.

Giving both characters pejorative names makes it fair. In an earlier draft, Doomimir was "Doomer", but I was already using the "Optimistovna" and "Doomovitch" patronymics (I had been consuming fiction about the Soviet Union recently) and decided it should sound more Slavic. (Plus, "-mir" (мир) can mean "world".)

Retrospectives are great, but I'm very confused at the juxtaposition of the Lightcone Offices being maybe net-harmful in early 2023 and Lighthaven being a priority in early 2025. Isn't the latter basically just a higher-production-value version of the former? What changed? (Or after taking the needed "space to reconsider our relationship to this whole ecosystem", did you decide that the ecosystem is OK after all?)

Speaking as someone in the process of graduating college fifteen years late, this is what I wish I knew twenty years ago. Send this to every teenager you know.

At the time, I remarked to some friends that it felt weird that this was being presented as a new insight to this audience in 2023 rather than already being local conventional wisdom.[1] (Compare "Bad Intent Is a Disposition, Not a Feeling" (2017) or "Algorithmic Intent" (2020).) Better late than never!


  1. The "status" line at the top does characterize it as partially "common wisdom", but it's currently #14 in the 2023 Review 1000+ karma voting, suggesting novelty to the audience. ↩︎

But he's not complaining about the traditional pages of search results! He's complaining about the authoritative-looking Knowledge Panel to the side:

Obviously it's not Google's fault that some obscure SF web sites have stolen pictures from the Monash University web site of Professor Gregory K Egan and pretended that they're pictures of me ... but it is Google's fault when Google claim to have assembled a mini-biography of someone called "Greg Egan" in which the information all refers to one person (a science fiction writer), while the picture is of someone else entirely (a professor of engineering). [...] this system is just an amateurish mash-up. And by displaying results from disparate sources in a manner that implies that they refer to the same subject, it acts as a mindless stupidity amplifier that disseminates and entrenches existing errors.

Regarding the site URLs, I don't know, I think it's pretty common for people to have a problem that would take five minutes to fix if you're a specialist who already knows what you're doing, but non-specialists just reach for the first duct-tape solution that comes to mind without noticing how bad it is.

Like: you have a website at myname.somewebhost.com. One day, you buy myname.net, but end up following a tutorial that makes it a redirect rather than a proper CNAME or A record, because you don't know what those are. You're happy that your new domain works in that it's showing your website, but you notice that the address bar is still showing the old URL. So you say, "Huh, I guess I'll put a note on my page template telling people to use the myname.net address in case I ever change webhosts" and call it a day. I guess you could characterize that as a form of "cognitive rigidity", but "fanaticism"? Really?
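To make the distinction concrete, here's a minimal sketch (my addition, not from the original scenario) using the comment's hypothetical myname.net / myname.somewebhost.com names and the third-party dnspython package. It just reports whether the custom domain actually aliases the webhost via a DNS record or is presumably relying on a redirect:

```python
# Minimal sketch, assuming the hypothetical names from the comment above.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

def describe_setup(domain: str) -> None:
    """Report whether `domain` aliases the webhost via a CNAME record."""
    try:
        answer = dns.resolver.resolve(domain, "CNAME")
        for record in answer:
            # With a CNAME (or an A record pointing at the host's IP),
            # the browser keeps showing the custom domain in the address bar.
            print(f"{domain} is a CNAME for {record.target}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        # No such record: the domain is probably just issuing an HTTP
        # redirect to the webhost, which is why the address bar snaps
        # back to the old URL.
        print(f"{domain} has no CNAME; likely a redirect-only setup")

describe_setup("myname.net")
```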

I agree that Egan still hasn't seen the writing on the wall regarding deep learning. (A line in "Death and the Gorgon" mentions Sherlock's "own statistical tables", which is not what someone familiar with the technology would write.)

I agree that preëmptive blocking is kind of weird, but I also think your locked account with "Follow requests ignored due to terrible UI" is kind of weird.

It's implied in the first verse of "Great Transhumanist Future."

One evening as the sun went down
That big old fire was wasteful,
A coder looked up from his work,
And he said, “Folks, that’s distasteful,

(This comment points out less important technical errata.)

ChatGPT [...] This was back in the GPT2 / GPT2.5 era

ChatGPT never ran on GPT-2, and GPT-2.5 wasn't a thing.

with negative RL signals associated with it?

That wouldn't have happened. Pretraining doesn't do RL, and I don't think anyone would have thrown a novel chapter into the supervised fine-tuning and RLHF phases of training.
