Zack_M_Davis

Comments

  1. Arguments from moral realism, fully robust alignment, that ‘good enough’ alignment is good enough in practice, and related concepts.

What is moral realism doing in the same taxon with fully robust and good-enough alignment? (This seems like a huge, foundational worldview gap; people who think alignment is easy still buy the orthogonality thesis.)

  1. Arguments from good outcomes being so cheap the AIs will allow them.

If you're putting this below the Point of No Return, then I don't think you've understood the argument. The claim isn't that good outcomes are so cheap that even a paperclip maximizer would implement them. (Obviously, a paperclip maximizer kills you and uses the atoms to make paperclips.)

The claim is that it's plausible for AIs to have some human-regarding preferences even if we haven't really succeeded at alignment, and that good outcomes for existing humans are so cheap that AIs don't have to care about the humans very much in order to spend a tiny fraction of their resources on them. (Compare to how some humans care enough about animal welfare to spend a tiny fraction of our resources helping nonhuman animals that already exist, in a way that doesn't seem like it would be satisfied by killing existing animals and replacing them with artificial pets.)

There are lots of reasons one might disagree with this: maybe you don't think human-regarding preferences are plausible at all, maybe you think accidental human-regarding preferences are bad rather than good (the humans in "Three Worlds Collide" didn't take the Normal Ending lying down), maybe you think it's insane to have such a scope-insensitive concept of good outcomes—but putting it below arguments from science fiction or blind faith (!) is silly.

in a world where the median person is John Wentworth [...] on Earth (as opposed to Wentworld)

Who? There's no reason to indulge this narcissistic "Things would be better in a world where people were more like meeeeeee, unlike stupid Earth [i.e., the actually existing world containing all actually existing humans]" meme when the comparison relevant to the post's thesis is just "a world in which humans have less need for dominance-status", which is conceptually simpler, because it doesn't drag in irrelevant questions of who this Swentworth person is and whether they have an unusually low need for dominance-status.

(The fact that I feel motivated to write this comment probably owes to my need for dominance-status being within the normal range; I construe statements about an author's medianworld being superior to the real world as a covert status claim that I have an interest in contesting.)

2019 was a more innocent time. I grieve what we've lost.

It's a fuzzy Sorites-like distinction, but I think I'm more sympathetic to trying to route around a particular interlocutor's biases in the context of a direct conversation with a particular person (like a comment or Tweet thread) than I am in writing directed "at the world" (like top-level posts), because the more something is directed "at the world", the more you should expect that many of your readers know things that you don't, such that the humility argument for honesty applies forcefully.

Answer by Zack_M_Davis

Just because you don't notice when you're dreaming, doesn't mean that dream experiences could just as well be waking experiences. The map is not the territory; Mach's principle is about phenomena that can't be told apart, not just anything you happen not to notice the differences between.

When I was recovering from a psychotic break in 2013, I remember hearing the beeping of a crosswalk signal, and thinking that it sounded like some sort of medical monitor, and wondering briefly if I was actually on my deathbed in a hospital, interpreting the monitor sound as a crosswalk signal and only imagining that I was healthy and outdoors—or perhaps, both at once: the two versions of reality being compatible with my experiences and therefore equally real. In retrospect, it seems clear that the crosswalk signal was real and the hospital idea was just a delusion: a world where people have delusions sometimes is more parsimonious than a world where people's experiences sometimes reflect multiple alternative realities (exactly when they would be said to be experiencing delusions in at least one of those realities).

(I'm interested (context), but I'll be mostly offline the 15th through 18th.)

Here's the comment I sent using the contact form on my representative's website.

Dear Assemblymember Grayson:

I am writing to urge you to consider voting Yes on SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. How our civilization handles machine intelligence is of critical importance to the future of humanity (or lack thereof), and from what I've heard from sources I trust, this bill seems like a good first step: experts such as Turing Award winners Yoshua Bengio and Stuart Russell support the bill (https://time.com/7008947/california-ai-bill-letter/), and Eric Neyman of the Alignment Research Center described it as "narrowly tailored to address the most pressing AI risks without inhibiting innovation" (https://x.com/ericneyman/status/1823749878641779006). Thank you for your consideration. I am,

Your faithful constituent,
Zack M. Davis

This is awful. What do most of these items have to do with acquiring the map that reflects the territory? (I got 65, but that's because I've wasted my life in this lame cult. It's not cool or funny.)

On the one hand, I also wish Shulman would go into more detail on the "Supposing we've solved alignment and interpretability" part. (I still balk a bit at "in democracies" talk, but less so than I did a couple years ago.) On the other hand, I also wish you would go into more detail on the "Humans don't benefit even if you 'solve alignment'" part. Maybe there's a way to meet in the middle??

It seems pretty plausible to me that if AI is bad, then rationalism did a lot to educate and spur on AI development. Sorry folks.

What? This apology makes no sense. Of course rationalism is Lawful Neutral. The laws of cognition aren't, can't be, on anyone's side.
