All of Nnotm's Comments + Replies

I think it approaches it from a different level of abstraction though. Alignment faking is the strategy used to achieve goal guarding. I think both can be useful framings.

Hah, I've listened to "Half an hour before Dawn in San Francisco" a lot, and I only realized just now that the AI narration read "65,000,000" as "sixty-five thousand thousand", which I always thought was an intentional poetic choice

Note that the penultimate paragraph of the post says

> We do still need placebo groups.

2Aleksander
Oh, that’s pretty bad - I somehow managed to present what the post itself said as a contradiction to the post. Apologies, and thank you for pointing it out

In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it's an attractor state that is hard or impossible to get out of. Bugs certainly aren't bound to evolve into sentient species, but at least there's a chance.

Bugs could potentially result in a new sentient species many millions of years down the line. With super-AI that happens to be non-sentient, there is no such hope.

6Dagon
If it's possible for super-intelligent AI to be non-sentient, wouldn't it be possible for insects to evolve non-sentient intelligence as well?  I guess I didn't assume "non-sentient" in the definition of "unaligned".

Thank you for this! I had listened to the LessWrong audio of the last one just before seeing your comment about making your version, and I waited before listening to this one in the hope that you would post yours.

4Askwho
Thanks! One coming up for the other Zvi AI post shortly!

Missed opportunity to replace "if the box contains a diamond" with the more thematically appropriate "if the chest contains a treasure", though

Much sweat and some tears were spent on trying to get something like that working, but the Shoggoths are fickle

FWIW the AI audio seems to not take that into account

Thanks, I've found this pretty insightful. In particular, I hadn't considered that even fully understanding static GPT doesn't necessarily bring you close to understanding dynamic GPT - this makes me update towards mechinterp being slightly less promising than I was thinking.

Quick note:
> a page-state can be entirely specified by 9628 digits or a 31 kB file.
I think that's a 31 kb (kilobit) file, but only a 4 kB (kilobyte) file?
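For what it's worth, the arithmetic (assuming base-10 digits and binary prefixes) works out like this:

```python
import math

digits = 9628
# Information content of 9628 decimal digits, in bits
bits = digits * math.log2(10)

print(bits / 1024)      # ≈ 31.2 kilobits
print(bits / 8 / 1024)  # ≈ 3.9 kilobytes
```

So "31" is right for kilobits, but the file size in kilobytes is about 4.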

I think an important difference between humans and these Go AIs is memory: If we find a strategy that reliably beats human experts, they will either remember losing to it or hear about it and it won't work the next time someone tries it. If we find a strategy that reliably beats an AI, that will keep happening until it's retrained in some way.

Are you familiar with Aubrey de Grey's thinking on this?

To summarize, from memory, cancers can be broadly divided into two classes:

  • about 85% of cancers rely on lengthening telomeres via telomerase
  • the other 15% of cancers rely on some alternative lengthening of telomeres mechanism ("ALT")

The first, big class, can be solved if we can prevent cancers from using telomerase. In his 2007 book "Ending Aging", de Grey and his co-author Michael Rae wrote about "Whole-body interdiction of lengthening of telomeres" (WILT), which was about using gene therapy to remove... (read more)

Thanks, I will read that! Though just after you commented I found this in my history, which is the post I meant: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth

I think there was a post/short-story on lesswrong a few months ago about a future language model becoming an ASI because someone asked it to pretend it was an ASI agent and it correctly predicted the next tokens, or something like that. Anyone know what that post was?

9gwern
https://www.lesswrong.com/posts/a5e9arCnbDac9Doig/it-looks-like-you-re-trying-to-take-over-the-world

Second try: Looking at scatterplots of any 3 out of 5 of those dimensions, and interpreting each 5-tuple of numbers as one point, you can see the same structures that are visible in the 2d plot, the parabola and a line - though viewed from a different angle the line becomes a plane, and from yet another angle the parabola disappears.

Looking at scatterplots of any 3 out of 5 of those dimensions, it looks pretty random, much less structure than in the 2d plot.

Edit: Oh, wait, I've been using chunks of 419 numbers as the dimensions but should be interleaving them
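For clarity, the two readings of the sequence (a minimal sketch with a stand-in list of 20 values, not the actual data):

```python
data = list(range(20))  # stand-in for the real sequence; 5 dimensions * 4 points
n = len(data) // 5

# "Chunks" reading: dimension d is the d-th block of n consecutive numbers,
# so point i is (data[i], data[n+i], data[2n+i], ...).
chunked = [tuple(data[d * n + i] for d in range(5)) for i in range(n)]

# "Interleaved" reading: each consecutive run of 5 numbers is one point.
interleaved = [tuple(data[i * 5 + d] for d in range(5)) for i in range(n)]

print(chunked[0])      # (0, 4, 8, 12, 16)
print(interleaved[0])  # (0, 1, 2, 3, 4)
```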

[This comment is no longer endorsed by its author]

The double line I was talking about is actually a triple line, at indices 366, 677, and 1244. The lines before come from fairly different places, and they diverge pretty quickly afterwards:
However, just above it, there's another duplicate line, at indices 1038 and 1901:
These start out closer together and also take a little bit longer to diverge.

This might be indicative of a larger pattern in which points that are close together and have similar histories tend to have their next steps close to each other as well.

For what it's worth, colored by how soon in the sequence they appear (blue is early, red is late) (Also note I interpreted it as 2094 points, with each number first used in the x-dimension and then in the y-dimension):
Note that one line near the top appears to be drawn twice, confirming, if nothing else, that it's not a succession rule that depends only on the previous value, since the paths diverge afterwards.
Still, comparing those two sections could be interesting.

Interpreting the data as unsigned 8-bit integers and plotting it as an image with width 8 results in this (only the first few rows shown):

The rest of the image looks pretty similar. There is an almost continuous high-intensity column (yellow, the second-to-last column), and the values in the first 6 columns repeat exactly in the next row pretty often, but not always.
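A minimal sketch of that interpretation (with stand-in bytes rather than the actual file):

```python
# Interpret a byte blob as unsigned 8-bit integers and arrange them
# as rows of width 8, as in the image described above.
blob = bytes(range(48))  # stand-in for the real data
width = 8
rows = [list(blob[i:i + width]) for i in range(0, len(blob), width)]

for row in rows[:3]:  # only the first few rows
    print(row)
```

Each row is then one scanline of the plotted image, with intensity given by the byte value.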

The original DALL-E was capable of having almost the same image with slight variations in one generation, so I'd be interested to see something like "A photograph of a village in 1900 on the top, and the same photo colorized on the bottom".

It did a fairly decent job, though it apparently prefers putting the colorized version on top.

Note that getting DALL-E to work well is a bit like GPT-3: you can usually get much better results by making a few iterations on the prompt.

I haven't read the Luminosity sequence, but I just spent some time looking at the list of all articles to see if I could spot a title that sounds like it could be it, and I found it: Which Parts are "Me"? - I suppose the title I had in mind was reasonably close.

Is there a post as part of the sequences that's roughly about how your personality is made up of different aspects, and some of them you consider to be essentially part of who you are, and others (say, for example, maybe the mechanisms responsible for akrasia) you wouldn't mind dropping without considering that an important difference to who you are?

For years I was thinking Truly Part Of You was about that, but it turns out, it's about something completely different.

Now I'm wondering if I had just imagined that post existing or just mentally linked the wrong title to it.

3hamnox
I don't remember reading anything like that. If I had to make a wild guess of where to find that topic I'd assume it was part of the Luminosity sequence.

One question was whether it's worth working on anything other than AGI, given that AGI will likely be able to solve these problems. He agreed, saying he used to work with 1000 companies at YC but now only does a handful of things, partly just to get a break from thinking about AGI.

Is that to be interpreted as "finding out whether UFOs are aliens is important" or "the fact that UFOs are aliens is important"?

1James_Miller
The second.

As I understand it, the idea with the problems listed in the article is that their solutions are supposed to be fundamental design principles of the AI, rather than addons to fix loopholes.

Augmenting ourselves is probably a good idea to do *in addition* to AI safety research, but I think it's dangerous to do it *instead* of AI safety research. It's far from impossible that artificial intelligence could at some point gain intelligence much faster than we can augment the rather messy human brain, at which point it *needs* to be designed in a safe way.

1Jeevan
I'd say we start augmenting the human brain until it's completely replaced by a post-biological counterpart, and from there rapid improvements can start taking place, but unless we start early I doubt we'll be able to catch up with AI. I agree that this needs to happen in tandem with AI safety.

AI alignment is not about trying to outsmart the AI, it's about making sure that what the AI wants is what we want.

If it were actually about figuring out all possible loopholes and preventing them, I would agree that it's a futile endeavor.

A correctly designed AI wouldn't have to be banned from exploring any philosophical or introspective considerations, since regardless of what it discovers there, its goals would still be aligned with what we want. Discovering *why* it has these goals is similar to humans discovering why we have our m... (read more)

1Jeevan
The formal statement of the AI alignment problem seems to me very much like stating all possible loopholes and plugging them. This endeavor seems to be as difficult as, or even more difficult than, discovering that ultimate generalized master algorithm. I still see augmenting ourselves as the only way to maybe keep the alignment of lesser intelligences possible. As we augment, we can simultaneously make sure our corresponding levels of artificial intelligence remain aligned. Not to mention it'd be comparatively much easier to improve upon our existing faculties than to come up with an entire replica of our thinking machines. AI alignment could be possible, sure, if we overcome one of the most difficult problems in research history (as you said, formally stating our end goals), but I'm not sure our current intelligences are up to the mark, in the same way we're struggling to discover the unified theory of everything. Turing defined his test for general human-level intelligence: he thought that if an agent was able to hold a human-like conversation, it must be AGI. He never expected narrow AIs to be all over the place and to beat his test as early as 2011 with meager chatbots. Similarly, we can never see what kind of unexpected stuff an AGI might throw at us, such that the bleeding-edge theories we came up with a few hours ago start looking like outdated historical Turing tests.

Whether or not it would question its reality mostly depends on what you mean by that - it would almost certainly be useful to figure out how the world works, and especially how the AI itself works, for any AI. It might also be useful to figure out the reason for which it was created.

But, unless it was explicitly programmed in, this would likely not be a motivation in and of itself, rather, it would simply be useful for accomplishing its actual goal.

I'd say the reason why humans place such high value in figuring out philosophical issues is to a large e... (read more)

1Jeevan
Maybe our philosophical quests come from a deep-seated curiosity, which is very essential for exploring our environment and discovering liabilities/advantages that can be very beneficial. Most animals don't care about the twinkling points of light in the night sky, but our curiosity is so fine-tuned and magnified that we're morbidly curious about almost everything there is to be curious about. Only the emotion of fear safeguards us a bit, so we don't just jump off cliffs because we're curious what prolonged falling would feel like. That said, an AI system without any curiosity effectively won't be able to take maximum advantage and find the most optimal path, since that requires experimenting with plenty of different strategies. Do we then ban it from inspecting certain thought experiments like philosophy and introspection, and from the ability to examine itself? (If we let it examine itself, it might discover these bans and explore why they are in place.) We cannot build a self-improving AI without letting it examine itself and make appropriate changes to its code. There could possibly be several loopholes like this. Can we really find them all and plug them foolproof? Wouldn't an ASI several orders of magnitude more intelligent than us be able to find such a loophole and overcome the alignment set up by us? Is our hubris really that huge that we're confident we'll be able to outsmart an intelligence smarter than us?

It would need some kind of reason to change its goals - one might call it a motivation. The only motivations it has available, though, are its final goals, and those (by default) don't include changing the final goals.

Humans never had the final goal of replicating their genes. They just evolved to want to have sex. (One could perhaps say that the genes themselves had the goal of replicating, and implemented this by giving the humans the goal of having sex.) Reward hacking doesn't involve changing the terminal goal, just fulfilling it in unexpected ways (which is one reason why reinforcement learning might be a bad idea for safe AI).

4Jeevan
Interesting. Would a human-level or beyond human-level intelligence ever question its own reality and wonder where and what it was? Would it take it up as a motivation to dedicate resources to figuring out why and for what end it existed and is doing all the things that its doing?

What you're saying goes against the orthogonality thesis, widely believed here, which essentially states that what goals an agent has is independent of how smart it is. If an agent has a certain set of goals programmed in, there is no reason for it to change this set of goals as it becomes smarter (because changing its goals would not be beneficial to achieving its current goals).

In this example, if an agent has the sole goal of fulfilling the wishes of a particular human, there is no reason for it to change this goal once it becomes an ASI. A... (read more)

2Jeevan
Apart from the anthropomorphism with "scorn" and "petty": wouldn't an ASI (once it has self-reflection/self-criticism capabilities, a.k.a. the ability to think for itself like conscious humans do) still retain its primary goals without evolving its own? Humans have long since discarded the goal of self-replication of their genes - we can now very easily reward-hack it with contraception, and it won't be long before we completely disregard our genes' goals and start going post-biological. Wouldn't an ASI develop similar goals of its own?

Why wait until someone wants the money? Shouldn't the AI try to send $5 to everyone, with a note attached reading "Here is a tribute; please don't kill a huge number of people", regardless of whether they ask for it or not?

Sounds pretty cool, definitely going to try it out some.

Oh, and by the way, you wrote "Inpsect" instead of "Inspect" at the end of page 27.

1So8res
Fixed, thanks.
2hxka
Now the second one doesn't work either. What would we do without archive.org? https://web.archive.org/web/20070704165957/http://www.singinst.org/blog/2007/10/14/the-meaning-that-immortality-gives-to-life/

That's true, though I think "optimal" would be a better word for that than "correct".

There are no "correct" or "incorrect" definitions, though, are there? Definitions are subjective, it's only important that participants of a discussion can agree on one.

1Lumifer
Well... Definitions that map badly onto the underlying reality are inconvenient at best and actively misleading at worst. Besides, definitions do not exist in a vacuum. They can be evaluated by their fitness to a purpose which means that if you specify a context you can speak of correct and incorrect definitions.
1hyporational
Even agreement isn't necessary, but successful communication would be nice.

I took it. I was surprised how far I was off with Europe.

I know this is over a year old, but I still feel like this is worth pointing out:

If you can get the positive likelihood ratio as the meaning of a positive result, then you can get the negative likelihood ratio as the meaning of a negative result just by reworking the problem.

You weren't using the likelihood ratio, which is one value, 8.33... in this case. You were using the numbers you use to get the likelihood ratio.

But the same likelihood ratio would also occur if you had 8% and 0.96%, and then the "negative likelihood ratio" would be about 0.93 instead of 0.22.

You simply need three numbers. Two won't suffice.
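Concretely, a sketch with the pair of probabilities that yields the 8.33 ratio (80% true-positive rate, 9.6% false-positive rate, presumably the numbers from the original problem), alongside the alternative pair mentioned above:

```python
def likelihood_ratios(p_pos_given_disease, p_pos_given_healthy):
    """Return (positive LR, negative LR) from the two conditional probabilities."""
    lr_pos = p_pos_given_disease / p_pos_given_healthy
    lr_neg = (1 - p_pos_given_disease) / (1 - p_pos_given_healthy)
    return lr_pos, lr_neg

a = likelihood_ratios(0.80, 0.096)   # LR+ ≈ 8.33, LR- ≈ 0.22
b = likelihood_ratios(0.08, 0.0096)  # LR+ ≈ 8.33, LR- ≈ 0.93
```

Both pairs share the same positive likelihood ratio of about 8.33, but their negative likelihood ratios differ, which is why the single number 8.33 can't stand in for the two underlying probabilities.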