All of jkraybill's Comments + Replies

As a poker player, I'd say this post is the best articulation I've read of why optimal tournament play is so different from optimal cash-game play. Thanks for that!

6meijer1973
Agreed, one of the objectives of a game is to not die during the game. This is also true for possibly fatal experiments like inventing AGI. You have one or a few shots to get it right. But to win, you've got to stay in the game.

Haven't read that book; I've added it to the top of my list. Thanks for the reference!

But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of 'cultural technology', which then allowed a runway of cultural-genetic co-evolution (e.g. food-processing technology -> smaller stomachs and bigger brains -> even more culture -> bigger brains even more of an advantage, etc.)

One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow pl... (read more)

3lewis smith
Well, maybe you should read the book! I think there are a few concrete points you can disagree on.

I'm not an expert, but I'm not so sure that this is right; I think that anatomically modern humans already had significantly better abilities to learn and transmit culture than other animals, because anatomically modern humans generally need to extensively prepare their food (cooking, grinding, etc.) in a culturally transmitted way. So by the time we get to sapiens, we are already pretty strongly on this trajectory.

I think there's an element of luck: other animals do have cultural transmission (for example, elephants and killer whales) but maybe aren't anatomically suited to discover fire and agriculture. Some quirks of group size likely also play a role. It's definitely a feedback loop, though; once you are an animal with culture, there is increased selection pressure to be better at culture, which creates more culture, etc.

I'm gonna go with absolutely yes; see my comment above about anatomically modern humans and food prep. I think you are severely underestimating the sophistication of hunter-gatherer technology and culture! The degree to which 'objective' measures of intelligence like IQ are culturally specific is an interesting question.

It is true that we have seen over two decades of alignment research, but the alignment community has been fairly small all this time. I'm wondering what a much larger community could have done.

I start to get concerned when I look at humanity's non-AI alignment successes and failures; we've had corporations for hundreds of years, and a significant portion of humanity has engaged in corporate alignment-related activities (regulation, lawmaking, governance, etc., assuming you consider those forces to be generally pro-alignment in principle). Corpor... (read more)

I've probably committed a felony by doing this, but I'm going to post a rebuttal written by GPT-4, along with my commentary on it. I'm a former debate competitor and judge, and I have found GPT-4 to be uncannily good at debate rebuttals. So here is what it came up with, and my comments. I think this is a relevant comment, because what GPT-4 has to say is very human-relevant.

Radiations from the Sun bounce off a piece of cheese and enter into the beady eyes of a mouse; its retinal cells detect the light; the energy of the photons triggers neural impulses; the

... (read more)

One of the things I think about a lot, and ask my biologist/anthropologist/philosopher friends, is: what does it take for something to actually be recognised as human-like by humans? For instance, I see human-like cognition and behaviour in most mammals, but this seems to be resisted almost by instinct by my human friends, who insist that humans are superior and vastly different. Why don't we have a deep appreciation for anthill architecture, or whale songs, or flamingo mating dances? These things all seem human-like to me, but are not accepted as forms of... (read more)

Hi, I have a few questions that I'm hoping will help me clarify some of the fundamental definitions. I totally get that these are problematic questions in the absence of consensus around these terms -- I'm hoping to have a few people weigh in, and I don't mind if answers are directly contradictory or my questions need to be rethought.

  • If it turns out that LLMs are a path to the first "true AGI" in the eyes of, say, the majority of AI practitioners, what would such a model need to be able to do, and at what level, to be considered AGI, that GPT-4 can't curre
... (read more)
2Boris Kashirin
It is important to remember that our ultimate goal is survival. If someone builds a system that may not meet the strict definition of AGI but still poses a significant threat to us, then the terminology itself becomes less relevant. In such cases, employing a 'taboo-your-words' approach can be beneficial.

Now let's think of intelligence as "pattern recognition". It is not all that intelligence is, but it is a big chunk of it, and it is a concrete thing we can point to and reason about while many other bits are not even known.[1] In that case, GI is general/meta/deep pattern recognition: patterns about patterns, and patterns that apply to many practical cases, something like that.

An obvious thing to note here: the ability to solve problems can be based on a large number of shallow patterns or a small number of deep patterns. We are pretty sure that a significant part of LLM capabilities is the shallow-pattern case, but there are hints of at least some deep patterns appearing.

And I think that points to some answers: LLMs appear intelligent by the sheer amount of shallow patterns. But for a system to be dangerous, the number of required shallow patterns is so large that it is essentially impossible to achieve. So we can meaningfully say it is not dangerous, it is not AGI... except, as mentioned earlier, there seem to be some deep patterns emerging. And we don't know how many.

As for the pre-home-computer-era researchers, I bet they could not imagine the amount of shallow patterns that can be put into a system.

I hope this provided at least some idea of how to approach some of your questions, but of course in reality it is much more complicated: there is no sharp distinction between shallow and deep patterns, and there are other aspects of intelligence. For me at least it is surprising that it is possible to get GPT-3.5 with seemingly relatively shallow patterns, so I myself "could not imagine the amount of shallow patterns that can be put into a system".

1. ^ I tried Chat GPT on this paragraph, like t
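(To make the shallow-vs-deep distinction above concrete, here is a toy sketch in Python. It is purely illustrative and not from the comment: the task and the "patterns" are invented for the example. A system built from many shallow patterns behaves like a big lookup table, while a deep pattern is one rule that generalises.)

```python
# Toy sketch of shallow vs. deep patterns (hypothetical example).

# Shallow: memorise many specific input -> output pairs. This solves every
# case inside the memorised table, but the table must grow with the
# size of the problem space.
shallow_patterns = {(a, b): a + b for a in range(10) for b in range(10)}

def solve_shallow(a: int, b: int) -> int:
    return shallow_patterns[(a, b)]  # raises KeyError outside the table

# Deep: a single general rule covers the entire (unbounded) problem space.
def solve_deep(a: int, b: int) -> int:
    return a + b

print(solve_shallow(3, 4))   # 7   (covered by the memorised table)
print(solve_deep(123, 456))  # 579 (no table entry needed)
```

On this framing, the open question about LLMs is roughly how much of their behaviour is the lookup table and how much is the rule.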

Very interesting points; if I were still in middle management, these things would be keeping me up at night!

One point I query is "this is a totally new thing no manager has done before, but we're going to have to figure it out" -- is it that different from the various types of tool introduction & distribution / training / coaching that managers already do? I've spent a good amount of my career coaching my teams on how to be more productive using tools, running team show-and-tells from productive team members on why they're productive, sending team member... (read more)

3VojtaKovarik
I think the general vibe of "this hasn't been done before" might have been referring to fully automating the manager job, which possibly comes with very different scaling for human vs. AI managers. (You possibly remove the time bottleneck, allowing an unlimited number of meetings. So if you didn't need to coordinate the low-level workers, you could have a single manager for an infinite number of workers. Of course, in practice, you do need to coordinate somewhat, so there will be other bottlenecks. But still, removing a bottleneck could change things dramatically.)

It would be pretty nuts if you rewarded it for being able to red-team itself -- like, that's deliberately training it to go off the rails, and I thiiiiink that would seem so even to non-paranoid people? Maybe I'm wrong.

I'm actually most alarmed on this vector, these days. We're already seeing people give LLMs completely untested toolsets - web, filesystem, physical bots, etc. - and "friendly" hacks like Reddit jailbreaks and ChaosGPT. Doesn't it seem like we are only a couple of steps before a bad actor produces an ideal red-team agent, and then abuses it rather th... (read more)

Seeing this frantic race from random people to give GPT-4 dangerous tools and walking-around money, I agree: the risk is massively exacerbated by giving the "parent" AIs to humans.

Upon reflection, should that be surprising? Are humans "aligned" in the way we would want AI to be aligned? If so, we must acknowledge that humanity regularly produces serial killers and terrorists (etc.). That doesn't seem ideal. How much more aligned can we expect a technology we produce to be than our own species?

If we view the birth of AGI as the birth of a new kind of child, to me, th... (read more)

The Doomsday Clock is at 23:58:30, but maybe that's not what you meant? I think it was way off in the Cuban Missile Crisis era, but these days it seems more accurate, and maybe more optimistic than I would be. They do accommodate x-risks of various types.

1Seth Herd
I'd need that in p(doom) per year to do any useful reasoning when weighing it against AGI misalignment x-risk.
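(For reference, a cumulative risk estimate converts to a per-year figure under one simple assumption, a constant annual hazard rate. The formula below is the standard conversion; the numbers are purely illustrative, not anyone's actual estimate.)

$$P_{\text{cum}} = 1 - (1 - p_{\text{year}})^{N} \quad\Longrightarrow\quad p_{\text{year}} = 1 - (1 - P_{\text{cum}})^{1/N}$$

For example, an illustrative 50% cumulative risk over 30 years works out to $p_{\text{year}} = 1 - 0.5^{1/30} \approx 2.3\%$ per year.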

You have a decent set of arguments related to UBI as it may be conceived today, but I think it doesn't accommodate the future, or even where we are right now, in terms of worker productivity relative to capital profitability.

There's a longer-term x-risk for non-major (US/CN/IN/etc.) countries - especially, to my mind, AU, since I live here - that isn't being discussed as much as it should be, since it's already been happening for decades and will only accelerate with tech/AI-centric developments: where is the tax/revenue base going?

This dream of technology unlocking U... (read more)

1dr_s
100% agree. Worse, I think you can't really get to the Star Trek-like future and stay there unless you give the people a way to lock it in place and not have their rights taken away.

Sam Altman speaks explicitly about "capturing all the value in the world" and redistributing it in the form of UBI, but that's... like... a Saturday morning cartoon supervillain's plan. "Get all the money and then give it to everyone else, trust me." Even assuming he is being 100% sincere, you can't expect things to go that smoothly, or that system to fix the problems of the entire world rather than the US alone, or that he'll be allowed to do it by those around him, or that he'll be the one to win the AI race.

This is like a modern version of "just have an absolute monarch and trust that he'll be an enlightened, wise dude with everyone's best interests at heart". There's a reason why that never worked.