All of tilek's Comments + Replies

tilek

AI-Caused Extinction Ingredients

Below is what I see as required for AI-Caused Extinction to happen in the next few tens of years (roughly 2024-2050). In parentheses is my very approximate probability estimate as of 2024-07-25, assuming all previous steps have happened (a rough sketch of how these estimates combine is given after the list).

  1. AI technologies continue to develop at approximately current speeds or faster (80%)
  2. AI manages to reach a level where it can cause an extinction (90%)
  3. AI that can cause an extinction did not have enough alignment mechanisms in place (90%)
  4. AI executes an unaligned scenario (low, maybe less than
... (read more)
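If one reads these as a chain of conditional probabilities, the overall estimate is simply their product. Here is a minimal sketch of that combination in Python; the value used for the truncated steps 4 and onward is a placeholder assumption (the original only says "low"), not a number from the comment.

```python
# Combine the step estimates above as a chain of conditional probabilities.
# Each value is P(step | all previous steps happened).
p_steps = [
    0.80,  # 1. AI development continues at ~current speed or faster
    0.90,  # 2. AI reaches a level where it can cause an extinction
    0.90,  # 3. no sufficient alignment mechanisms are in place
]
p_rest = 0.10  # placeholder for the truncated steps 4+ ("low, maybe less than...")

p_extinction = p_rest
for p in p_steps:
    p_extinction *= p

print(f"P(AI-caused extinction by ~2050) ~= {p_extinction:.3f}")  # 0.8*0.9*0.9*0.1 ≈ 0.065
```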
gilch
I think #1 implies #2 pretty strongly, but OK, I was mostly with you until #4. Why is it that low? I think #3 implies #4, with high probability. Why don't you?

#5 and #6 don't seem like strong objections. Multiple scenarios could happen multiple times in the interval we are talking about. Only one has to deal the final blow for it to be final, and even blows we survive, we can't necessarily recover from, or recover from quickly. The weaker civilization gets, the less likely it is to survive the next blow.

We can hope that warning shots wake up the world enough to make further blows less likely, but consider that the opposite may be true. Damage leads to desperation, which leads to war, which leads to arms races, which leads to cutting corners on safety, which leads to the next blow. Or human manipulation/deception through AI leads to widespread mistrust, which prevents us from coordinating on our collective problems in time. Or AI success leads to dependence, which leads to reluctance to change course, which makes recovery harder. Or repeated survival leads to complacency until we boil the frog to death. Or some combination of these, or similar cascading failures. It depends on the nature of the scenario. There are lots of ways things could go wrong, many roads to ruin; disaster is disjunctive.

Would warnings even work? Those in the know are sounding the alarm already. Are we taking them seriously enough? If not, why do you expect this to change?
tilek

"AI will never be smarter than my dad."

 

I believe a ranked comparison of intelligence between two artificial or biological agents can only be done subjectively, with someone deciding what they value.

Additionally, I think there is no agreement on whether the definition of "intelligence" should include knowledge. For example, can you consider an AI "smart" if it doesn't know anything about humans?

On the other hand, I value very highly my dad's knowledge of my childhood and his model of my behavior built up across tens of years.  Thus, I will neve... (read more)

tilek

I see two related fundamental problems with the modern discourse around AI.

1) As with most words, there is no agreed-upon definition of the term "intelligence".

2) Intelligence is often used in a ranked comparison as a single dimension, e.g. "AI smarter than a human".

When people use the word "intelligence", they often seem to assume it includes various analytical, problem-solving, and learning skills. What's less clear is whether it includes creative skills, communication skills, emotional intelligence, etc.

I think because people often like simplifying conce... (read more)

gilch
I don't really have a problem with the term "intelligence" myself, but I see how it could carry anthropomorphic baggage for some people. However, I think the important parts are, in fact, analogous between AGI and humans. But I'm not attached to that particular word. One may as well say "competence" or "optimization power" without losing hold of the sense of "intelligence" we mean when we talk about AI.

In the study of human intelligence, it's useful to break down the g factor (what IQ tests purport to measure) into fluid and crystallized intelligence: the former being the processing power required to learn and act in novel situations, the latter being what has been learned and the ability to call upon and apply that knowledge.

"Cognitive skills" seems like a reasonably good framing for further discussion, but I think recent experience in the field contradicts your second problem, even given this framing. The Bitter Lesson says it well. Here are some relevant excerpts (it's worth a read and not that long).

Your conception of intelligence in the "cognitive skills" framing seems to be mainly about the crystallized sort: the knowledge and skills and the application thereof. You see how complex and multidimensional that is and object to the idea that collections of such should be well-ordered, making concepts like "smarter-than-human" if not wholly devoid of meaning, at least wrongheaded.

I agree that "competence" is ultimately a synonym for "skill", but you're neglecting fluid intelligence. We already know how to give computers the only "cognitive skills" that matter: the ones that let you acquire all the others, mainly the ability to learn. And that one can be brute-forced with more compute. All the complexity and multidimensionality you see emerge when something profoundly simple, algorithms measured in mere kilobytes of source code, interacts with data from the complex and multidimensional real world. In the idealized limit, what I call "intelligence" is AIXI.
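For reference, AIXI here is Hutter's idealized agent; a sketch of its standard expectimax definition (quoted from the literature, not from the comment above) is:

```latex
% Hutter's AIXI: at each step t, pick the action maximizing expected total
% reward up to horizon m, with environments q weighted by 2^{-ell(q)},
% a Solomonoff-style prior over programs for a universal Turing machine U.
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \big[\, r_t + \cdots + r_m \,\big]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here a, o, r are actions, observations, and rewards, and \ell(q) is the length of program q. The definition itself is compact, which is the point about profoundly simple algorithms meeting a complex world.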