Introduction

On more than one occasion, I've seen the following comparisons used to describe how a superintelligence might relate to/perceive humans:

  • Humans to ants
  • Humans to earthworms
  • And similar

More generally, people seem to believe that humans are incredibly far from the peak of attainable intelligence. And that's not at all obvious to me.

 

Argument

I suspect that the median human's cognitive capabilities are qualitatively closer to those of an optimal bounded superintelligence than to those of a honeybee. The human brain seems to be a universal learner. There are some concepts that no human can fully grasp, but those seem to be concepts that are too large to fit in the working memory of a human. And humans can overcome those working memory limitations with a pen and paper, a smartphone, a laptop or other technological aids.

There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time. A concept that no human could ever grasp seems like a concept that no agent could ever grasp. If it's computable, then a human can learn to compute it (even if they must do so with the aid of technology).

Somewhere in the progression from honeybees to humans, there is a phase shift to a universal learner. Our use of complex language/mathematics/abstraction seems like a difference in kind of cognition, not merely degree. I do not believe there are any such differences in kind ahead of us on the way to a bounded superintelligence.

 

I don't think "an agent whose cognitive capabilities are as far above humans as humans are above ants" is necessarily a well-defined, sensible or coherent concept. I don't think it means anything useful or points to anything real.

I do not believe there are any qualitatively more powerful engines of cognition than the human brain (more powerful in the sense that a Turing machine is more powerful than a finite state machine). There are engines of cognition with better serial/parallel processing speed, larger working memories, faster recall, etc. But they don't have some cognitive skill, on the level of "use of complex language/symbolic representation", that we lack. There is nothing they can learn that we are fundamentally incapable of learning (even if we need technological aid to learn it).
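To make the Turing-machine/finite-state-machine comparison concrete, the textbook example is the language a^n b^n (n copies of "a" followed by n copies of "b"): no finite state machine can recognise it, because that requires an unbounded counter, while any machine with unbounded memory can. A minimal sketch; the code and function name below are just my own illustration:

```python
def recognises_anbn(s: str) -> bool:
    """Return True iff s has the form 'a'*n + 'b'*n for some n >= 0.

    A finite state machine cannot decide this language: it would need a
    distinct state for every possible count of unmatched 'a's, but it only
    has finitely many states. With unbounded memory (here, a Python int
    used as a counter), the check is trivial.
    """
    count = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b' is malformed
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:       # more 'b's than 'a's so far
                return False
        else:
            return False        # unexpected symbol
    return count == 0


assert recognises_anbn("aaabbb")
assert not recognises_anbn("aaabb")
```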

The difference between a human and a bounded superintelligence is a difference of degree. It's not at all obvious to me that superintelligences would be cognitively superior to sufficiently enhanced brain emulations.

 

I am not even sure the "human - chimpanzee gap" is a sensible notion for informing expectations of superintelligence. That seems to be a difference of kind I simply don't think will manifest. Once you make the jump to universality, there's nowhere higher to jump to.

Perhaps, superintelligence is just an immensely smart human that also happens to be equipped with faster processing speeds, much larger working memories, larger attention spans, etc.

 

Addenda

And even then, there are still fundamental constraints on attainable intelligence (a rough numerical sketch of the physical limits follows the list):

  1. What can be computed
    1. Computational tractability
  2. What can be computed efficiently
    1. Computational complexity
  3. Translating computation to intelligence
    1. Mathematical optimisation
    2. Algorithmic and statistical information theories
    3. Algorithmic and statistical learning theories
  4. Implementing computation within physics
    1. Thermodynamics of computation
      1. Minimal energy requirements
      2. Heat dissipation
      3. Maximum information density
    2. Speed of light limits
      1. Latency of communication
      2. Maximum serial processing speeds
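As a rough numerical sketch of the physical limits under item 4 (the 300 K operating temperature, the 20-watt budget, and the 10 cm chip length below are illustrative assumptions, not claims about any particular system):

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T,
# E = k_B * T * ln(2).
k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                    # assumed operating temperature, kelvin
landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {landauer_joules_per_bit:.2e} J per bit erased")
# ~2.9e-21 J, so a 20 W budget could, at this theoretical floor,
# erase on the order of 7e21 bits per second.
print(f"Bits erasable per second on 20 W: {20 / landauer_joules_per_bit:.1e}")

# Speed-of-light latency across a hypothetical 10 cm processor.
c = 2.998e8                  # speed of light, m/s
chip_size_m = 0.1            # assumed characteristic length
latency_s = chip_size_m / c
print(f"One-way light crossing time over {chip_size_m*100:.0f} cm: {latency_s*1e9:.2f} ns")
# ~0.33 ns: no signal can cross the device faster than this, which bounds
# serial clock rates for any globally synchronised step.
```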

 

I do not think humans are necessarily quantitatively close to the physical limits (the brain is extremely energy efficient from a thermodynamic point of view, but it also runs on only about 20 watts). AI systems could have much larger power budgets; some extant supercomputers already draw tens of megawatts. But I expect many powerful/useful/interesting cognitive algorithms to be NP-hard or to require exponential time. An underlying intuition: search trees grow exponentially with each "step", and searching for a particular string grows exponentially with string length. Search seems like a natural operationalisation of planning, and I expect it to feature in other cognitive skills (searching for efficient encodings, approximations, compressions, patterns, etc. may be how we generate abstractions and enrich our world model). So I'm also pessimistic about just how useful quantitative progress will turn out to be in practice.
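A toy illustration of that exponential-search intuition (the branching factor, depths, and alphabet below are arbitrary illustrative choices):

```python
# How quickly exhaustive search blows up.
branching_factor = 10
for depth in (5, 10, 20, 40):
    print(f"search tree of depth {depth:>2}: ~{branching_factor**depth:.1e} leaves")

# Searching for a particular string over a 26-letter alphabet:
alphabet = 26
for length in (5, 10, 20):
    print(f"strings of length {length:>2}: ~{alphabet**length:.1e} candidates")

# Even a million-fold quantitative speedup only buys about 6 more orders of
# magnitude, i.e. roughly 6 extra levels of a branching-factor-10 tree.
```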

 

Counterargument

There's a common rebuttal to the effect that an ant is also a universal computer and so can in theory compute any computable program.

The difference is that you cannot actually teach an ant how to implement universal models of computation. Humans, on the other hand, can be taught that (and invented it of their own accord). Perhaps the hardware of an ant is a universal computer, but the ant's software is not a universal learner. Human software is.


11 Answers

tailcalled


I'm stupid.

I can obviously do many basic everyday tasks, and I can do adequate software engineering, data science, linear algebra, transgender research, and various other things.

But I know basically nothing about chemistry, biology, neurology, advertising, geology, rocket science, law, business strategy, project management, political campaigning, anthropology, astronomy, etc. etc.. Further, because I'm mentally ill, I'm bad at social stuff and paying attention. I can also only work on one task at a time, rather than being able to work on millions of tasks at a time.

There are other humans whose stupidity lies in somewhat different areas than me. Most of the areas I can think of are covered by someone, though there are exceptions, e.g. I'm not aware of anyone who can do millions of tasks at a time.

I think an AI could in principle become good at all of these things at once, could optimize across these wildly different fields to achieve things experts in the individual fields cannot, and could do it all with massive parallelism in order to achieve much more than I could.

Now that's smart.


I find this a persuasive ... (read more)

a massive quantitative gap in capabilities with a human

Quantity has a quality of its own:

  • An intelligence that becomes an expert in many sciences could see connections that others would not notice.
  • Being faster can make the difference between solving a problem on time and solving it too late. Merely being first means you can get a patent, become a Schelling point, establish a monopoly...
  • Reducing your mistake rate from 5% to 0.000001% allows you to design and execute much more complex plans (see the arithmetic sketch below).

(My point is that calling an advantage "quantitative" does not make it mostly harmless.)
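To put rough numbers on the third bullet (a purely illustrative calculation; the 100-step plan length is an arbitrary choice):

```python
import math

# Probability that an n-step plan succeeds if each step independently
# fails with probability p (a deliberately simplified model).
def plan_success(p: float, n: int) -> float:
    return (1 - p) ** n

for p in (0.05, 1e-6):
    # Longest plan that still succeeds with at least 50% probability.
    max_steps = math.floor(math.log(0.5) / math.log(1 - p))
    print(f"per-step error {p}: 100-step plan succeeds with "
          f"p={plan_success(p, 100):.4f}; ~{max_steps} steps keep p >= 0.5")

# per-step error 0.05:  a 100-step plan succeeds ~0.6% of the time; ~13 steps max
# per-step error 1e-6:  a 100-step plan succeeds ~99.99% of the time; ~693,000 steps max
```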

Mo Putera
+1 for "quantity has a quality all its own". "More is different" pops up everywhere.
Noosphere89
In real life, speed and resources matter because they're both finite. Unlike a Turing machine, which can assume arbitrarily large memory and unlimited time, we don't have such luxuries.

I think the focus on quantitative vs qualitative is a distraction. If an AI does become powerful enough to destroy us, it won't matter whether it is qualitatively more powerful or 'just' quantitatively more powerful.

I would state it slightly differently by saying: DragonGod’s original question is about whether an AGI can think a thought that no human could ever understand, not in a billion years, not ever. DragonGod is entitled to ask that question—I mean, there’s no rules, people can ask whatever question they want! But we’re equally entitled to point out that it’s not an important question for AGI risk, or really for any other practical purpose that I can think of.

For my part, I have no idea what the answer to DragonGod’s original question is. ¯\_(ツ)_/¯

johnswentworth


Personally, most of my intuition here comes from looking at differences within the existing human distribution.

For instance, consider a medieval lord facing a technologist making gunpowder. That's not even a large tech lead, but it's already enough that the less-knowledgeable human just has absolutely no idea what's going on. Or, consider this example about some protesters vs a lobbyist. (Note that the first example is more about knowledge, the second more about "intelligence" in a sense closer to "AGI" than to IQ; I expect AGI to exceed top humans along both of those dimensions.)

Bear in mind that there's a filter bubble here - people who go to college and then work in the sort of places where everyone has a degree typically hang out with ~zero people who are on the low end of human intelligence/knowledge/willpower. Every now and then there will be some survey that finds most people can't solve simple problems of type X, and the people who don't really hang out with anyone on the low end of the intelligence/knowledge/willpower curve are amazed that the average person manages to get by at all. And "college degree" isn't even selecting people who are all that far on the high side of the curve. There's a quote I've heard attributed to Murray Gell-Mann (although I can't vouch for its authenticity); supposedly he said to a roomful of physics grad students "You are to the rest of the world as the rest of the world is to fish.". And... yeah, that just seems basically true.

Dave Lindbergh


The compelling argument to me is the evolutionary one. 

Humans today have mental capabilities essentially identical to our ancestors of 20,000 years ago. If you want to be picky, say 3,000 years ago.

Which means we built civilizations, including our current one, pretty much immediately (on an evolutionary timescale) when the smartest of us became capable of doing so (I suspect the median human today isn't smart enough to do it even now).

We're analogous to the first amphibian that developed primitive lungs and was first to crawl up onto the beach to catch insects or eat eggs. Or the first dinosaur that developed primitive wings and used them to jump a little further than its competitors. Over evolutionary time later air-breathing creatures became immensely better at living on land, and birds developed that could soar for hours at a time.

From this viewpoint there's no reason to think our current intelligence is anywhere near any limits, or is greater than the absolute minimum necessary to develop a civilization at all. We are as-stupid-as-it-is-possible-to-be and still develop a civilization. Because the hominids that were one epsilon dumber than us, for millions of years, never did.

If being smarter helps our inclusive fitness (debatable now that civilization exists), our descendants can be expected to steadily become brighter. We know John von Neumann-level intelligence is possible without crippling social defects; we've no idea where any limits are (short of pure thermodynamics). 

Given that civilization has already changed evolutionary pressures on humans, and things like genetic engineering can be expected to disrupt things further, probably that otherwise-natural course of evolution won't happen. But that doesn't change the fact that we're no smarter than the people who built the pyramids, who were themselves barely smart enough to build any civilization at all.

I do agree that we may be the dumbest universal learners, but we're still universal learners.

I don't think there are any such discontinuous phase shifts ahead of us.

Dave Lindbergh
It's not obvious to me that "universal learner" is a thing, as "universal Turing machine" is. I've never heard of a rigorous mathematical proof that it is (as we have for UTMs). Maybe I haven't been paying enough attention. Even if it is a thing, knowing a fair number of humans, only a small fraction of them can possibly be "universal learners". I know people who will never understand decimal points as long as they live, however hard they study, let alone calculus. Yet they are not considered to be mentally abnormal.

Viliam


The human brain seems to be a universal learner. There are some concepts that no human can fully grasp, but those seem to be concepts that are too large to fit in the working memory of a human. And humans can overcome those working memory limitations with a pen and paper, a smartphone, a laptop or other technological aids. There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time.

Is this true for a human with IQ 70?

Sorry, I actually wanted to ask whether this is true for a human with IQ 80.

Oops, let me try again... is this true for a human with IQ 90?

Okay, I am giving up. Could you please tell me the approximate IQ where this suddenly becomes true, and why exactly? (I mean, why 10 points less than that is not yet universal, but 10 points more cannot bring any advantage beyond saving some time.)

.

To explain: You seem to suggest that there is a black-and-white distinction between intelligences that are universal learners and intelligences that are not. I do not think that all humans are actually universal learners (in the sense of: given eternal youth and a laptop, would invent quantum physics). Do you think they are? Because if they are not, and the concept is black-and-white, then there must be a clear boundary between the humans who are universal learners and the humans who are not, so I am curious where exactly it is. The remaining alternatives are either to admit that no human is a universal learner, or that the concept is actually fuzzy. But if it's fuzzy, then there might be a place for a hypothetical intelligence that is yet more of a universal learner than the smartest human.

Rodrigo Heck


Can you predict the shape of a protein from the sequence of its amino acids? I can't, and I suspect no human (even with the most powerful non-AI software) can. There is so much we are unable to understand. Another example is how we still seem to struggle to make advances in quantum physics.

Isn't the Foldit experiment evidence against this?

Rodrigo Heck
No. It performs much worse than AI systems.

Thane Ruthenis


I basically agree with your core point: that (reasonably smart) humans are generally intelligent, that there's nowhere further to climb qualitatively than being "generally" intelligent, and that general intelligence is a yes/no binary. I've been independently making some very similar arguments: that general intelligence is the ability to simulate any mathematical structure, chunk these structures into memory-efficient abstractions, then perform problem-solving in the resultant arbitrary mathematical environment.

But I think you're underestimating the power of "merely quantitative" improvements: working-memory size, long-term memory size, faster processing speed, freedom from biases and instincts, etc.

  • As per Dave Lindbergh's answer, we should expect humans to be as bad at general intelligence as they can get while still being powerful enough to escape evolution's training loop. All of these quantitative variables are set as low as they can get.
    • In particular, most humans are not generally intelligent most of the time, and many probably don't even know how to turn on their "general intelligence" at will. They use cached computations and heuristics instead, acting on autopilot. In turn, that makes our entire civilization (and any given organization in it) not generally intelligent all of the time as well, which would put us at an obvious disadvantage in a conflict with a true optimizer.
  • As per tailcalled's answer, any given human is potentially a universal problem-solver but in practice has only limited understanding of some limited domains.
  • As per John Wentworth's answer, the ability to build abstraction-chains more quickly conveys massive advantages. A very smart person today would trivially outplay any equally-intelligent caveman, or any equally-intelligent medieval lord, given the same resources and all of the relevant domain knowledge of their corresponding eras. And a superintelligent AI would be able to make as much cognitotech progress over us as we have over these cavemen/medievals, and so trivially destroy us.

The bottom line: Yeah, an ASI won't be "qualitatively" more powerful than humans, so ant:human::human:ASI isn't a precise mechanical analogy. But it's a pretty good comparison of levels of effective cognitive power anyway.

Viliam


Imagine that some superpowerful Omega, after reading this article, decides to run an experiment. It puts you in a simulation, which seems similar to this universe, except that the resources are magically unlimited there -- new oil keeps appearing underground (until you develop technology that makes it obsolete), the Sun will shine literally forever, and you are given eternal youth. You get a computer containing all current knowledge of humankind: everything that exists online, with paywalls and ciphers removed, plus a scan of every book that was ever written.

Your task is to become smart enough to get out of the simulation. The only information you get from Omega is that it is possible, and that for someone like Omega it would actually be a piece of cake.

The way out is not obfuscated on purpose. Like, if it is a physical exit, placed somewhere in the universe, it would not be hidden somewhere in the middle of a planet or a star, but it would be something like a planet-sized shining box with letters "EXIT" on it, clearly visible when you enter the right solar system. Omega says to take the previous sentence as an analogy; it is not necessarily a physical place. Maybe it is a law of physics that you can discover, designed in a way such that if you know the equation, it suggests an obvious way how to use it. Maybe the simulation has a bug you can exploit to crash the simulation; that would count as solving the test. Or perhaps, once you understand the true nature of reality as clearly as Omega does, you will be able to use the resources available in the simulation to somehow acausally get yourself out of it; maybe by simulating the entire Tegmark multiverse inside the simulation, or creating an infinite chain of simulations within simulations... something like that. Again, Omega says that these are all merely analogies, serving to illustrate that the task is fair (for a superintelligence); it is not necessarily any of the above. A superintelligence in the same situation would quickly notice what needs to be done, by exploring a few thousand most obvious (for a superintelligence) options.

To avoid losing your mind because of loneliness, you are allowed to summon other people into the simulation, under the condition that they are not smarter than you. (Omega decides.) This restriction exists to prevent you from passing the task fully to someone else, as in: "I would summon John von Neumann and tell him to solve the problem; he surely would know how, even if I don't." You are not allowed to cheat by simply summoning the people you love and living happily forever, ignoring the fact that you are in Omega's simulation. Omega is reading your thoughts, and will punish you if you stop sincerely working to get out of the simulation. (But as long as you are sincerely trying, there is no time pressure, the summoned people also get eternal youth, etc.) Omega will also stop the simulation and punish you if it sees that you have made yourself incapable of solving the task; for example, if you wirehead yourself in a way that keeps you (falsely) sincerely believing that you are still successfully working on the task. The punishment comes even if you wirehead yourself accidentally.

Do you feel ready for the task? Or can you imagine some way you could fail?

Would I be able to figure it out under those conditions?  No, I don't think I'm capable of thinking of a resolution to this scenario.  But I may have a solution...

I bring in someone of the opposite sex with the intent to procreate (you didn't mention how children develop but I'm going to assume it's a simulation of the normal process).  I bring in more people, so they can also procreate.  I encourage polygamy and polyamory to generate as many kids as possible.  We live happily and create a society where people have jobs,... (read more)

TAG


There’s a common rebuttal along the lines that an ant is also a universal computer and so can in theory compute any computable program.

Nothing finite, which is to say nothing, is a universal computer, because there are programmes too big for it.

martinkunev


"There doesn't seem to be anything a sufficiently motivated and resourced intelligent human is incapable of grasping given enough time"

  - a human

 

If there is such a thing, what would a human observe?

deepthoughtlife


I agree with you. The biggest leap was getting to human-level generality of intelligence. Humanity already is a number of superintelligences working in cooperation and conflict with each other; that's what a culture is. See also corporations and governments. Science too. This is a subculture of science worrying that it is superintelligent enough to create a 'God' superintelligence.

To be slightly uncharitable, the reason to assume otherwise is fear, either their own or a desire to play on that of others. Throughout history people have looked for reasons why civilization would be destroyed, and this is just the latest. Ancient prophesiers of doom were exactly the same as modern ones. People haven't changed that much.

That doesn't mean we can't be destroyed, of course. A small but nontrivial percentage of doomsayers were right about the complete destruction of their civilization. They just happened to be right by chance most of the time.

I also agree that quantitative differences could possibly end up being very large, since we already have immense proof of that in one direction: the superintelligences we already have are massively larger than we are, and computers have already made them immensely faster than they used to be.

I even agree that the key quantitative advantages would likely be in supra-polynomial arenas that would be hard to improve quickly even for a massive superintelligence. See the exponential resources we are already pouring into chip design for continued smooth but decreasing progress, and the even higher exponential resources being poured into dumb tool AIs for noticeable but not game-changing increases. While I am extremely impressed by some of them, like Stable Diffusion (an image generation AI that has been my recent obsession), there is such a long way to go that resources will be a huge problem before we even get to human level, much less superhuman.

Gustavo Ramires


I mostly agree, and I want to echo 'tailcalled' that there's another layer of intelligence that builds upon humans: civilization, or human culture (although surely there's some merit to our "architecture", so to speak!). We've found that you can teach machines essentially any task (because of Turing completeness). That doesn't mean a single machine, by itself, would warrant being called a 'universal learner'. Such universality would come from algorithms running on said machine. I think there's a degree of universality inherent to animals and hence to humans as well. We can learn to predict and plan very well from scratch (many animals learn with little or no parenting required), are curious to learn more, can memorize and recall things from the past, etc.

However, I think the perspective of our integration with society is important. We also probably would not learn to reach remotely similar levels of intelligence (in the sense of the ability to solve problems, act in the world, and communicate) without instruction -- much like the instruction Turing machines receive when programmed. And this instruction has been refined over many generations, through other improvement algorithms (like 'quasi-genetic' selection of which cultures have the best teaching methods and better outcomes, and of course teachers thinking about how to teach best, what to teach, etc.).

I think there's the insight that our brain is universal, simply because yes, we can probably follow/memorize any algorithm (i.e. explicit set of instructions) which fits our memory. But our culture also equips us with more powerful forms of universality, where we detect the most important problems, solve them, and evolve as a civilization.

I think the most important form of universality is that of meaning and ethics: discovering what is meaningful, what activities we should pursue, what is ethical and what isn't, and what is a good life. I think we're still not very firmly on this ground of universality, let alone the machines we create.

19 comments

Can you clarify your point about chimps - in particular, what part of your argument could not also have been made by a chimp looking at an ant shortly before humans arrived and conquered the world?

That's a powerful intuition pump. Sadly, aside from the emotional power, I'm left utterly unconvinced.

I still think:

  • The human brain is a universal learner
  • There is a qualitative difference between humans and chimps
  • I still cannot teach a chimpanzee what computation is, or how to implement a Turing machine

One crux appears to be how different we think humans and chimps are. My very uninformed opinion is that the chimp brain is a "universal learner" to basically the same extent that a human's is, and that the reason you can't teach a chimp what a Turing machine is boils down to bounded resources, such as the chimp's lifespan, the chimp's "working memory size", and how much time and money you are willing to spend trying to teach chimps math.

I think the bigger difference between humans and chimps is the high prosocial-ness of humans. This is what allowed humans to evolve complex cultures that now bear a large part of our knowledge and intuitions. And the lack of that prosocial-ness is the biggest obstacle to teaching chimps math.

My very uninformed opinion is a strong disagreement. I don't think you could teach the median chimp maths even if you tried.

 

(I'm also under the impression that median chimp working memory is higher than the median human's?)

This. It would be a very surprising fact if they weren't Turing complete, as Turing completeness is very easy to achieve. Given arbitrary time and memory, any computer can do any task, but that's not how real life works.

An ant's brain is probably also Turing complete, but it's not a "universal learner" in the sense I'm imagining it.

 

"Universal learning" is a property of the particular learning software, not the actual neural hardware.

I don't know how to "implement" a Turing machine, but I would think inventing a spear is enough to dominate chimps.

I am not even sure the "human - chimpanzee gap" is a sensible notion for informing expectations of superintelligence. That seems to be a difference of kind I simply don't think will manifest. Once you make the jump to universality, there's nowhere higher to jump to.

For me it's the opposite. It seems the main difference is we are slightly better than apes at language and abstract reasoning, and that's basically enough to completely dominate them. You bring up software, which is one of the areas where I feel having adversaries that are way smarter than you is really scary. Software seems mostly bottlenecked by things like our limited working memory etc.

Software seems mostly bottlenecked by things like our limited working memory etc.

Technology can alleviate this. We somewhat cheat by taking notes and stuff, but brain-computer interfaces may allow us to enhance our working memory.

 

It seems the Main difference is we are slightly better than apes at language and abstract reasoning and that's basically enough to completely dominate them.

Yes, that qualitative difference is very powerful.

I don't think the line between what you're calling qualitative vs quantitative is at all clear in prospect. It's easy to say afterward that our language skills are qualitatively different than an ape's, but can you point to what features would have made you say that 'in advance', without watching humans use their slightly better language to take over the world? And if I gave you some quantitative differences between me and a plausible AGI (it runs ___x faster, it spends 0x as much time doing things it does not reflectively endorse, it lives ___x longer, etc), how do you know that those won't have a "qualitative"-sized impact as well?

I have been persuaded that an AI may be able to perform multiple cognitive tasks at the same time in a way that Homo sapiens simply cannot (let's call this "multithreaded"). I expect that AI will naturally also have larger working memories, longer attention spans, better recall, faster clock cycles, etc.

 

The above properties (especially multithreaded thought) may constitute a difference that I would consider "qualitatively huge".

 

For example:

  • It could enable massively parallel learning, allowing the AI to attain immense breadth and depth of domain knowledge
    • The AI could become a domain expert in virtually every domain of relevance (or at least domain of relevance to humans)
    • This would give it a cross-disciplinary perspective/viewpoint that no human can attain
  • It could perform multiple cognitive processes at the same time
    • This may be equivalent to having n minds collaborating on a problem, but without any of the problems of collaboration, and with massively higher communication bandwidth and sharing of full, complex cognitive representations (unlike the lossy transmissions of language)
    • It may be able to effectively solve problems no human teams can due to their inherent limitations
  • Multithreaded thought may allow them to represent (and manipulate and navigate) abstractions that single threaded brains cannot (within reasonable compute)
    • A difference in what abstractions are available to us could constitute a qualitative difference
  • Larger working memory could allow it to learn abstractions too large to fit in human brains
  • The above may allow it to derive/synthesise insight that human brains will never find in any reasonable time frame

I think there will be problems that would take human mathematicians/scientists/philosophers centuries to solve that this AI can probably get done in reasonable time frames. That's powerful.

 

But it still doesn't feel as large as the chimp to human gap. It feels like the AIs can do things much quicker/more efficiently than humans. Solve problems that it would take us longer to solve.

It doesn't feel like the AI can solve problems that humans will never solve period in the way that humans can solve many problems that chimpanzees will never solve period (assuming static intelligence across chimpanzee generations).

 

I think the last line above is the main sticking point. Human brains are capable of solving problems that chimpanzee society will never solve (unless they evolve into a smarter species). I am not actually convinced that this much smarter AI can solve problems that humans will never solve?

Can you direct me to material informing your take that our language skills are only slightly better? I am under the impression that chimpanzees don't have language.

 

And the qualitative thing is "universality". Once you jump to universality, you can't jump higher. Not all language systems are universal, but a universal language system is maximally powerful. Better systems can be more expressive, but not more powerful. They can't express something that another universal language system is fundamentally incapable of expressing.

 

(Though I'm again under the impression that chimps don't even have non-universal but powerful language systems. Humans didn't start out with universal language; we innovated our way there.)

TAG

There's no evidence that any language is universal.

Languages allow the formation of an infinite number of sentences from a finite vocabulary and a set of syntactic rules, but it doesn't follow that they can express "everything". If you feel your language does not allow you to express your thoughts, then you can extend your language... as far as your thought. If your language can't express a concept that you also can't conceive, how would you know?

The situation is analogous to number systems. There are ways of writing numerals that don't allow you to write arbitrarily large numerals, and ways that do. So the ways that do are universal ... in a sense. They don't require actual infinities, like a UTM. On the other hand, the argument only demonstrates universality in a limited sense: a number system that can write any integer cannot necessarily write fractions or complex numbers, or whatever. So what is the ultimately universal system? No one knows. Integers have been extended to real numbers, surreal numbers, and so on. No one knows where the outer limit is.

If your language can't express a concept that you also can 't conceive, how would you know?

  1. I think these kinds of arguments are bad/weak in general
  2. If you could actually conceive the concept, the language could express it
    1. Any agent that conceived the concept could express it within the language
TAG

I think these kinds of arguments are bad/weak in general

I am not going to update unless you say why.

If you could actually conceive the concept, the language could express it

Again, you need an argument.

Hm... Yeah, I think I can run with the notion that we would be able to kinda understand, on some level, anything that a superintelligence was trying to convey to us, whereas chimps would never grasp even basic logical arguments (not sure how much logic some apes are able to grasp?). This actually made me think of one area where I could imagine such a difference between humans and AI: our motivational system feels capability-wise similar to chimps' language skills (or maybe that's just me?), as there are "some" improvements knowledge/technology (self-help literature, stimulants, building institutions) gives you here, but at the end of the day all your tools won't help if your stupid brain lost track of why you were doing anything in the first place.

I'm curious how you think your views here cash out differently from (your model of) most commenters here, especially as pertains to alignment work (timelines, strategy, prioritization, whatever else), but also more generally. If I'm interpreting you correctly, your pessimism on the usefulness-in-practice of quantitative progress probably cashes out in some sort of bet against scaling (i.e. maybe you think the "blessings of scale" will dry up faster than others think)? 

Oh, I think superintelligences will be much less powerful than others seem to think.

 

Less human vs ant, and more "human vs very smart human that can think faster, has much larger working memory, longer attention spans, better recall and parallelisation ability".