Hey, I wonder what's your policy on linking blog posts? I have some texts that might be interesting to this community, but I don't really feel like copying everything from HTML here and duplicating the content. At the same time I know that some communities don't like people promoting their content. What are the best practices here?
In general, LM-generated text is still easily distinguishable by other LMs. Even though we humans cannot tell the difference, the way they generate text is not really human-like. They are much more predictable, simply because they are not trying to convey information as humans do; they are guessing the most probable sequence of tokens.
Humans are less predictable because they always have something new to say; LMs, on the other hand, are like the most clichéd person ever.
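To make the predictability point concrete: this is roughly the intuition LM-based detectors exploit -- score a passage with a language model and see how "expected" its tokens are. A minimal sketch, assuming GPT-2 as the scoring model and a hand-picked threshold (both are illustrative assumptions, not a real detector):

```python
# Sketch of perplexity-based detection. GPT-2 and the threshold are
# illustrative choices; a real detector would be calibrated on data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the scoring model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def looks_generated(text: str, threshold: float = 20.0) -> bool:
    # Lower perplexity = more "predictable" text; the threshold here is made up.
    return perplexity(text) < threshold
```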
"No indication" in this context means that:
We will need several technological revolutions before we are able to increase our compute significantly. This will hamper the development of AI, perhaps indefinitely. We might need significant advances in materials science, quantum science, etc., to even be theoretically able to build computers that are significantly better than what we have today. Then we will need to develop the AI algorithms to run on them and hope that it is finally enough to reach AGI levels of compute. Even then, it might take additional decades to actually develop the algorithms.
There is no indication of many catastrophic scenarios, and truthfully I don't worry about any of them.
I don't see any indication of AGI, so it does not really worry me at all. The recent scaling research shows that we need a non-trivial number of orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on some benchmark might still not produce intelligence). On the other hand, we are all out of data (especially high-quality data with some informational value, not random product reviews or NSFW subreddit discussions), and our compute options are also not looking great: Moore's law is dead, and the fact that we now rely on HW accelerators is not a good thing -- it is evidence that, after 70 years, CPU performance scaling is no longer a viable option. There are also physical limitations that we might not be able to break anytime soon.
I believe that fixating on benchmarks such as chess is ignoring the G part of AGI. A truly intelligent agent should be general at least within the environment it resides in, considering the limitations of its form. E.g., if a robot is physically able to manipulate everyday objects, we might apply the Wozniak test and expect an intelligent robot to be able to cook dinner in an arbitrary house, or to do any other task that its form permits.
If we assume that right now we are developing purely textual intelligence (without agency, a persistent sense of self, etc.), we might still expect this intelligence to be general, i.e., able to solve an arbitrary task if that seems reasonable given its form. In this context, for me, an intelligent agent is able to understand common language and act accordingly; e.g., if a question is posed, it can provide a truthful answer.
BIG-bench has recently shown us that our current LMs are able to solve some problems, but they are nowhere near general intelligence. They are not able to solve even very simple problems if doing so requires some sort of logical reasoning rather than just associative memory; e.g., this is a nice case:
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/symbol_interpretation
You can see in the Model performance plots section that scaling did not help at all with tasks like these. This is a very simple task, but it was not seen in the training data, so the model struggles with it and produces essentially random results. If LMs start to solve general linguistic problems like this, then we will actually have intelligent agents on our hands.
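To make "random results" concrete: on a multiple-choice task like this one, a model that has not learned the underlying rule should land near the chance baseline. A hedged sketch of that comparison, where `model_choice` is a stand-in for however you actually query the model (a hypothetical placeholder, not a BIG-bench API):

```python
# Hypothetical evaluation loop: `examples` is a list of dicts with a "choices"
# list and an "answer" index; `model_choice` stands in for the model call
# (log-likelihood scoring, generation + parsing, ...).
def accuracy(examples, model_choice):
    correct = sum(model_choice(ex) == ex["answer"] for ex in examples)
    return correct / len(examples)

def chance_baseline(examples):
    # Expected accuracy of uniformly random guessing over the choices.
    return sum(1 / len(ex["choices"]) for ex in examples) / len(examples)

# If accuracy(examples, model_choice) is roughly chance_baseline(examples),
# scaling has not bought any real understanding of the task.
```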
It's not goalpost moving; it's the hype that's moving. People reduce intelligence to whatever arbitrary skills or problems are currently being solved, and then they are let down when they find out that the skill was not actually a good proxy.
I agree that LMs are conceptually more similar to ELIZA than to AGI.
I believe that over time we will understand that producing human-like text is not a sign of intelligence. In the past, people believed that only intelligent agents are able to solve math equations (naturally, since only people can do it and animals cannot). Then came computers, and they were able to do all kinds of calculations much faster and without errors. However, from our current point of view, we now understand that doing math calculations is not really that intelligent, and even really simple machines can do it. Chess playing is a similar story: we thought that you have to be intelligent, but we found a heuristic that does it really well. People were afraid that chess-algorithm-like machines could be programmed to conquer the world, but from our perspective, that's a ridiculous proposition.
I believe that text generation will be a similar case. We think that you have to be really intelligent to produce human-like outputs, but in the end, with enough data, you can produce something that looks nice and can even be useful sometimes, yet there is no intelligence in there. We will slowly develop an intuition about what the capabilities of large-scale ML models are. I believe that in the future we will think of them as basically fuzzy databases that we can query with natural language. I don't think we will think of them as intelligent agents capable of autonomous action.
in order for this to occupy any significant probability mass, I need to hear an argument for how our current dumb architectures do as much as they do, and why that does not imply near-term weirdness. Like, "large transformers are performing {this type of computation} and using {this kind of information}, which we can show has {these bounds} which happens to include all the tasks it has been tested on, but which will not include more worrisome capabilities because {something something something}."
What about: State-of-the-art models with 500+B parameters still can't do 2-digit addition with 100% reliability. For me, this shows that the models are perhaps learning some associative rules from the data, but there is no sign of intelligence. An intelligent agent should notice how addition works after learning from TBs of data. Associative memory can still be useful, but it's not really an AGI.
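The 2-digit addition claim is also easy to check exhaustively, since there are only 90 × 90 = 8,100 pairs. A sketch of such a reliability test, where `ask_model` is a hypothetical wrapper around whatever model you are probing (prompt format and answer parsing are assumptions here):

```python
# Hypothetical reliability check for 2-digit addition; `ask_model` is a
# placeholder for an actual model call returning the model's answer as text.
def addition_reliability(ask_model) -> float:
    total, correct = 0, 0
    for a in range(10, 100):
        for b in range(10, 100):
            total += 1
            answer = ask_model(f"What is {a} + {b}?")
            if answer.strip() == str(a + b):
                correct += 1
    return correct / total  # 1.0 would mean 100% reliability

# A system that had actually induced the addition algorithm would score 1.0;
# an associative memory can score high yet still miss some combinations.
```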
One additional maxim to consider is that the AI community in general can only barely conceptualize and operationalize difficult concepts such as safety. Historically, the AI community was good at maximizing some measure of performance, usually pretty straightforward test-set metrics such as classification accuracy. Culturally, this is how the community approaches all problems -- by aggregating complex phenomena into a single number. Note that this approach is not used in that many fields outside of AI and math, because you always have to make some lossy simplifications.
We can observe this malpractice in AI safety as well. There is a cottage industry of datasets and papers collecting "safety" samples, and we use these to measure some safety metric. We can then compare the numbers for different models and this makes AI folks happy. But there is barely any discussion about how representative these datasets really are for real-life risks, how comprehensive the data collection process is, or how sound it is to use random crowd-sourced workers or LLMs to generate such samples. The threats and risks are also rarely described in more detail -- often it's just a lot of hand-waving.
Based on my pretty deep experience with one aspect of AI safety (societal biases), I have very little confidence in our ability to understand AI behavior. Compared to measuring performance on well-defined NLP tasks, once we involve societal context, the intricacies of what we are trying to measure are beyond simple benchmarks. Note that we have entire fields trying to understand some of these problems in human societies, yet we are supposed to believe that collecting a test set with a few thousand samples is enough to understand how AI behaves.