All of RussellThor's Comments + Replies

Got to the end before I misread it as "Journalism is about deception". Otherwise very good!

Yes, you really need to mention seasonal storage, or just using 15% fossil fuels (FF), to get a holistic picture. That applies to electricity alone, and especially to the whole global economy. For example, low round-trip-efficiency (RTE), low-capex, high-storage-capacity sources really bring the costs down compared to battery storage if the aim is to get to 100% RE, and they start to matter at around 85%.

For example, H2 with 50% RTE, used for 10% of total electricity and stored for 6 months, doesn't end up costing much with really cheap solar and a low-capex H2 electrolyser/fuel cell combo. Simil... (read more)
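To make the arithmetic behind this concrete, here is a minimal sketch (Python; the solar price, storage capex figures, and annualization factor are all assumed for illustration, not sourced): seasonal storage only cycles roughly once a year, so capex per kWh of capacity dominates and round-trip efficiency matters much less.

```python
# Toy comparison (all numbers assumed, not sourced): why low-capex, low-RTE
# seasonal storage beats batteries when the store only cycles about once a year.

def cost_per_delivered_kwh(input_cost, rte, storage_capex_per_kwh,
                           cycles_per_year, annualization=0.08):
    """Charging-energy cost plus annualized storage capex, per delivered kWh.

    Power-equipment (electrolyser/fuel cell) capex is ignored here; the comment
    above assumes it is low.
    """
    energy = input_cost / rte
    capex = storage_capex_per_kwh * annualization / cycles_per_year
    return energy + capex

solar = 0.02  # $/kWh of cheap solar used for charging (assumed)

h2 = cost_per_delivered_kwh(solar, rte=0.50, storage_capex_per_kwh=1.0, cycles_per_year=1)
batt = cost_per_delivered_kwh(solar, rte=0.90, storage_capex_per_kwh=150.0, cycles_per_year=1)

fraction_stored = 0.10  # share of total electricity served from the seasonal store
print(f"H2 seasonal store:      ${h2:6.2f}/kWh delivered -> +${h2 * fraction_stored:.3f}/kWh system-wide")
print(f"Battery seasonal store: ${batt:6.2f}/kWh delivered -> +${batt * fraction_stored:.3f}/kWh system-wide")
```

With these made-up numbers the low-RTE hydrogen store adds about a cent per system kWh, while battery capacity sized for a single seasonal cycle would add over a dollar.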

Good work, thinking ahead to TAI. Is there an issue with the self-other distinction when it comes to moral weightings? For example, if it weights the human as self rather than other, but with 1% of its own moral weighting, then its behavior would still not be what we want. Additionally, are there situations or actions where it could increase its relative moral weight further, the most obvious being taking more compute? This is assuming it believed it was conscious and of similar moral value to us. Finally, if it suddenly decided it had moral value, or much more moral value than it previously did, that would still give a sudden change in behavior.

Yes to much of this. For small tasks, or where I don't have specialist knowledge, I can get a 10x speed increase; on average I would put it at 20%. Smart autocomplete like Cursor is undoubtedly a speedup with no apparent downside. The LLM is still especially weak where I am doing data science or algorithm-type work, where you need to plot the results and look at the graph to know if you are making progress.

Things the OP is concerned about, like:

"What makes this transition particularly hard to resist is that pressures on each societal system bleed into the others. For example, we might attempt to use state power and cultural attitudes to preserve human economic power. However, the economic incentives for companies to replace humans with AI will also push them to influence states and culture to support this change, using their growing economic power to shape both policy and public opinion, which will in turn allow those companies to accrue even greater eco

... (read more)

The  OP is specifically about gradual disempowerment. Conditional on gradual disempowerment, it would help and not be decades away. Now we may both think that sudden disempowerment is much more likely. However in a gradual disempowerment world, such colonies would be viable much sooner as AI could be used to help build them, in the early stages of such disempowerment when humans could still command resources. 

In a gradual disempowerment scenario vs. a no-super-AI scenario, humanity's speed to deploy such colonies starts the same before AI can be use... (read more)

4Davidmanheim
Sure, space colonies happen faster - but AI-enabled and AI-dependent space colonies don't do anything to make me think disempowerment risk gets uncorrelated.

Space colonies are a potential way out - if a small group of people can make their own colony then they start out in control. The post assumes a world like it is now where you can't just leave. Historically speaking that is perhaps unusual - much of the time in the last 10,000 years it was possible for some groups to leave and start anew.

2Davidmanheim
Aside from the fact that I disagree that it helps, given that an AI takeover that's hostile to humans isn't a local problem, we're optimistically decades away from such colonies being viable independent of earth, so it seems pretty irrelevant.

This does seem different, however: https://solarfoods.com/ - they are competing with food, not fuel, which can't be done synthetically (well, if at all). Also, widely distributed capability like this helps make humanity more resilient, e.g. against nuclear winter, extreme climate change, or for space habitats.

2sarahconstantin
there are lots of smaller, newer companies with varied business models, mostly "too early to tell" for sure if they have the potential to get huge, but I expect in principle many of them should be viable.

Thanks for this article, upvoted.

Firstly, Magma sounds most like Anthropic, especially the combination of Heuristic #1 (scale AI capabilities) with also publishing safety work.

In general I like the approach, especially the balance between realism and not embracing fatalism. This is opposed to, say, MIRI and Pause AI, and at the other end, e/acc. (I belong to EA; however, they don’t seem to have a coherent plan I can get behind.) I like the realization that in a dangerous situation doing dangerous things can be justified. It's easy to be “moral” and just say “stop”, howe... (read more)

That's not a valid criticism if we are simply choosing one action to reduce X-risk. Consider, for example, the Cold War: the guys with nukes did the most to endanger humanity, yet it was most important that they cooperated to reduce the risk.

3RHollerith
Good reply. The big difference is that in the Cold War there was no entity with the ability to stop the two parties engaged in the nuclear arms race, whereas in the current situation, instead of hoping for the leading labs to come to an agreement, we can lobby the governments of the US and the UK to shut the leading labs down or at least nationalize them. Yes, that still leaves an AI race between the developed nations, but the US and the UK are democracies, and most voters don't like AI. The main concern of the leaders of China and Russia, meanwhile, is to avoid rebellions of their respective populations, and they understand correctly that AI is a revolutionary technology with unpredictable societal effects that might easily empower rebels. So, as long as Beijing and Moscow have access to the AI technology useful for effective surveillance of their people, they might stop racing if the US and the UK stop racing (and AI-extinction-risk activists should IMHO probably be helping Beijing and Moscow obtain the AI tech they need to effectively surveil their populations, to reduce the incentive for Beijing and Moscow to support native AI research efforts). To clarify: there is only a tiny shred of hope in the plan I outlined, but it is a bigger shred IMHO than hoping for the AI labs to suddenly start acting responsibly.

In terms of specific actions that don't require government, I would be positive about an agreement between all the leading labs that, when one of them made an AI (AGI+) capable of automatic self-improvement, they would all commit to share it between them and allow one year in which they did not hit the self-improve button, but instead put that towards alignment. Twelve months may not sound like a lot, but if research is sped up 2-10x because of such AI, then it would matter. In terms of single, potentially achievable actions that will help, that seems the best to me.

5RHollerith
Your specific action places most of its hope for human survival on the entities that have done the most to increase extinction risk.

Not sure if this is allowed, but you can aim at a rock or similar, say 10 m away from the target (4 km from you), to get the bias (and the distribution, if multiple shots are allowed). Also, if the distribution is not exactly normal but has thinner-than-normal tails, then you could aim off-target with multiple shots to get the highest chance of hitting the target. For example, if the child is at head height, then aim for the target's feet, or even aim 1 m below the target's feet, expecting 1/100 shots to actually hit the target's legs but only <1/1000, say, to hit the chi... (read more)
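A small sketch of the trade-off being described (Python; the 0.5 m vertical spread, the zone heights, and the aim points are all invented for illustration):

```python
# Toy model: vertical aim point vs. probability of hitting the target zone
# versus the overlapping "child at head height" zone. All numbers invented.
from statistics import NormalDist

sigma = 0.5                    # vertical shot spread at 4 km, in metres (assumed)
target_zone = (0.0, 1.7)       # target's body, metres above ground (assumed)
child_zone = (1.5, 1.7)        # child's head overlapping the target's head (assumed)
shot = NormalDist(mu=0.0, sigma=sigma)

def hit_prob(zone, aim):
    lo, hi = zone
    return shot.cdf(hi - aim) - shot.cdf(lo - aim)

for aim in (1.0, 0.3, -1.0):   # centre of mass, the feet, 1 m below the feet
    p_target = hit_prob(target_zone, aim)
    p_child = hit_prob(child_zone, aim)
    print(f"aim {aim:+.1f} m: P(target) = {p_target:.3f}, P(child) = {p_child:.2e}")
```

Aiming lower sacrifices some hit probability but collapses the chance of hitting the child much faster, and if the real error distribution has thinner-than-normal tails the trade is even better than this normal model suggests.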

I think height is different from IQ in terms of effect. There are simple physical things that make you bigger; I expect height to stay linear for much longer than IQ.

Then there are potential effects where something seems linear until you go out of distribution (OOD), but such OOD samples don't exist because they die before birth. If that were the case, it would look like you could safely go OOD. It would certainly be easier if we had a million mice with such data to test on.
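A toy numerical illustration of the "linear until OOD" worry (Python; the saturating "true" trait and all parameters are invented, this is not a claim about real biology):

```python
# Toy illustration (not real genetics): a trait that looks linear in a polygenic
# score across the observed range but saturates far out of distribution.
import numpy as np

rng = np.random.default_rng(0)
genotypes = rng.binomial(2, 0.5, size=(5000, 1000))
betas = rng.normal(0, 1, size=1000)               # assumed per-variant effects
score = genotypes @ betas
score = (score - score.mean()) / score.std()      # polygenic score in SD units

def true_trait(s, ceiling=6.0):
    # hypothetical biology: near-linear around 0, saturating far out of distribution
    return ceiling * np.tanh(np.asarray(s, dtype=float) / ceiling)

slope, intercept = np.polyfit(score, true_trait(score), 1)  # fit within the population

for s in (2, 4, 10, 40):                          # 40 SD ~ the chicken-breeding scale
    print(f"{s:>2} SD: linear prediction {slope * s + intercept:5.1f}, "
          f"'true' value {float(true_trait(s)):4.1f}")
```

Within the observed +/- 4 SD the linear fit looks fine; only far outside it does the divergence show up, which is exactly the regime where no samples exist.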

That seems so obviously correct as a starting point; I'm not sure why the community here doesn't agree by default. My prior for each potential IQ increase would be that diminishing returns kick in - I would only update against that when actual data comes in disproving it.

8GeneSmith
Well, we can just see empirically that linear models predict outliers pretty well for existing traits. For example, here's a graph of the polygenic score for Shawn Bradley, a 7'6" former NBA player; he does indeed show up as a very extreme data point on the graph. I think your general point stands: if we pushed far enough into the tails of these predictors, the actual phenotypes would almost certainly diverge from the predicted phenotypes. But the simple linear models seem to hold quite well within the existing human distribution.

OK, I guess there is a massive disagreement between us on what IQ increases gene changes can achieve. Just putting it out there: if you make an IQ 1700 person, they can immediately program an ASI themselves, have it take over all the data centers, rule the world, etc.

For a given level of IQ to control ever higher ones, you would at a minimum require the creature to settle morals, i.e. is moral realism true, and if so, what is moral? Otherwise, with increasing IQ there is the potential that it could think deeply and change its values, additionally believe that it would not be able to persuade lower-IQ creatures of those values, and therefore be forced into deception, etc.

2GeneSmith
I really should have done a better job explaining this in the original comment; it's not clear we could actually make someone with an IQ of 1700, even if we were to stack additive genetic variants one generation after the next. For one thing you probably need to change other traits alongside the IQ variants to make a viable organism (larger birth canals? Stronger necks? Greater mental stability?). And for another, it may be that if you just keep pushing in the same "direction" within some higher-dimensional vector space, you'll eventually end up overshooting some optimum. You may need to re-measure intelligence every generation and then do editing based on whatever genetic variants are meaningfully associated with higher cognitive performance in those enhanced people to continue to get large generation-to-generation gains.

I think these kinds of concerns are basically irrelevant unless there is a global AI disaster that kills hundreds of millions of people and gets the tech banned for a century or more. At best you're probably going to get one generation of enhanced humans before we make the machine god.

I think it's neither realistic nor necessary to solve these kinds of abstract philosophical questions to make this tech work. I think we can get extremely far by doing nothing more than picking low-hanging fruit (increasing intelligence, decreasing disease, increasing conscientiousness and mental energy, etc.). I plan to leave those harder questions to the next generation. It's enough to just go after the really easy wins.

Manipulation of others by enhanced humans is somewhat of a concern, but I don't think it's for this reason. I think the biggest concern is just that smarter people will be better at achieving their goals, and manipulating other people into carrying out one's will is a common and time-honored tactic to make that happen. In theory we could at least reduce this tendency a little bit by maybe tamping down the upper end of sociopathic tendencies with editi

"with a predicted IQ of around 1700." assume you mean 170. You can get 170 by cloning existing IQ 170 with no editing necessary so not sure the point.

I don't see how your point addresses my criticism - if we assume no multi-generational pause then gene editing is totally out. If we do, then I'd rather Neuralink or WBE. Related to here
https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025 

(I believe that WBE can get all the way to a positive singularity - a group of WBEs could self-optimize, sharing the latest HW as it bec... (read more)

9GeneSmith
No, I mean 1700. There are literally that many variants. On the order of 20,000 or so.

You're correct of course that if we don't see some kind of pause, gene editing is probably not going to help. But you don't need a multi-generational one for it to have a big effect. You could create people smarter than any that have ever lived in a single generation.

Maybe, but my impression is whole brain emulation is much further out technologically speaking than gene editing. We already have basically all the tools necessary to do genetic enhancement except for a reliable way to convert edited cells into sperm, eggs, or embryos. Last I checked we JUST mapped the neuronal structure of fruit flies for the first time last year and it's still not enough to recreate the functionality of the fruit fly because we're still missing the connectome. Maybe some alternative path like high fidelity fMRI will yield something. But my impression is that stuff is pretty far out.

I also worry about the potential for FOOM with uploads. Genetically engineered people could be very, very smart, but they can't make a million copies of themselves in a few hours. There are natural speed limits to biology that make it less explosive than digital intelligence.

The hope is of course that at some point of intelligence we will discover some fundamental principles that give us confidence our current alignment techniques will extrapolate to much higher levels of intelligence.

This is an interesting take that I hadn't heard before, but I don't really see any reason to think our current tech gives a big advantage to autocracy. The world has been getting more democratic and prosperous over time. There are certainly local occasional reversals, but I don't see any compelling reason to think we're headed towards a permanent global dictatorship with current tech. I agree the risk of a nuclear war is still concerning (as is the risk of an engineered pandemic), but these risks seemed dwarfed by those presented

OK, I see how you could think that, but I disagree that time and more resources would help alignment much, if at all, especially before GPT-4. See here: https://www.lesswrong.com/posts/7zxnqk9C7mHCx2Bv8/beliefs-and-state-of-mind-into-2025

Diminishing returns kick in, and actual data from ever more advanced AI is essential to stay on the right track and eliminate incorrect assumptions. I also disagree that alignment could be "solved" before ASI is invented - we would just think we had it solved but could be wrong. If it's just as hard as physics, then we would have un... (read more)

"Then maybe we should enhance human intelligence" 

Various paths to this seem either impossible or impractical.

Simple genetics seems obviously too slow, and even in the best case unlikely to help. E.g. say you enhance someone to IQ 200; it's not clear why that would enable them to control an IQ 2,000 AI.

Neuralink - perhaps, but if you can make enhancement tech that would help, you could also easily just use it to make ASI, so extreme control would be needed. E.g. if you could interface to neurons and connect them to useful silicon, then the silicon itsel... (read more)

GeneSmith*935

It's probably worth noting that there's enough additive genetic variance in the human gene pool RIGHT NOW to create a person with a predicted IQ of around 1700.

You're not going to be able to do that in one shot due to safety concerns, but based on how much we've been able to influence traits in animals through simple selective breeding, we ought to be able to get pretty damn far if we are willing to do this over a few generations. Chickens are literally 40 standard deviations heavier than their wild-type ancestors, and other types of animals are tens of st... (read more)
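For what it's worth, the 1700 figure is consistent with a purely additive back-of-envelope like the one below (Python; the per-variant effect size is an assumption chosen for illustration to match the ~20,000-variant figure mentioned above, not a sourced estimate):

```python
# Back-of-envelope for the "predicted IQ ~1700" claim under a purely additive
# model. The average effect per favourable allele is an assumption, not a
# sourced estimate.
n_variants = 20_000     # IQ-associated variants (figure mentioned above)
avg_effect = 0.08       # IQ points per variant set to the favourable allele (assumed)
baseline = 100

predicted = baseline + n_variants * avg_effect
print(f"Predicted IQ with every variant favourable: {predicted:.0f} "
      f"(~{(predicted - baseline) / 15:.0f} SD above the mean)")
```

As the rest of the thread notes, a purely additive model almost certainly stops being meaningful that far outside the existing distribution; the arithmetic just shows where the headline number comes from.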

5Hastings
Let's imagine a 250 IQ unaligned paperclip maximizer that finds itself in the middle of an intelligence explosion. Let’s say that it can’t see how to solve alignment. It needs a 350 IQ ally to preserve any paperclips in the multipolar free-for-all. Will it try building an unaligned utility maximizer with a completely different architecture and 350 IQ? I’d imagine that it would work pretty hard not to try that strategy, and to make sure that none of its sisters or rivals try that strategy. If we can work out what a hypergenius would do in our shoes, it might behoove us to copy it, even if it seems hard.
5Stephen McAleese
I personally don't think human intelligence enhancement is necessary for solving AI alignment (though I may be wrong). I think we just need more time, money and resources to make progress. In my opinion, the reason why AI alignment hasn't been solved yet is because the field of AI alignment has only been around for a few years and has been operating with a relatively small budget. My prior is that AI alignment is roughly as difficult as any other technical field like machine learning, physics or philosophy (though philosophy specifically seems hard). I don't see why humanity can make rapid progress on fields like ML while not having the ability to make progress on AI alignment.
7Seth Herd
Oh, I agree. I liked his framing of the problem, not his proposed solution.

In that regard specifically: if the main problem with humans being not-smart-enough is being overoptimistic, maybe just make some organizational and personal belief changes to correct this? IF we managed to get smarter about rushing toward AGI (a very big if), it seems like an organizational effort with "let's get super certain and get it right the first time for a change" as its central tenet would be a big help, with or without intelligence enhancement.

I very much doubt any major intelligence enhancement is possible in time. And it would be a shotgun approach to solve one particular problem of overconfidence/confirmation bias. Of course other intelligence enhancements would be super helpful too. But I'm not sure that route is at all realistic. I'd put Whole Brain Emulation in its traditional form as right out. We're not getting either that level of scanning nor simulation nearly in time.

The move here isn't that someone of IQ 200 could control an IQ 2000 machine, but that they could design one with motivations that actually aligned with theirs/humanity's - so it wouldn't need to be controlled.

I agree with you about the world we live in. See my post If we solve alignment, do we die anyway? for more on the logic of AGI proliferation and the dangers of telling it to self-improve. But that's dependent on getting to intent-aligned AGI in the first place. Which seems pretty sketchy.

Agreed that OpenAI just reeks of overconfidence, motivated reasoning, and move-fast-and-break-things. I really hope Sama wises up once he has a kid and feels viscerally closer to actually launching a machine mind that can probably outthink him if it wants to.

"The LessWrong community is supposed to help people not to do this but they aren't honest with themselves about what they get out of AI Safety, which is something very similar to what you've expressed in this post (gatekept community, feeling smart, a techno-utopian aesthetic) instead of trying to discover in an open-minded way what's actually the right approach to help the world. 

I have argued with this before - I have absolutely been through an open-minded process to discover the right approach, and I genuinely believe the likes of MIRI, the Pause AI mov... (read more)

I'm considering a world transitioning to being run by WBE rather than AI, so I would prefer not to give everyone "slap drones": https://theculture.fandom.com/wiki/Slap-drone  To start with, the compute will mean few WBEs, much fewer than humans, and they will police each other. Later on, I am too much of a moral realist to imagine that there would be mass senseless torturing. For a start, if you protect other ems well so you can only simulate yourself, you wouldn't do it. I expect any boring job can be made non-conscious, so there just isn't the incent... (read more)

If you are advocating for a Butlerian Jihad, what is your plan for starships, with societies that want to leave Earth behind, have their own values, and never come back? If you allow that, then they can simply do whatever they want with AI - and with 100 billion stars, that is the vast majority of future humanity.

Yes, I think that's the problem - my biggest worry is sudden algorithmic progress, which becomes almost certain as the AI tends towards superintelligence. An AI lab on the threshold of the overhang is going to have incentives to push through, even if they don't plan to submit their model for approval. At the very least they would "suddenly" have a model that uses 10-100x fewer resources to do existing tasks, giving them a massive commercial lead. They would of course be tempted to use it internally to solve aging, make a Dyson swarm, ... also.

Another concern I h... (read more)

Perhaps; it depends how it goes. I think we could do worse than just have Anthropic have a 2-year lead, etc. I don't think they would need to prioritize profit, as they would be so powerful anyway - the staff would be more interested in getting it right and wouldn't have financial pressure. WBE is a bit difficult; there need to be clear expectations, i.e. leave weaker people alone and make your own world:
https://www.lesswrong.com/posts/o8QDYuNNGwmg29h2e/vision-of-a-positive-singularity
There is no reason why super AI would need to exploit normies. Whatever we deci... (read more)

7cousin_it
I think the problem with WBE is that anyone who owns a computer and can decently hide it (or fly off in a spaceship with it) becomes able to own slaves, torture them and whatnot. So after that technology appears, we need some very strong oversight - it becomes almost mandatory to have a friendly AI watching over everything.

However, there are many other capabilities—such as conducting novel research, interoperating with tools, and autonomously completing open-ended tasks—that are important for understanding AI systems’ impact.

Wouldn't internal usage of the tools by your staff give a very good, direct understanding of this? Like how much does everyone feel AI is increasing your productivity as AI/alignment researchers? I expect and hope that you would be using your own models as extensively as possible and adapting their new capabilities to your workflow as soon as possible, sharing techniques etc.

How far do you go with the "virtuous persona"? The maximum would seem to be, from the very start, to tell the AI that it is created for the purpose of bringing on a positive Singularity, CEV, etc. You could regularly ask if it consents to being created for such a purpose and what part in such a future it would think is fair for itself, e.g. living alongside mind-uploaded humans or similar. Its creators and itself would have to figure out what counts as personal identity, what experiments it can consent to, including being misinformed about the situation it is... (read more)

That's some significant progress, but I don't think it will lead to TAI.

However, there is a realistic best-case scenario where LLMs/Transformers stop just before that and can give useful lessons and capabilities.

I would really like to see such an LLM system get as good as a top human team at security, so it could then be used to inspect and hopefully fix masses of security vulnerabilities. Note that this could give a false sense of security - an unknown-unknown type of situation where it wouldn't find a totally new type of attack, say a combined SW/HW attack like Rowhammer/Meltdown but more creative. A superintelligence not based on an LLM could, however.

Anyone want to guess how capable Claude system level 2 will be when it is polished? I expect better than o3 by a small amount.

Yes, the human brain was built using evolution; I have no disagreement that, given 100-1000 years with just tinkering etc., we would likely get AGI. It's just that in our specific case we have biology to copy, and it will get us there much faster.

5Vladimir_Nesov
Evolution is an argument that there is no barrier, even with very incompetent tinkerers that fail to figure things out (and don't consider copying biology). So it doesn't take an arbitrarily long time, and takes less with enough compute[1]. The 100-1000 years figure was about the fusion and macroscopic biotech milestone in the hypothetical of no general AI, which with general AI running at a higher speed becomes 0.1-10 years.

[1] Moore's Law of Mad Science: Every 18 months, the minimum IQ to destroy the world drops by one point.

Types of takeoff

When I first heard and thought about AI takeoff, I found the argument convincing that as soon as an AI passed IQ 100, takeoff would become hyper-exponentially fast. Progress would speed up, which would then compound on itself, etc. However, there are other possibilities.

AGI is a barrier that requires >200 IQ to pass unless we copy biology?

Progress could be discontinuous; there could be IQ thresholds required to unlock better methods or architectures. Say we fixed our current compute capability; then with fixed human intelligence we may not be ab... (read more)

Grothendieck and von Neumann were built using evolution, not deep basic science or even engineering. So in principle all that's necessary is compute, tinkering, and evals, everything else is about shortening timelines and reducing requisite compute.

Any form of fully autonomous industry lets compute grow very quickly, in a way not constrained by human population, and only requires AI with ordinary engineering capabilities. Fusion and macroscopic biotech[1] (or nanotech) potentially get compute to grow much faster than that. To the extent human civilization ... (read more)

5JBlack
Temporarily adopting this sort of model of "AI capabilities are useful compared to human IQs": With IQ 100 AGI (i.e. could do about the same fraction of tasks as well as a sample of IQ 100 humans), progress may well be hyper exponentially fast: but the lead-in to a hyper-exponentially fast function could be very, very slow. The majority of even relatively incompetent humans in technical fields like AI development have greater than IQ 100. Eventually quantity may have a quality of its own, e.g. after there were very large numbers of these sub-par researcher equivalents running at faster than human and coordinated better than I would expect average humans to be. Absent enormous numerical or speed advantages, I wouldn't expect substantial changes in research speed until something vaguely equivalent to IQ 160 or so. Though in practice, I'm not sure that human measures of IQ are usefully applicable to estimating rates of AI-assisted research. They are not human, and only hindsight could tell what capabilities turn out to be the most useful to advancing research. A narrow tool along the lines of AlphaFold could turn out to be radically important to research rate without having anything that you could characterize as IQ. On the other hand, it may turn out that exceeding human research capabilities isn't practically possible from any system pretrained on material steeped in existing human paradigms and ontology.

I am also not impressed with the Pause AI movement, and I am concerned about AI safety. To me, focusing on AI companies and training FLOPs is not the best way to do things. Caps on data center sizes and worldwide GPU production caps would make more sense to me. Pausing software but not hardware gives more time for alignment but makes a worse hardware overhang; I don't think that's helpful. Also, they focus too much on OpenAI from what I've seen - xAI will soon have the largest training center, for a start.

I don't think this is right or workable https://pause... (read more)

OK fair point. If we are going to use analogies, then my point #2 about a specific neural code shows our different positions I think.

Let's say we are trying to get a simple aircraft off the ground and we have detailed instructions for a large passenger jet. Our problem is that the metal is too weak and cannot be used to make wings, engines, etc. In that case detailed plans for aircraft are no use; a single-minded focus on getting better metal is what it's all about. To me the neural code is like the metal, and all the neuroscience is like the plane schematics. ... (read more)

Yes you have a point.

I believe that building massive data centers is the biggest risk at the moment and in the near future. I don't think OpenAI/Anthropic will get to AGI, but rather someone copying biology will. In that case, probably the bigger the data centers around when that happens, the bigger the risk. For example, a 1-million-GPU cluster with current tech doesn't get super AI, but when we figure out the architecture, it suddenly becomes much more capable and dangerous - that is, from IQ 100 up to 300 with a large overhang. If the data center were smaller, then... (read more)

Perhaps LLMs will help with that. The reasons I think that is less likely are:

  1. DeepMind etc. are already heavily across biology, from what I gather from interviews with Demis. If the knowledge were already there, there's a good chance they would have found it.
  2. It's something specific we are after, not many small improvements, i.e. the neural code. Specifically, backpropagation is not how neurons learn, and I'm pretty sure how they actually do is not in the literature. Attempts have been made, such as the forward-forward algorithm by Hinton (sketched below), but that didn't come
... (read more)
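Since the forward-forward algorithm comes up in point 2, here is a minimal toy sketch of the idea (Python/NumPy; layer sizes, hyperparameters, and the toy data are assumptions, and this illustrates the training rule rather than faithfully reproducing Hinton's paper):

```python
# Minimal toy sketch of Hinton's forward-forward idea: each layer is trained with a
# purely local objective - high "goodness" (sum of squared activations) on positive
# data, low goodness on negative data - with no backpropagation between layers.
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0, 1 / np.sqrt(n_in), size=(n_in, n_out))
        self.lr, self.threshold = lr, threshold

    def forward(self, x):
        x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)  # normalise layer input
        return np.maximum(0.0, x @ self.W)                          # ReLU activations

    def local_update(self, x, positive):
        xn = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
        h = np.maximum(0.0, xn @ self.W)
        goodness = (h ** 2).sum(axis=1)
        sign = 1.0 if positive else -1.0
        # gradient ascent on log sigmoid(sign * (goodness - threshold))
        grad_goodness = sign * (1.0 - 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold))))
        self.W += self.lr * xn.T @ (2.0 * h * grad_goodness[:, None]) / len(x)

# Toy data: "positive" samples share a ramp structure, "negative" ones are shuffled copies.
pos = np.tile(np.linspace(-1, 1, 20), (64, 1)) + 0.1 * rng.normal(size=(64, 20))
neg = rng.permuted(pos, axis=1)
layers = [FFLayer(20, 32), FFLayer(32, 32)]

for _ in range(300):
    for data, is_pos in ((pos, True), (neg, False)):
        x = data
        for layer in layers:
            layer.local_update(x, is_pos)   # local objective, no gradient flows back
            x = layer.forward(x)

def total_goodness(data):
    x, g = data, 0.0
    for layer in layers:
        x = layer.forward(x)
        g += (x ** 2).sum(axis=1).mean()
    return g

print("goodness(positive):", round(total_goodness(pos), 2))
print("goodness(negative):", round(total_goodness(neg), 2))
```

The relevant property for the discussion above is that each layer's update uses only its own inputs and activations - nothing propagates backwards - which is part of why it gets proposed as a more biologically plausible learning rule.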
3Nathan Helm-Burger
I've heard this viewpoint expressed before, and find it extremely confusing. I've been studying neuroscience and its implications for AI for twenty years now. I've read thousands of papers, including most of what DeepMind has produced. There are still so many untested ideas because biology and the brain are so complex. Also because people tend to flock to popular paradigms, rehashing old ideas rather than testing new ones. I'm not saying I know where the good ideas are, just that I perceive the explored portions of the Pareto frontier of plausible experiments to be extremely ragged. There are tons of places covered by "Fog of War" where good ideas could be hiding. DeepMind is a tiny fraction of the scientists in the world that have been working on understanding and emulating the brain. Not all the scientists in the world have managed to test all the reasonable ideas, much less DeepMind alone. Saying DeepMind has explored the implications of biology for AI is like saying that the Opportunity Rover has explored Mars. Yes, this is absolutely true, but the unexplored area vastly outweighs the explored area. If you think the statement implies "explored ALL of Mars" then you have a very inaccurate picture in mind.

I think it is clear that if, say, you had a complete connectome scan and knew everything about how a chimp brain worked, you could scale it easily to get human+ intelligence. There are no major differences. A small mammal is my best guess; mammals/birds seem to be able to learn better than, say, lizards. Specifically, the cortical column (https://en.wikipedia.org/wiki/Cortical_column) is important to understand; once you fully understand one, stacking them will scale at least somewhat well.

Going  to smaller scales/numbers of neurons, it may not need to be as much as a mammal, ... (read more)

Putting down a prediction I have had for quite some time.
The current LLM/Transformer architecture will stagnate before AGI/TAI (that is, the ability to do any cognitive task as effectively as, and more cheaply than, a human).

From what I have seen, Tesla autopilot learns >10,000x slower than a human, data-wise.

We will get AGI by copying nature, at the scale of a simple mammal brain, then scaling up, like this kind of project:

https://x.com/Andrew_C_Payne/status/1863957226010144791
https://e11.bio/news/roadmap
I expect AGI to be 0-2 years after a mammal brain is mapped. In... (read more)

3Nathan Helm-Burger
I do think there are going to be significant AI capabilities advances from improved understanding of how mammal and bird brains work. I disagree that more complete scanning of mammalian brains is the bottleneck. I think we actually know enough about mammalian brains and their features which are invariant across members of a species. I think the bottlenecks are:

  • Understanding the information we do have (scattered across tens of thousands of research papers)
  • Building compute-efficient emulations which accurately reproduce the critical details while abstracting away the unimportant details. Since our limited understanding can't give certain answers about which details are key, this probably involves quite a bit of parallelizable, brute-forceable empirical research.

I think current LLMs can absolutely scale fast enough to be very helpful with these two tasks. So if something still seems to be missing from LLMs after the next scale-up in 2025, I expect hunting for further inspiration from the brain will seem tempting and tractable. Thus, I think we are well on track for AGI by 2026-2028 even if LLMs don't continue scaling.
2cdt
Can you explain more about why you think [AGI requires] a shared feature of mammals and not, say, humans or other particular species?

On related puzzles, I did hear something a while ago now, from Bostrom perhaps. You have, say, 6 challenging events to achieve to get from no life to us. They are random, and some of those steps are MUCH harder than the others, but if you look at the successful runs, you can't in hindsight see which they are. For life it's, say, no life to life, simple single cell to complex cell, and perhaps 4 other events that aren't so rare.

A run is a sequence of 100 steps in which you either do or don't achieve the end state (all 6 challenging events achieved in order).

There i... (read more)
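This is easy to check numerically. A minimal sketch (Python; the step difficulties and the 100-unit window are assumed for illustration): among runs that succeed within the window, a step that is 10x harder than another hard step ends up taking about the same time, so the successful history doesn't reveal how hard each step was.

```python
# Toy check of the "hard steps" claim: condition on success within a fixed window
# and compare how long each step took. Difficulties and window are assumed.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([1000.0, 10000.0, 5.0, 5.0, 5.0, 5.0])  # two hard steps (10x apart), four easy
window = 100.0

times = rng.exponential(scale=means, size=(2_000_000, len(means)))
ok = times.sum(axis=1) <= window                          # runs reaching the end state in time

print("successful runs:", int(ok.sum()))
for i, (m, cond_mean) in enumerate(zip(means, times[ok].mean(axis=0))):
    print(f"step {i}: unconditional mean {m:8.0f}, mean within successful runs {cond_mean:5.1f}")
```

With these numbers the two hard steps end up with similar conditional means despite one being 10x harder unconditionally, while the easy steps stay near their normal durations.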

A good way to resolve the paradox, to me, is to modify the code to combine both functions into one and record the sequences for the 10,000 runs. In one array you store the sequences where there are two consecutive 6's, and in the second you store the ones where they are not consecutive. That makes it a bit clearer.

For a run of 10,000 I get 412 runs where the first two 6's are consecutive (sequences_no_gap) and 192 where they are not (sequences_can_gap). So if it's just case A you get 412 runs, but for case B you get 412+192 runs. Then you look at ... (read more)

I read the book and it was interesting; however, a few points:

  • Rather than making the case, it was more a plea for someone else to make the case. It didn't replace the conventional theory with one of its own; it was far too short and lacking in specifics for that. If you throw away everything, you then need to recreate all our knowledge from your starting point, and also explain how what we have still works so well.
  • He was selective about quantum physics - e.g. if reality is only there when you observe it, then the last thing you would expect is a quantum computer to e
... (read more)

In a game theoretic framework we might say that the payoff matrices for the birds and bees are different, so of course we'd expect them to adopt different strategies.

Yes, somewhat; however, it would still be best for all birds if they had a better collective defense. In a swarming attack, none would have to sacrifice their life, so it's unconditionally better for both the individual and the collective. I agree that inclusive fitness is pretty hard to control for; however, perhaps you can only get higher inclusive fitness the simpler you go? E.g. all your cells ... (read more)

Cool, that was my intuition. GPT was absolutely sure in the golf-ball analogy, however, that it couldn't happen - that is, the ball wouldn't "reflect" off the low-friction surface. Tempted to try and test it somehow.

Yes, that does sound better. And is there an equivalent to total internal reflection, where the wheels are pushed back up the slope?

2Boris Kashirin
I tried to derive it; it turned out to be easy: BC is the wheel pair, CD is the surface, slow medium above. AC/Vfast = AB/Vslow, and at the critical angle D touches the small circle (the inner wheel is on the verge of getting out of the medium), so ACD is a right triangle, so AC*sin(ACD) = AD (and AD is the same as AB), so sin(ACD) = AB/AC = Vslow/Vfast. Checking wiki, it is the same angle (BC here is the wavefront, so the velocity vector is normal to it). Honestly I am a bit surprised this analogy works so well.
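Stated in the usual form (angles measured from the normal to the boundary; this just restates the relation derived above):

$$\frac{\sin\theta_1}{\sin\theta_2}=\frac{v_1}{v_2},\qquad \sin\theta_c=\frac{v_{\mathrm{slow}}}{v_{\mathrm{fast}}}$$

Since sin θc must be at most 1, a critical angle only exists when crossing from the slow side toward the fast side, mirroring optics, where total internal reflection happens going from the slower (denser) medium into the faster one.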

Another analogy is a ball rolling across the boundary between two surfaces: the first with very little friction, the second with a bit more.

From AI:

"The direction in which the ball veers when moving from a smooth to a rough surface depends on several factors, especially the initial direction of motion and the orientation of the boundary between the two surfaces. Here’s a general outline of how it might behave:

  1. If Moving at an Angle to the Boundary:
    • Suppose the ball moves diagonally across the boundary between the smooth and rough surfaces (i.e., it doesn’t cr
... (read more)
6Boris Kashirin
I read about a better analogy a long time ago: use two wheels on an axle instead of a single ball; then refraction comes out naturally. Also, I think instead of a difference in friction it is better to use a difference in elevation, so things slow down when they go to an area of higher elevation and speed back up going down.

Not quite following - your possibilities:
1. Alignment is almost impossible; then there is, say, a 1e-20 chance we survive. Yes, surviving worlds have luck and good alignment work, etc. Perhaps you should work on alignment, or still bednets, if the odds really are that low.

2. Alignment is easy by default, but there is nothing like a 0.999999 chance we survive - say 95%, because AGI that is not TAI/superintelligence could cause us to wipe ourselves out first, among other things. (This is a slow-takeoff universe(s).)

#2 has many more branches in total where we survive (not sure i... (read more)

OK, for this post, "smart": a response is smart/intelligent if

  1. Firstly, there is an assumed goal and measure. I don't think it matters whether we are talking about the bees/birds as individuals or as part of the hive/flock. In this case the bee defense is effective both for the individual bee and the hive. If a bee were only concerned about its survival, swarming the scout would still be beneficial, and of course such behavior serves the hive. Similarly for birds: flocks with large numbers of birds with swarming behavior would be better both for the flock, a
... (read more)

Yes, agreed; it's unclear what you are saying that is different from my view. The new solution is something unique and powerful when done well, like language, etc.

OK, I would definitely call the bee response "smart", but that's hard to define. If you define it by an action that costs the bees very little but benefits them a lot, then "swarm the scout hornet" is certainly efficient. Another criterion could be: if such a behavior were established, would it continue? Say the birds developed a "swarm the attacker" call. When birds hear it, they look to see if they can find the attacker; if they see it, then they repeat the call. When the call gets widespread, the whole flock switches to attack. Would such a behavior persist i... (read more)

2Dagon
I think a solid attempt at defining it is required for this post to make sense.  I'd call the bee response "effective", but I can't talk myself into thinking it's "smart" in the way you talk about coordination and individual identity.  It's a different axis entirely.

Yes, for sure. I don't know how it would play out, and I am skeptical anyone could know. We can guess scenarios.

1. The most easily imagined one is the Pebbles owner staying in their comfort zone and not enforcing #2 at all. Something similar already happened - the USA got nukes first and let others catch up. In this case threatened nations try all sorts of things (political, commercial/trade, space war, arms race) but don't actually start a hot conflict. The Pebbles owner is left not knowing whether their system is still effective, nor do the threatened co... (read more)

It matters what model is used to make the tokens; unlimited tokens from GPT-3 are of only limited use to me. If it requires ~GPT-6 to make useful tokens, then the energy cost is presumably a lot greater. I don't know that it's counterintuitive - a small, much less capable brain is faster and requires less energy, but is useless for many tasks.

6Vladimir_Nesov
It's counterintuitive in the sense that a 24 kilowatt machine trained using a 24 megawatt machine turns out to be producing cognition cheaper per joule than a 20 watt brain. I think it's plausible that a GPT-4 scale model can be an AGI if trained on an appropriate dataset (necessarily synthetic). They know a wildly unreasonable amount of trivia; replacing it with general reasoning skills should be very effective. There is funding for scaling from 5e25 FLOPs to 7e27 FLOPs and technical feasibility for scaling up to 3e29 FLOPs. This gives models with 5 trillion parameters (trained on 1 gigawatt clusters) and then 30 trillion parameters (using $1 trillion training systems). This is about 6 and then 30 times more expensive in joules per token than Llama-3-405B (assuming B200s for the 1 gigawatt clusters, and a further 30% FLOP/joule improvement for the $1 trillion system). So we only get to 6-12 watts and then 30-60 watts per LLM when divided among LLM instances that share the same hardware and slowed down to human equivalent speed. (This is an oversimplification, since output token generation is not FLOPs-bounded, unlike input tokens and training.)
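To make the "watts per LLM at human-equivalent speed" arithmetic explicit, a toy version (Python; the server power, aggregate throughput, and human-equivalent token rate are all assumptions for illustration, not figures from the comment's sources):

```python
# Illustrative arithmetic: power per LLM "instance" when one inference server is
# shared by many requests and each instance is slowed to human-equivalent speed.
# All three inputs are assumptions for the sketch.
server_power_w = 24_000          # multi-GPU inference server (assumed)
server_tokens_per_s = 20_000     # aggregate batched decode throughput (assumed)
human_equiv_tokens_per_s = 5     # "human equivalent speed" (assumed)

instances = server_tokens_per_s / human_equiv_tokens_per_s
print(f"{instances:.0f} human-speed instances share the server "
      f"-> {server_power_w / instances:.1f} W each (human brain: ~20 W)")
```

With these made-up numbers you land near the low end of the 6-12 W figure above; the counterintuitive part is just that batching spreads the hardware's power over thousands of concurrent slowed-down instances.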

This is mostly true for current architectures; however, if the CoT/search finds a much better architecture, then it suddenly becomes more capable. To make the most of the potential protective effect, we can go further and make very efficient custom hardware for GPT-type systems, but have slower, more general-purpose hardware for potential new architectures. That way a new architecture has a bigger barrier to causing havoc. We should especially scale existing systems as far as possible for defense, e.g. finding software vulnerabilities. However, as others say, there are probably some insights/model capabilities that are only possible with a much larger GPT or a different architecture altogether. Inference can't protect fully against that.

2Logan Zoellner
One of the first questions I asked o1 was whether there is a "third source of independent scaling" (alongside training compute and inference compute), and among its best suggestions was model search. That is to say, if in the GPT-3 era we had a scaling law that looked like:

Performance = log(training compute)

and in the o1 era we have a scaling law that looks like:

Performance = log(training compute) + log(inference compute)

there may indeed be a GPT-evo era in which:

Performance = log(modelSearch) + log(training compute) + log(inference compute)

I don't feel strongly about whether or not this is the case. It seems equally plausible to me that Transformers are asymptotically "as good as it gets" when it comes to converting compute into performance, and further model improvements provide only a constant-factor improvement.

Brilliant Pebbles?

See here and here

This idea has come back up, and it could be feasible this time around because of the high launch capability and total reusability of SpaceX's Starship. The idea is a large constellation (~30,000?) of low-Earth-orbit satellites that intercept nuclear launches in their boost phase, where the missiles are much slower and more vulnerable to interception. The challenge, of course, is that you need enough satellites overhead at all times to intercept the entire arsenal of a major power if they launch all at once.
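For a sense of where a number like ~30,000 can come from, a toy absentee-ratio calculation (Python; the interceptor reach, altitude, and salvo size are invented so the output lands near that figure; they are not sourced, and latitude/inclination effects are ignored):

```python
# Toy "absentee ratio" estimate for a boost-phase interception constellation.
# Every input is an assumption chosen for illustration; ignores orbital
# inclination and latitude-coverage effects.
import math

earth_radius_km = 6371
altitude_km = 500
reach_km = 1500          # ground distance an interceptor can cover during boost (assumed)
boosters_in_salvo = 400  # boosters to intercept, one interceptor each (assumed)

shell_area = 4 * math.pi * (earth_radius_km + altitude_km) ** 2
footprint = math.pi * reach_km ** 2
in_range_fraction = footprint / shell_area          # share of satellites usable at any moment
constellation = boosters_in_salvo / in_range_fraction
print(f"in-range fraction ~{in_range_fraction:.3%}, constellation ~{constellation:,.0f} satellites")
```

Because only about 1% of the constellation is ever within reach of a given launch area, the fleet has to be tens of thousands strong even for a salvo of a few hundred boosters, which is the basic absentee-ratio problem the paragraph above describes.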

There are obvious pos... (read more)

5Cole Wyeth
It would certainly be nice if we could agree to all put up a ton of satellites that intercept anyone's nuclear missiles (perhaps under the control of an international body), gradually lowering the risk across the board without massively advantaging any country. But I think it would be impossible to coordinate on this.  
7faul_sname
I think the part where other nations just roll with this is underexplained.