All of PeterMcCluskey's Comments + Replies

The first year or two of human learning seem optimized enough that they're mostly in evolutionary equilibrium - see Henrich's discussion of the similarities to chimpanzees in The Secret of Our Success.

Human learning around age 10 is presumably far from equilibrium.

I'll guess that I see more of the valuable learning as taking place in the first 2 years or so than other people here do.

2Noosphere89
I have 2 cruxes here:

1. I buy Henrich's theory far less than I used to, because Henrich made easily checkable false claims that all point in the direction of culture being more necessary for human success. In particular, I do not buy that humans and chimpanzees are nearly as similar as Henrich describes, and a big reason for this is that the study showing that had heavily optimized and selected the best chimpanzees against reasonably average humans, which is not a good way to compare performance if you want the results to generalize. I don't think they're wildly different, and I'd usually put chimps' effective FLOPs 1-2 OOMs lower, but I wouldn't go nearly as far as Henrich on the similarities. I do think culture actually matters, but nowhere near as much as Henrich wants it to matter.
2. I basically disagree that most of the valuable learning takes place before age 2, and indeed if I wanted to argue for the most valuable period for learning, it would probably be 0-25 years, or more specifically 2-7 years old and then 13-25 years old again.

I agree with most of this, but the 13 OOMs from the software feedback loop sounds implausible.

From How Far Can AI Progress Before Hitting Effective Physical Limits?:

the brain is severely undertrained, humans spend only a small fraction of their time on focussed academic learning

I expect that humans spend at least 10% of their first decade building a world model, and that evolution has heavily optimized at least the first couple of years of that. A large improvement in school-based learning wouldn't have much effect on my estimate of the total learning needed.
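A minimal sketch of the arithmetic behind that estimate; the waking-hours figure is an assumption added here, not something from the linked post:

```python
# Rough BOTEC of the "10% of the first decade" estimate.
# All inputs are illustrative assumptions, not figures from the comment above.
years = 10
days_per_year = 365
waking_hours_per_day = 16      # assumed
world_model_fraction = 0.10    # "at least 10% of their first decade"

world_model_hours = years * days_per_year * waking_hours_per_day * world_model_fraction
print(f"world-model building: ~{world_model_hours:,.0f} hours")  # ~5,840 hours
```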

1Tom Davidson
It does sound like a lot -- that's 5 OOMs to reach human learning efficiency and then 8 OOMs more. But when we BOTECed the sources of algorithmic efficiency gain on top of the human brain, it seemed like you could easily get more than 8. But agreed, it seems like a lot. Though we are talking about ultimate physical limits here! Interesting re the early years. So you'd accept that learning from age 5 or 6 could be OOMs more efficient, but would deny that the early years could be improved? Though you're not really speaking to the 'undertrained' point, which is about the number of params vs data points.
2Noosphere89
I agree evolution has probably optimized human learning, but I don't think that it's so heavily optimized that we can use it to give a tighter upper bound than 13 OOMs. The reason for this is that I do not believe humans are in equilibrium, which means there are probably optimizations left to discover, so I do think the 13 OOMs number is plausible (with high uncertainty). Comment below: https://www.lesswrong.com/posts/DbT4awLGyBRFbWugh/#mmS5LcrNuX2hBbQQE

This general idea has been discussed under the term myopia.

I'm assuming that the AI can accomplish its goal by honestly informing governments. Possibly that would include some sort of demonstration of the AI's power that would provide compelling evidence that the AI would be dangerous if it weren't obedient.

I'm not encouraging you to be comfortable. I'm encouraging you to mix a bit more hope in with your concerns.

One crux is how soon do we need to handle the philosophical problems? My intuition says that something, most likely corrigibility in the Max Harms sense, will enable us to get pretty powerful AIs while postponing the big philosophical questions.

Are there any pivotal acts that aren’t philosophically loaded?

My intuition says there will be pivotal processes that don't require any special inventions. I expect that AIs will be obedient when they initially become capable enough to convince governments that further AI development would be harmful (if it would... (read more)

4Raemon
Seems like "the AIs are good enough at persuasion to persuade governments and someone is deploying them for that" is right when you need to be very high confidence they're obedient (and don't have some kind of agenda). If they can persuade governments, they can also persuade you of things. I also think it gets to a point where I'd sure feel way more comfortable if we had more satisfying answers to "where exactly are we supposed to draw the line between 'informing' and 'manipulating'" (I'm not 100% sure what you're imagining here tho)

It would certainly be valuable to have AIs that are more respected than Wikipedia as a source of knowledge.

I have some concerns about making AIs highly strategic. I see some risk that strategic abilities will be the last step in the development of AI that is powerful enough to take over the world. Therefore, pushing AI intellectuals to be strategic may bring that risk closer.

I suggest aiming for AI intellectuals that are a bit more passive, but still authoritative enough to replace academia as the leading validators of knowledge.

2ozziegooen
"I see some risk that strategic abilities will be the last step in the development of AI that is powerful enough to take over the world." Just fyi - I feel like this is similar to what others have said. Most recently, benwr had a post here: https://www.lesswrong.com/posts/5rMwWzRdWFtRdHeuE/not-all-capabilities-will-be-created-equal-focus-on?commentId=uGHZBZQvhzmFTrypr#uGHZBZQvhzmFTrypr Maybe we could call this something like "Strategic Determinism"  I think one more precise claim I could understand might be: 1. The main bottleneck to AI advancement is "strategic thinking" 2. There's a decent amount of uncertainty on when or if "strategic thinking" will be "solved" 3. Human actions might have a lot of influence over (2). Depending on what choices humans make, strategic thinking might be solved sooner or much later. 4. Shortly after "strategic thinking" is solved, we gain a lot of certainty on what future trajectory will be like. As in, the fate of humanity is sort of set by this point, and further human actions won't be able to change it much. 5. "Strategic thinking" will lead to a very large improvement in potential capabilities. One main reason is that it would lead to recursive self-improvement. If there is one firm that has sole access to an LLM with "strategic thinking", it is likely to develop a decisive strategic advantage. I think personally, such a view seems too clean to me. 1. I expect that there will be a lot of time where LLMs get better at different aspects of strategic thinking, and this helps to limited extents. 2. I expect that better strategy will have limited gains in LLM capabilities, for some time. The strategy might suggest better LLM improvement directions, but these ideas won't actually help that much. Maybe a firm with a 10% better strategist would be able to improve it's effectiveness by 5% per year or something. 3. I think there are could be a bunch of worlds where we have "idiot savants" who are amazing at some narrow kinds of tasks (c
2ozziegooen
Alexander Gordon-Brown challenged me on a similar question here: https://www.facebook.com/ozzie.gooen/posts/pfbid02iTmn6SGxm4QCw7Esufq42vfuyah4LCVLbxywAPwKCXHUxdNPJZScGmuBpg3krmM3l One thing I wrote there:

I expect that over time we'll develop better notions about how to split up and categorize the skills that make up strategic work. I suspect some things will have a good risk-reward tradeoff and some won't. I expect that people in the rationality community over-weight the importance of, well, rationality.

My main point with this topic is that I think our community should be taking this topic seriously, and that I expect there's a lot of good work that could be done that's tractable, valuable, and safe. I'm much less sure about exactly what that work is, and I definitely recommend that work here really try to maximize the reward/risk ratio. Some quick heuristics that I assume would be good are:

- Having AIs be more correct about epistemics and moral reasoning on major global topics generally seems good. Ideally there are ways of getting that that don't require huge generic LLM gains.
- We could aim for expensive and slow systems.
- There might not be a need to publicize such work much outside of our community. (This is often hard to do anyway.)
- There's a lot of work that would be good for people we generally trust, and alienate most others (or be less useful for other use cases). I think our community focuses much more on truth-seeking, Bayesian analysis, forecasting, etc.
- Try to quickly get the best available reasoning systems we might have access to, to be used to guide strategy on AI safety. In theory, this cluster can be ahead-of-the-curve.
- Great epistemic AI systems don't need much agency or power. We can heavily restrict them to be tool AIs.
- Obviously, if things seriously get powerful, there are a lot of various techniques that could be used (control, evals, etc.) to move slowly and err on the safe side.

The book is much better than I expected, and deserves more attention. See my full review on my blog.

The market seems to underestimate the extent to which Micron (MU) is an AI stock. My only options holdings for now are December 2026 MU calls.

6DPiepgrass
I'm curious, what makes it more of an AI stock than... whatever you're comparing it to?
1Rasool
What do you make of Hynix?

I had a vaguely favorable reaction to this post when it was first posted.

When I wrote my recent post on corrigibility, I grew increasingly concerned about the possible conflicts between goals learned during pretraining and goals that are introduced later. That caused me to remember this post, and decide it felt more important now than it did before.

I'll estimate a 1 in 5000 chance that the general ideas in this post turn out to be necessary for humans to flourish.

"OOMs faster "? Where do you get that idea?

Dreams indicate a need for more processing than what happens when we're awake, but likely less than 2x waking time.

1FL33TW00D
The linked video says so at 30:45

I was just thinking about writing a post that overlaps with this, inspired by a recent Drexler post. I'll turn it into a comment.

Leopold Aschenbrenner's framing of a drop-in remote worker anthropomorphizes AI in a way that risks causing AI labs to make AIs more agenty than is optimal.

Anthropomorphizing AI is often productive. I use that framing a fair amount to convince myself to treat AIs as more capable than I'd expect if I thought of them as mere tools. I collaborate better when I think of the AI as a semi-equal entity.

But it feels important to be able ... (read more)

6ryan_b
I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need. Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.

I want to register different probabilities:

  1. Jobs hits first (25%).
  2. AGI race hits first (50%).
  3. Alignment hits first (15%).

My guess is that ASI will be faster to adapt to novel weapons and military strategies. Nanotech is likely to speed up the rate at which new weapons are designed and fabricated.

Imagine a world in which a rogue AI can replicate a billion drones, of a somewhat novel design, in a week or so. Existing human institutions aren't likely to adapt fast enough to react competently to that.
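A minimal sketch of the replication arithmetic, assuming exponential doubling from a single seed unit (an assumption added here, not a claim from the scenario above):

```python
import math

# How fast would production need to double to reach a billion drones in a week?
target_units = 1_000_000_000
doublings_needed = math.log2(target_units)   # ~29.9 doublings
hours_in_week = 7 * 24
doubling_time_hours = hours_in_week / doublings_needed

print(f"doublings needed: {doublings_needed:.1f}")
print(f"implied doubling time: ~{doubling_time_hours:.1f} hours")  # ~5.6 hours
```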

I just published a post on Drexler's MSEP software that is relevant to whether people should donate to his project.

two more organizations that seem worthy of consideration

Investing in Eon Systems looks much more promising than donating to Carbon Copies.

I see maybe a 3% chance that they'll succeed at WBE soon enough to provide help with AI x-risk.

The Invention of Lying provides a mostly accurate portrayal of a world where everyone is honest. It feels fairly Hansonian.

No, I don't recall any ethical concerns. Just basic concerns such as the difficulty of finding a boss that I'm comfortable with, having control over my hours, etc.

Oura also has heart rate and VO2 max tracking. Does anyone know of problems with Oura's data?

1Crissman
Oura also fine. Some of the people in the beta group are using them.

The primary motive for funding NASA was definitely related to competing with the USSR, but I doubt that it was heavily focused on military applications. It was more along the lines of demonstrating the general superiority of the US system, in order to get neutral countries to side with us because we were on track to win the cold war.

Manifold estimates an 81% chance of ASI by 2036, using a definition that looks fairly weak and subjective to me.

I've bid the brain emulation market back up a bit.

Brain emulation looks closer than your summary table indicates.

Manifold estimates a 48% chance by 2039.

Eon Systems is hiring for work on brain emulation.

3Knight Lee
Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren't that "close" to these other technologies. Maybe they believe in a ≈38% chance of superintelligence by 2039. PS: Your comment may have caused it to drop to 38%. :)
papetoast*1116

Manifold is pretty weak evidence for anything >=1 year away because there are strong incentives to bet on short term markets.

7TsviBT
I'm not sure how to integrate such long-term markets from Manifold. But anyway, that market seems to have a very vague notion of emulation. For example, it doesn't mention anything about the emulation doing any useful cognitive work!

We can only value lives at $10 million when we have limited opportunities to make that trade, or we’d go bankrupt.

I'm suspicious of the implication that we have many such opportunities. But a quick check suggests it's very dependent on guesses as to how many lives are saved by treatments.

I did a crude check for lives saved by cancer treatments. Optimistic estimates suggest that lives are being saved at less than $1 million per life. Robin Hanson's writings have implied that the average medical treatment is orders of magnitude less effective than that.
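A minimal sketch of the shape of that crude check; the numbers are placeholders, not estimates from Hanson or the cancer literature:

```python
# Crude cost-per-life-saved check. Both inputs are hypothetical placeholders,
# intended only to show the shape of the calculation.
def cost_per_life_saved(total_spending_usd: float, lives_saved: float) -> float:
    return total_spending_usd / lives_saved

# Example with made-up round numbers: $100B of treatment spending saving 200,000 lives
# would come out to $500k per life, i.e. under the $1 million/life figure above.
print(f"${cost_per_life_saved(100e9, 200_000):,.0f} per life saved")
```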

Could last year's revamping of OpenAI's board have been influenced by government pressure to accept some government-approved board members? Nakasone's appointment is looking more interesting after reading this post.

4Deric Cheng
That's a very good point! Technically he's retired, but I wonder how much his appointment is related to preparing for potential futures where OpenAI needs to coordinate with the US government on cybersecurity issues...

Soaking seeds overnight seems to be a good way to reduce phytic acid.

Answer by PeterMcCluskey40

oral probiotics in general might just all be temporary.

The solution to concerns about it being temporary is to take them daily. I take Seed Daily Synbiotic. My gut is probably better as a result, but I don't have evidence that is at all rigorous.

The beginning of this comment is how Lintern expands on that claim. But it sounds like you have an objection that isn't well addressed there.

If cancer merely involved one bad feature, I could imagine software analogies that involved a large variety of mistakes producing that one bad feature.

The hallmarks of cancer indicate that all cancers have a number of bad features in common that look sufficiently unrelated to each other that it seems hard to imagine large sets of unrelated mutations all producing those same hallmarks. Lintern lists many other features... (read more)

Maybe? It doesn't seem very common for infectious diseases to remain in one area. It depends a lot on how they are transmitted. It's also not unusual for a non-infectious disease to have significant geographical patterns. There are cancers which are concentrated in particular areas, but there seem to be guesses for those patterns that don't depend on fungal infections.

Thanks. You've convinced me that Lintern overstates the evidence of mutation-free cancer cells.

But you seem to have missed really obvious consequences of the fungi theory, like, "wouldn't it be infectious then",

I very much did not miss that.

containing some potentially pretty dangerous advice like "don't do chemotherapy".

Where did I say that?

2Yair Halberstadt
I would consider this one of the most central points to clarify, yet the OP doesn't discuss it at all, and your response to it being pointed out was 3 sentences, despite there being ample research on the topic which points strongly in the opposite direction. I never said you said it, I said the book contains such advice:

Enough that it should have been noticed.

My guess is that almost nobody looks for this kind of connection.

Even if they do notice it, they likely conclude that pathogens are just another small influence on cancer risk.

Because radiation cannot spread a fungus

Anything that causes cell damage and inflammation has effects that sometimes make cells more vulnerable to pathogens.

How would transmission be detected? It probably takes years before a tumor grows big enough for normal methods to detect it.

I assume that transmission is common, mild infections are common, and they rarely become harmful tumors.

4Linch
Genes vs environment seems like an obvious thing to track. Most people in most places don't move around that much (unlike many members of our community), so if cancers are contagious, then for many cancers, especially rarer ones, you'd expect to see strong regional correlations (likely stronger than genetic correlations).

It probably takes years before a tumor grows big enough for normal methods to detect it.

There exist fast-growing cancers.  I figure that if the fungi theory is correct, then probably a good amount of this is caused by the specific fungus (and perhaps what part of the body that fungus targets), and most of the rest comes from the target's immune system (not sure what else would contribute significantly).  If transmission and mild infections are common, and if, say, 1% of cancers are fast-growing, I feel like there should be lots of cases where an ... (read more)

2mishka
Thanks for the book review! But don't we have a bunch of cancer subtypes where we have had drastic treatment improvements in recent years? Those improvements seem to be arguments against "single cause" approaches (although a particular "single cause" could still dominate a good chunk of the cancer types and could be an important factor almost everywhere).

This comment describes some relevant research.

From Somatic Mutation Theory - Why it's Wrong for Most Cancers:

It should come as no surprise, therefore, that somatic mutations are questioned as representing "the" cause for the majority of cancers [10,11] and it should be noted that some cancers are not associated with any mutations whatsoever.

Importantly, a detailed analysis of 31,717 cancer cases and 26,136 cancer-free controls from 13 genome-wide association studies [48] revealed that "the vast majority, if not all, of aberrations that were observed i

... (read more)
3tailcalled
Isn't case-control GWAS the wrong tool for the job since it's blind to rare mutations? I'd compare a person's cancerous cells to their normal cells instead, though I'm not an expert so maybe there's a problem with this.

An important drawback is that the difficulty of beating the market fluctuates depending on factors such as who the other traders are, and what kind of news is moving the markets.

I'm holding a modest long position in NVIDIA (smaller than my position in Google), and expect to keep it for at least a few more months. I expect I only need NVIDIA margins to hold up for another 3 or 4 years for it to be a good investment now.

It will likely become a bubble before too long, but it doesn't feel like one yet.

It currently looks like the free version of ChatGPT is good enough that I wouldn't get much benefit from a subscription. I have little idea how long this will remain true.

Yeah, and it's not obvious that 4o is currently the best chatbot. I just object to the boycott-without-cost-benefit-analysis.

The more complex the rules get, the harder it gets to enforce them.

If the threshold is used merely for deciding who needs to report to regulators, then it seems appropriate to use the simple rule. We should care mainly that it applies to the most powerful models at any one time, not that it applies to a fixed capability level.

For a threshold that's designed to enforce a long-term pause, it's going to be hard to do more than slow capability progress without restrictions on GPU production.

Answer by PeterMcCluskey30

My favorite power-related stock is CSIQ (Canadian Solar).

I also have positions in lithium mining companies (for grid storage), and construction companies that have some focus on power grids (e.g. MYRG).

Uranium bets are harder to evaluate.

You seem to assume we should endorse something like average utilitarianism. Bostrom and I consider total utilitarianism to be closer to the best moral framework. See Parfit's writings if you want deep discussion of this topic.

1J
Thanks! Just read some summaries of Parfit. Do you know any literature that addresses this issue within the context of a) impacts to other species, or b) using artificial minds as the additional population? I assume the total utilitarianism theory assumes arbitrarily growing physical space for populations to expand into and would not apply to finite spaces or resources (I think I recall Bostrom addressing that). Reading up on Parfit also made me realize that Deep Utopia really has prerequisites, and you were right that it's probably more readily understood by those with a philosophy background. I didn't really understand what he was saying about utilitarianism until just reading about Parfit.

I can't recall any clear predictions or advice, just a general presumption that it will be used wisely.

Given utopian medicine, Gwern's points seem not very important.

He predicts that it will be possible to do things like engineer away sadness. He doesn't devote much attention to convincing skeptics that such engineering will be possible. He seems more interested in questions of whether we should classify the results as utopian.

2Seth Herd
Thanks! I'm also uninterested in the question of whether it's possible. Obviously it is. The question is how we'll decide to use it. I think that answer is critical to whether we'd consider the results utopian. So, does he consider how we should or will use that ability?

What evidence do you have about how much time it takes per day to maintain the effect after the end of the 2 weeks?

2George3d6
No idea, I would re-do the tests on myself but I was semi-present for the replication so I'd rather wait more time. All 3 of us might try to re-do the tests in a month and I can get 4-5 controls to re-do them too. Then I'd have numbers 1 month in. This is also an important question for me.
Answer by PeterMcCluskey62

The part about "securities with huge variance" is somewhat widely used. See how much EA charities get from crypto and tech startup stock donations.

It's unclear whether the perfectly anti-correlated pair improves this kind of strategy. I guess you're trying to make the strategy more appealing to risk-averse investors? That sounds like it maybe should work, but is hard because risk-averse investors don't want to be early adopters of a new strategy?

Doesn't this depend on what we value?

In particular, you appear to assume that we care about events outside of our lightcone in roughly the way we care about events in our near future. I'm guessing a good deal of skepticism of ECL is a result of people not caring much about distant events.

3Chi Nguyen
Yeah, you're right that we assume that you care about what's going on outside the lightcone! If that's not the case (or only a little bit the case), that would limit the action-relevance of ECL. (That said, there might be some weird simulations-shenanigans or cooperating with future earth-AI that would still make you care about ECL to some extent although my best guess is that they shouldn't move you too much. This is not really my focus though and I haven't properly thought through ECL for people with indexical values.)

I had nitrous oxide once at a dentist. It is a dissociative anesthetic. It may have caused something like selective amnesia. I remember that the dentist was drilling, but I have no clear memory of pain associated with it. It's a bit hard to evaluate exactly what it does, but it definitely has some benefits. Maybe the pain seemed too distant from me to be worth my attention?

Answer by PeterMcCluskey196

A much higher fraction of the benefits of prediction markets are public goods.

Most forms of insurance took a good deal of time and effort before they were widely accepted. It's unclear whether there's a dramatic difference in the rate of adoption of prediction markets compared to insurance.

I'm reaffirming my relatively extensive review of this post.

The simbox idea seems like a valuable guide for safely testing AIs, even if the rest of the post turns out to be wrong.

Here's my too-terse summary of the post's most important (and more controversial) proposal: have the AI grow up in an artificial society, learning self-empowerment and learning to model other agents. Use something like retargeting the search to convert the AI's goals from self-empowerment to empowering other agents.
