All of Tao Lin's Comments + Replies

I agree with this so much! Like you, I very much expect benefits to be much greater than harms pre-superintelligence. If people are following the default algorithm "Deploy all AI which is individually net positive for humanity in the near term" (which is very reasonable from many perspectives), they will deploy TEDAI and not slow down until it's too late.

I expect AI to get better at research slightly sooner than you expect.

Interested to see evaluations on tasks not selected to be reward-hackable, and attempts to make performance closer to competitive with standard RL

4Rohin Shah
Us too! At the time we started this project, we tried some more realistic settings, but it was really hard to get multi-step RL working on LLMs. (Not MONA, just regular RL.) I expect it's more doable now. For a variety of reasons the core team behind this paper has moved on to other things, so we won't get to it in the near future, but it would be great to see others working on this!

A hypothetical typical example: it tries to use the file /usr/bin/python because it's memorized that that's the path to Python; that fails; then it concludes it must create that folder, which would require sudo permissions, and if it can, it could potentially mess something up

Not running amok, just not reliably following instructions like "only modify files in this folder" or "don't install pip packages". Claude follows instructions correctly; some other models are mode-collapsed into a certain way of doing things, e.g. gpt-4o always thinks it's running Python in the ChatGPT code interpreter, and you need very strong prompting to make it behave in a way specific to your computer

5Tao Lin
A hypothetical typical example: it tries to use the file /usr/bin/python because it's memorized that that's the path to Python; that fails; then it concludes it must create that folder, which would require sudo permissions, and if it can, it could potentially mess something up

I've recently done more runs of AI agents running amok, and I've found Claude was actually more aligned: it did stuff I asked it not to much less than OpenAI models, enough that it actually made a difference lol

2Daniel Kokotajlo
lol what? Can you compile/summarize a list of examples of AI agents running amok in your personal experience? To what extent was it an alignment problem vs. a capabilities problem?

I'd guess effort at Google/banks to be more leveraged than demos if you're only considering harm from scams and not general AI slowdown and risk

2ryan_greenblatt
I think I probably agree (though uncertain, as demos could prompt this effort), but I wasn't just considering reducing harm from scams. I care more about general societal understanding of AI and risks, and a demo has positive spillover effects.

Working on anti-spam/scam features at Google or banks could be a leveraged intervention under some worldviews. As AI advances it will be more difficult for most people to avoid getting scammed, and building really great protections into popular messaging platforms and banks could redistribute a lot of money from AIs to humans

2ryan_greenblatt
Why not think the scams will be run by humans (using AIs) and thus the intervention would reduce the transfer to these groups? In principle, groups could legally eat (some of) the free energy here by just red teaming everyone using a similar approach, but not actually taking their money. Currently, I'm more interested in work demonstrating that AI scams could get really good.

Like the post! I'm very interested in how the capabilities of prediction vs. character are changing with more recent models. E.g. the new Sonnet may have more of its capabilities tied to its character. And reasoning models maybe have a fourth layer between ground and character, possibly even completely replacing the ground layer in highly distilled models

Some of it, but not the main thing. I predict (without having checked) that if you do the analysis (or check an analysis that has already been done), it will have approximately the same amount of contamination from plastics, agricultural additives, etc as the default food supply.

Wow thank you for replying so fast! I donated $5k just now, mainly because you reminded me that lightcone may not meet goal 1 and that's definitely worth meeting. 

About web design, I'm only slightly persuaded by your response. In the example of Twitter, I don't really buy that there's public evidence that Twitter's website work, besides user-invisible algorithm changes, has had much impact. I only use the Following page, and don't use Spaces, lists, voice, or anything else on Twitter. Comparing Twitter with Bluesky/Threads/whatever, it really looks to me like cultural s... (read more)

4philh
It's hard for me! I had to give up on trying. The problem is that if I read the titles of most posts, I end up wanting to read the contents of a significant minority of posts, too many for me to actually read.
4habryka
I do think you are very likely overfitting heavily on your experience :P  As an example, the majority of traffic on LW goes to posts >1 year old, and for those, it sure matters how people discover them, and what UI you have for highlighting which of the ~100k LessWrong posts to read. Things like the Best of LessWrong, Sequences and Codex pages make a big difference in what people read and what gets traffic, as does the concept page. I agree for some of the most engaged people it matters more what the culture and writing tools and other things are, but I think for the majority of LessWrong users, even weighted by activity, recommendation systems and algorithm changes and UI affordances make a big difference.

My main crux about how valuable Lightcone donations are is how impactful great web dev on LessWrong is. If I look around, the impact of websites doesn't look strongly correlated with web design, especially on the very high end. My model is more like: platforms / social networks rise or fall by zeitgeist, moderation, big influencers/campaigns (e.g. Elon Musk for Twitter), web design, in that order. Olli has thought about this much more than me, maybe he's right. I certainly don't believe there's a good argument that LW web dev is responsible for its user metrics. Zeitgeist, moderation, and Lightcone people personally posting seem likely more important to me. Lightcone is still great despite my (uninformed) disagreement!

3Said Achmiz
I strongly disagree. In fact, Less Wrong is an excellent example of the effect of web design on impact/popularity/effectiveness (both for better and for worse; mostly better, lately).
5habryka
I think you are probably thinking of "web design" as something too narrow. I think the key attribute of "good web design" is not that it looks particularly beautiful, but that it figures out how to manage high levels of complexity in a way that doesn't confuse people. And of course, a core part of managing that complexity is to make tradeoffs about the relative importance of different user actions, and communicating the consequences of user actions in a way that makes sense with the core incentives and reward loops you want to set up for your site.

On Twitter, "web design" choices are things like "do you have Twitter Spaces", "what dimensions of freedom do you give users for customizing their algorithm?", "how do you display long-form content on Twitter?". These choices have large effect sizes and make or break a platform.

On LessWrong, these choices are things like "developing quick takes and figuring out how to integrate them into the site", or "having an annual review", or "having inline reacts", or "designing the post page in a way that causes people to link to them externally". And then the difficulty is not in making things nice, but in figuring out how to display all of these things in ways that don't obviously look overwhelming and broken.

As a concrete example, I think quick takes have been great for the site, but they only really took off in 2023. This is because we (in this case largely thanks to the EA Forum team) finally figured out how to give them the right level of visibility for the site, where it's subdued enough to not make anything you write on shortform feel high-stakes, but where the best shortforms can get visibility comparable to the best posts.

(I could also go into the relationship between web design and moderation, which is large, and where of course how your website is structured will determine what kind of content people write, which will determine the core engine of your website. Moderation without tech changes I think is rarely tha

The AI generally feels as smart as a pretty junior engineer (bottom 25% of new Google junior hires)

I expect it to be smarter than that. Plausibly o3 now generally feels as smart as 60th-percentile Google junior hires

Note: the Minecraft agents people use have far greater ability to act than to sense. They have access to commands which place blocks anywhere and pick up blocks from anywhere, even without being able to see them; e.g. the LLM has access to a mine(blocks.wood) command which does not require it to first locate or look at where the wood currently is. If LLMs played Minecraft using the human interface, these misalignments would happen less
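
To make the act/sense asymmetry concrete, here's a toy sketch in Python (not the real Mineflayer/Voyager API; the world and both helper functions are made up for illustration): the privileged mine-style command works on blocks the agent has never observed, while a human-style interface has to see the block first.

```python
import random

# Toy world: 20 wood blocks scattered at random coordinates.
# This is NOT the real Mineflayer/Voyager API -- just an illustration of
# "can act on blocks it cannot see".
WORLD = {(random.randint(-50, 50), random.randint(-50, 50)): "wood" for _ in range(20)}

def mine_via_agent_api(block_type: str) -> bool:
    """Privileged command: succeeds if the block exists anywhere, no sensing required."""
    for pos, block in list(WORLD.items()):
        if block == block_type:
            del WORLD[pos]
            return True
    return False

def mine_via_human_interface(block_type: str, player_pos=(0, 0), visible_radius: int = 5) -> bool:
    """Human-like interface: can only mine blocks currently within view."""
    px, py = player_pos
    for (x, y), block in list(WORLD.items()):
        if block == block_type and abs(x - px) <= visible_radius and abs(y - py) <= visible_radius:
            del WORLD[(x, y)]
            return True
    return False  # would have to explore / look around first

print(mine_via_agent_api("wood"))        # True: wood exists somewhere in the world
print(mine_via_human_interface("wood"))  # usually False: nothing visible near spawn
```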

1Yonatan Cale
I agree.

Building in California is bad for support from congresspeople! Better to build across all 50 states, like United Launch Alliance

I likely agree that Anthropic<->Palantir is good, but I disagree about blocking the US government out of AI being a viable strategy. It seems to me like many military projects get blocked by inefficient bureaucracy, and it seems plausible to me that some legacy government contractors could get exclusive deals that delay US military AI projects for 2+ years

1Tao Lin
Building in California is bad for support from congresspeople! Better to build across all 50 states, like United Launch Alliance

Why would the defenders allow the tunnels to exist? Demolishing tunnels isn't expensive; if attackers prefer to attack through tunnels, there likely isn't enough incentive for defenders not to demolish them

3Daniel Kokotajlo
The expensiveness of demolishing tunnels scales with the density of the tunnel network. (Unless the blast effects of underground explosives are generally stronger than I expect; I haven't done calculations). For sufficiently dense tunnel networks, demolishing enough of them would actually be quite expensive. E.g. if there are 1000 tunnels that you need to demolish per 1km of frontline, the quantity of explosive needed to do that would probably be greater than the quantity you'd need to make a gigantic minefield on the surface. (Minefields can be penetrated... but also, demolished tunnels can be re-dug.) 

I'm often surprised how little people notice, adapt to, or even punish self-deception. It's not very hard to detect when someone's deceiving themselves; people should notice more and disincentivize that

9Valentine
A few notes:

  • Sometimes this is obviously true. I agree.
  • It's a curious question why many folk turn their attention away from someone else's self-deception when it's obvious. Often they don't, but sometimes they do. Why they (we) do that is an interesting question worthy of some sincere curiosity.
  • Confirmation bias. You don't notice the cases where you don't pick up on someone else's self-deception.

Boy oh boy do I disagree. If someone's only option for dealing with a hostile telepath is self-deception, and then you come in and punish them for using it, thou art a dick. Like, do you think it helps the abused mothers I named if you punish them somehow for not acknowledging their partners' abuse? Does it even help the social circle around them?

Even if the "hostile telepath" model is wrong or doesn't apply in some cases, people self-deceive for some reason. If you don't dialogue with that reason at all and just create pain and misery for people who use it, you're making some situation you don't understand worse.

I agree that getting self-deception out of a culture is a great idea. I want less of it in general. But we don't get there by disincentivizing it.

This reads to me as, "We need to increase the oppression even more."

I prefer to just think about utility, rather than probabilities. Then you can have two different "incentivized Sleeping Beauty problems":

  • Each time you are awakened, you bet on the coin toss, with $ payout. You get to spend this money on that day or save it for later or whatever
  • At the end of the experiment, you are paid money equal to what you would have made betting at the average probability you stated when awoken.

In the first case, 1/3 maximizes your money, in the second case 1/2 maximizes it.

To me this implies that in real-world analogues of the Sleeping Beauty problem, you need to ask whether your reward is per-awakening or per-world, and answer accordingly
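
A minimal sketch of the two payout schemes, assuming the "$ payout" bet is scored with a quadratic (Brier) rule; that scoring rule is my assumption, not part of the comment. The per-awakening scheme is maximized by reporting ~1/3, the per-world scheme by ~1/2.

```python
# Expected payout as a function of the reported P(Heads), assuming a quadratic
# (Brier) scoring rule -- the scoring rule is an assumption for illustration.

def brier_payout(p: float, heads: bool) -> float:
    """Pay 1 - (outcome - p)^2, where outcome is 1 for Heads and 0 for Tails."""
    outcome = 1.0 if heads else 0.0
    return 1.0 - (outcome - p) ** 2

def expected_per_awakening(p: float) -> float:
    # Fair coin; Heads -> 1 awakening, Tails -> 2 awakenings, scored each time.
    return 0.5 * brier_payout(p, True) + 0.5 * 2 * brier_payout(p, False)

def expected_per_world(p: float) -> float:
    # Scored once per experiment, regardless of the number of awakenings.
    return 0.5 * brier_payout(p, True) + 0.5 * brier_payout(p, False)

grid = [i / 1000 for i in range(1001)]
print(f"per-awakening optimum: {max(grid, key=expected_per_awakening):.3f}")  # ~0.333
print(f"per-world optimum:     {max(grid, key=expected_per_world):.3f}")      # ~0.500
```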

1Radford Neal
That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie. EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.

I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? Not to mention all the random stuff that e.g. The Better Angels of Our Nature claims has gotten better.

Either things have improved in the past or they haven't, and either people trying to "steer the future" have been influential on these improvements or they haven't. I think things have improved, and I think there's definitely not strong evidence that people trying to steer the future were always useless. Because trying to steer the future is very important and mo... (read more)

3sarahconstantin
"Let's abolish slavery," when proposed, would make the world better now as well as later. I'm not against trying to make things better! I'm against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.

Do these options have a chance to default / are the sellers stable enough?

2ESRogs
Default seems unlikely, unless the market moves very quickly, since anyone pursuing this strategy is likely to be very small compared to the market for the S&P 500. (Also consider that these pay out in a scenario where the world gets much richer — in contrast to e.g. Michael Burry's "Big Short" swaps, which paid out in a scenario where the market was way down — so you're just skimming a little off the huge profits that others are making, rather than trying to get them to pay you at the same time they're realizing other losses.)

A core part of Paul's argument is that having 1/million of your values towards humans only applies a minute amount of selection pressure against you. It could be that coordination causes less kindness, because without coordination it's more likely that some fraction of agents have small vestigial values that never got selected against or intentionally removed

to me "alignment tax" usually only refers to alignment methods that don't cost-effectively increase capabilities, so if 90% of alignment methods did cost effectively increase capabilities but 10% did not, i would still say there was an "alignment tax", just ignore the negatives.

Also, it's important to consider cost-effective capabilities rather than raw capabilities - if a lab knows of a way to increase capabilities more cost-effectively than alignment, using that money for alignment is a positive alignment tax

I think this risks getting into a definitions dispute about what concept the words ‘alignment tax’ should point at. Even if one grants the point about resource allocation being inherently zero-sum, our whole claim here is that some alignment techniques might indeed be the most cost-effective way to improve certain capabilities and that these techniques seem worth pursuing for that very reason.

There's steganography; you'd need to limit the total bits not accounted for by the gating system, or something like that, to remove it

4Davidmanheim
I partly disagree; steganography is only useful when it's possible for the outside / receiving system to detect and interpret the hidden messages, so if the messages are of a type that outside systems would identify, they can and should be detectable by the gating system as well.  That said, I'd be very interested in looking at formal guarantees that the outputs are minimally complex in some computationally tractable sense, or something similar - it definitely seems like something that @davidad would want to consider.

Yes, in some cases a much weaker (because it's constrained to be provable) system can restrict the main AI, but in the case of LLM jailbreaks there is no particular hope that such a guard system could work (e.g. jailbreaks where the LLM answers in base64 require the guard to understand base64 and any other code the main AI could use)

2Davidmanheim
I agree that in the most general possible framing, with no restrictions on output, you cannot guard against all possible side-channels. But that's not true for proposals like safeguarded AI, where a proof must accompany the output, and it's not obviously true if the LLM is gated by a system that rejects unintelligible or not-clearly-safe outputs.

Interesting, this actually changed my mind, to the extent I had any beliefs about this already. I can see why you would want to update your prior, but the iterated mugging doesn't seem like the right type of thing that should cause you to update. My intuition is to pay all the single-coinflip muggings. For the digit-of-pi muggings, I want to consider how different this universe would be if the digit of pi were different. Even though both options are subjectively equally likely to me, one would be inconsistent with other observations, or less likely, or have something wrong with it, so I lean toward never paying.

2abramdemski
Yeah, in hindsight I realize that my iterated mugging scenario only communicates the intuition to people who already have it. The Lizard World example seems more motivating.

Train two nets, with different architectures (both capable of achieving zero training loss and good performance on the test set), on the same data.
...
Conceptually, this sort of experiment is intended to take all the stuff one network learned, and compare it to all the stuff the other network learned. It wouldn’t yield a full pragmascope, because it wouldn’t say anything about how to factor all the stuff a network learns into individual concepts, but it would give a very well-grounded starting point for translating stuff-in-one-net into stuff-in-another-net

... (read more)

Yeah, I agree the movie has to be very high quality to work. This is a long shot, although the best rationalist novels are actually high quality, which gives me some hope that someone could write a great novel/movie outline that's more targeted at plausible ASI scenarios

It's sad that open-source models like Flux have a lot of potential for customized workflows and finetuning, but few people use them

5Raemon
We've talked (a little) about integrating Flux more into LW, to make it easier to make good images (maybe with a soft nudge towards using "LessWrong watercolor style" by default if you don't specify something else). Although something habryka brought up is that a lot of people's images seem to be coming from Substack, which has its own (bad) version of it.

Yeah. One trajectory could be: someone in-community-ish writes an extremely good novel about a very realistic ASI scenario with the intention of it being adaptable into a movie, it becomes moderately popular, and it's accessible and pointed enough to do most of the guidance for the movie. I don't know exactly who could write this book; there are a few possibilities.

Another way this might fail is if fluid dynamics is too complex/difficult for you to constructively argue that your semantics are useful in fluid dynamics. As an analogy, if you wanted to show that your semantics were useful for proving Fermat's Last Theorem, you would likely fail because you simply didn't apply enough power to the problem, and I think you may fail that way in fluid dynamics.

6Thane Ruthenis
I'd expect that if the natural-abstractions theory gets to the point where it's theoretically applicable to fluid dynamics, then demonstrating said applicability would just be a matter of devoting some amount of raw compute to the task; it wouldn't be bottlenecked on human cognitive resources. You'd be able to do things like setting up a large-scale fluid simulation, pointing the pragmascope at it, and seeing it derive natural abstractions that match the abstractions human scientists and engineers derived for modeling fluids. And in the case of fluids specifically, I expect you wouldn't need that much compute. (Pure mathematical domains might end up a different matter. Roughly speaking, because of the vast gulf of computational complexity between solving some problems approximately (BPP) vs. exactly. "Deriving approximately-correct abstractions for fluids" maps to the former, "deriving exact mathematical abstractions" to the latter.)

Great post!

I'm most optimistic about "feel the ASI" interventions to improve this. I think once people understand the scale and gravity of ASI, they will behave much more sensibly here. The thing I intuitively feel most optimistic about (without really analyzing it) is movies, or generally very high-quality mass-appeal art.

I think better AGI depiction in movies and novels also seems to me like a pretty good intervention. I do think these kinds of things are very hard to steer on purpose (I remember some Gwern analysis somewhere on the difficulty of getting someone to create any kind of high-profile media on a topic you care about, maybe in the context of Hollywood).

You can recover lost momentum by decelerating things to land. OP mentions that briefly:

And they need a regular supply of falling mass to counter the momentum lost from boosting rockets. These considerations mean that tethers have to constantly adapt to their conditions, frequently repositioning and doing maintenance.

If every launch returns and lands on Earth, that would recover some but not all of the lost momentum, because of fuel spent on the trip. It's probably more complicated than that, though.
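
A first-order sketch of the momentum bookkeeping with made-up numbers (the masses and delta-v are assumptions, and it treats catch and release as happening at the same relative speed): the tether only recovers momentum in proportion to the mass that actually comes back, so payload and burned fuel leave a deficit.

```python
# First-order momentum bookkeeping for a momentum-exchange tether.
# All numbers below are illustrative assumptions, not real mission figures.

dv_tether = 2_000.0    # m/s imparted by the tether at release (assumed)
m_outbound = 10_000.0  # kg boosted: vehicle + payload + return fuel (assumed)
m_return = 7_000.0     # kg caught on the way back, after payload drop-off and burns (assumed)

momentum_given = m_outbound * dv_tether  # momentum the tether loses on the boost
momentum_back = m_return * dv_tether     # momentum it recovers catching the returning vehicle
net_deficit = momentum_given - momentum_back

print(f"given up on boost:      {momentum_given:.2e} kg*m/s")
print(f"recovered on catch:     {momentum_back:.2e} kg*m/s")
print(f"net deficit to make up: {net_deficit:.2e} kg*m/s ({net_deficit / momentum_given:.0%} of the boost)")
```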

Two versions with the same posttraining, one with only 90% of the pretraining, are indeed very similar; no need to evaluate both. But it's more likely something like one model having 80% of the pretraining and 70% of the posttraining of the final model, and the last 30% of posttraining might be significant

 if you tested a recent version of the model and your tests have a large enough safety buffer, it's OK to not test the final model at all.

I agree in theory but testing the final model feels worthwhile, because we want more direct observability and less complex reasoning in safety cases.

2Zach Stein-Perlman
Thanks. Is this because of posttraining? Ignoring posttraining, I'd rather that evaluators get the 90% through training model version and are unrushed than the final version and are rushed — takes?

With modern drones, searching in places with as few trees as Joshua Tree could be done far more effectively. I don't know if any parks have trained teams with ~$50k worth of drones ready, but if they did they could have found him quickly

2kave
See also frontier64 and eukaryote on helicopter searches.

I am guilty of citing sources I don't believe in, particularly in machine learning. There's a common pattern where most papers are low quality, and no one can/will investigate the validity of other people's papers or write review papers, so you usually form beliefs from an ensemble of lots of individually unreliable papers and your own experience. Then you're often asked for a citation and you're like "there's nothing public I believe in, but I guess I'll google papers claiming the thing I'm claiming and put those in". I think many ML people have ~given up on citing papers they believe in, including me.

2Elizabeth
Do you have any guesses for how that's affecting progress in ML?

I don't particularly like the status hierarchy and incentive landscape of the ML community, which seems quite well-optimized to cause human extinction

The incentives are indeed bad, but more like incompetent, and far from optimized to cause extinction

The reason Etched was less bandwidth-limited is that they traded latency for throughput by batching prompts and completions together. GPUs could also do that, but they don't, in order to improve latency

The reason airplanes need speed is basically that their propeller/jet blades are too small to be efficient at low speed. You need a certain amount of force to lift off, and the more air you push off of at once, the more force you get per unit of energy. Airplanes go sideways (forward) so that their wings, which are very big, can provide the lift instead of their engines. This also means that if you want to both go fast and hover efficiently, you need multiple mechanisms, because the low-volume, high-speed engine won't also be efficient at low speed
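
A minimal momentum-theory (actuator-disk) sketch of that claim, with illustrative numbers I picked: for the same thrust, the ideal hover power scales as 1/sqrt(disk area), so a big rotor (or, loosely, a big wing acting on a lot of air) needs far less power than a small, fast propeller.

```python
import math

# Ideal induced power to hover, from momentum (actuator-disk) theory:
# P = T^(3/2) / sqrt(2 * rho * A). Numbers below are illustrative assumptions.

RHO = 1.225  # kg/m^3, sea-level air density

def ideal_hover_power(thrust_n: float, disk_area_m2: float) -> float:
    return thrust_n ** 1.5 / math.sqrt(2 * RHO * disk_area_m2)

thrust = 10_000.0                # N, roughly one tonne of lift (assumed)
small_prop = math.pi * 0.5 ** 2  # 1 m diameter propeller
big_rotor = math.pi * 4.0 ** 2   # 8 m diameter rotor

print(f"small prop: {ideal_hover_power(thrust, small_prop) / 1e3:.0f} kW")
print(f"big rotor:  {ideal_hover_power(thrust, big_rotor) / 1e3:.0f} kW")
# Same thrust, ~8x less ideal power for the rotor with 64x the disk area.
```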

Yeah, learning from distant near misses is important! Feels that way in risky electric unicycling.

No, the MI300X is not superior to Nvidia's chips, largely because it costs >2x as much to manufacture as Nvidia's chips

This makes a much worse LessWrong post than Twitter thread; it's just a very rudimentary rehashing of very long-standing debates

4habryka
Yeah, I do think that's what Twitter tends to do. Which is helpful as someone casually engaged with things, but it's not a good venue for anyone who wants to actually engage with a long-standing conversation.

For reference, just last week I rented 3 8xH100 boxes without any KYC

I don't think slaughtering billions of people would be very useful. As a reference point, wars between countries almost never result in slaughtering that large a fraction of people

2ryan_greenblatt
Unfortunately, if the AI really barely cares (e.g. <1/billion caring), it might only need to be barely useful. I agree it is unlikely to be very useful.

lol, Paul is a very non-disparaging person. He always makes his criticism constructive; I don't know if there's any public evidence of him disparaging anyone, regardless of NDAs

I've recently gotten into partner dancing and I think it's a pretty superior activity

One lesson you could take away from this is "pay attention to the data, not the process": this happened because the data had longer successes than failures. If successes were more numerous than failures, many algorithms would have imitated those as well, even with null reward.

I think the "fraction of Training compute" going towards agency vs nkn agency will be lower in video models than llms, and llms will likely continue to be bigger, so video models will stay behind llms in overall agency

Helpfulness finetuning might make these models more capable when they're on the correct side of the debate. Sometimes RLHF(-like) models simply perform worse on tasks they're finetuned to avoid, even when they don't refuse or give up. Would be nice to try base-model debaters

1Akbir Khan
Hey Tao,

We agree this is a major limitation, and discuss this within the Discussion and Appendix sections.

We tried using base GPT-4; unfortunately, as it has no helpfulness training, it finds it exceptionally hard to follow instructions.

We'd love access to helpful-only models, but currently no scaling labs offer this. It's on the list.

A core advantage of bandwidth limiting over other cybersecurity interventions is that it's a simple system we can make stronger arguments about, implemented on a simple processor, without the complexity and uncertainty of modern processors and OSes
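
As one concrete illustration of the kind of simple, easy-to-argue-about mechanism meant here, a minimal token-bucket bandwidth limiter sketch (the interface and parameters are my own illustration, not any specific proposal): the entire state is two numbers, so its worst-case behavior is easy to bound even on a tiny processor.

```python
class TokenBucket:
    """Minimal token-bucket bandwidth limiter: refill at a fixed rate, cap at a burst size."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last_time = 0.0

    def allow(self, now: float, packet_bytes: int) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # caller drops or delays the packet

bucket = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=100_000)
print(bucket.allow(now=0.0, packet_bytes=50_000))  # True: within the burst budget
print(bucket.allow(now=0.0, packet_bytes=80_000))  # False: budget exhausted until tokens refill
```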
