All of leogao's Comments + Replies

leogao20

i'm happy to grant that the 0.1% is just a fermi estimate and there's a +/- one OOM error bar around it. my point still basically stands even if it's 1%.

i think there are also many factors in the other direction that just make it really hard to say whether 0.1% is an under or overestimate.

for example, market capitalization is generally an overestimate of value when there are very large holders. tesla is also a bit of a meme stock so it's most likely trading above fundamental value.

my guess is most things sold to the public sector probably produce less econ... (read more)

leogao110

you might expect that the butterfly effect applies to ML training. make one small change early in training and it might cascade to change the training process in huge ways.

at least in non-RL training, this intuition seems to be basically wrong. you can do some pretty crazy things to the training process without really affecting macroscopic properties of the model (e.g loss). one very well known example is that using mixed precision training results in training curves that are basically identical to full precision training, even though you're throwing out a ton of bits of precision on every step.
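(for concreteness, a minimal sketch of what the mixed-precision switch looks like, assuming pytorch's amp API since the comment doesn't name a framework; the model and optimizer here are toy stand-ins. removing the autocast/GradScaler lines gives full precision training, and the two loss curves typically look near-identical.)

```python
import torch

# toy model and optimizer; any standard setup would do
model = torch.nn.Linear(512, 512).cuda()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss so fp16 gradients don't underflow

for step in range(100):
    x = torch.randn(32, 512, device="cuda")
    with torch.cuda.amp.autocast():       # forward pass runs mostly in half precision
        loss = (model(x) - x).pow(2).mean()
    opt.zero_grad(set_to_none=True)
    scaler.scale(loss).backward()         # backward on the scaled loss
    scaler.step(opt)                      # unscales gradients, then steps the optimizer
    scaler.update()
```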

leogao120

there's an obvious synthesis of great man theory and broader structural forces theories of history.

there are great people, but these people are still bound by many constraints due to structural forces. political leaders can't just do whatever they want; they have to appease the keys of power within the country. in a democracy, the most obvious key of power is the citizens, who won't reelect a politician that tries to act against their interests. but even in dictatorships, keeping the economy at least kind of functional is important, because when the citize... (read more)

2Joseph Miller
I think there's a spectrum between great man theory and structural forces theory and I would classify your view as much closer to the structural forces view, rather than a combination of the two. The strongest counter-example might be Mao. It seems like one man's idiosyncratic whims really did set the trajectory for hundreds of millions of people. Although of course as soon as he died most of the power vanished, but surely China and the world would be extremely different today without him.
2Thomas Kwa
Musk only owns 0.1% of the economic output of the US but he is responsible for more than this, including large contributions to
  • Politics
  • Space
    • SpaceX is nearly 90% of global upmass
    • Dragon is the sole American spacecraft that can launch humans to ISS
    • Starlink probably enables far more economic activity than its revenue
    • Quality and quantity of US spy satellites (Starshield has ~tripled NRO satellite mass)
    • Startup culture through the many startups from ex-SpaceX employees
  • Twitter as a medium of discourse, though this didn't change much
  • Electric cars probably sped up by ~1 year by Tesla, which still owns over half the nation's charging infrastructure
  • AI, including medium-sized effects on OpenAI and potential future effects through xAI
Depending on your reckoning I wouldn't be surprised if Elon's influence added up to >1% of Americans combined. This is not really surprising because a Zipfian relationship would give the top person in a nation of 300 million 5% of the total influence.
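(a quick check of that last figure, my arithmetic rather than anything stated in the comment: under Zipf, the rank-$k$ share of influence is $1/(k\,H_N)$ with $H_N = \sum_{k=1}^{N} 1/k \approx \ln N + \gamma$, so the top person's share in a nation of 300 million is roughly

$$\frac{1}{H_N} \approx \frac{1}{\ln(3\times 10^{8}) + 0.577} \approx \frac{1}{20.1} \approx 5\%.)$$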
leogao277

there are a lot of video games (and to a lesser extent movies, books, etc) that give the player an escapist fantasy of being hypercompetent. It's certainly an alluring promise: with only a few dozen hours of practice, you too could become a world class fighter or hacker or musician! But because becoming hypercompetent at anything is a lot of work, the game has to put its finger on the scale to deliver on this promise. Maybe flatter the user a bit, or let the player do cool things without the skill you'd actually need in real life. 

It's easy to dismiss... (read more)

1weightt an
The alternative is to pit people against each other in some competitive games, 1 on 1 or in teams. I don't think the feeling you get from such games is consistent with "being competent doesn't feel like being competent, it feels like the thing just being really easy", probably mainly because there is skill-level matching: there are always opponents who pose you a real challenge. Hmm, maybe such games need some more long-tail probabilistic matching, to sometimes feel the difference. Or maybe variable team sizes, with many incompetent people versus a few competent ones, to get a more "doomguy" feeling.
3trevor
"power fantasies" are actually a pretty mundane phenomenon given how human genetic diversity shook out; most people intuitively gravitate towards anyone who looks and acts like a tribal chief, or towards the possibility that you yourself or someone you meet could become (or already be) a tribal chief, via constructing some abstract route that requires forging a novel path instead of following other people's. Also a mundane outcome of human genetic diversity is how division of labor shakes out; people noticing they were born with savant-level skills and that they can sink thousands of hours into skills like musical instruments, programming, data science, sleight of hand party tricks, social/organizational modelling, painting, or psychological manipulation. I expect the pool to be much larger for power-seeking-adjacent skills than art, and that some proportion of that larger pool of people managed to get their skills's mental muscle memory sufficiently intensely honed that everyone should feel uncomfortable sharing a planet with them.
leogao7411

when i was new to research, i wouldn't feel motivated to run any experiment that wouldn't make it into the paper. surely it's much more efficient to only run the experiments that people want to see in the paper, right?

now that i'm more experienced, i mostly think of experiments as something i do to convince myself that a claim is correct. once i get to that point, actually getting the final figures for the paper is the easy part. the hard part is finding something unobvious but true. with this mental frame, it feels very reasonable to run 20 experiments for every experiment that makes it into the paper.

3Sheikh Abdur Raheem Ali
This is also because of Jevons' Paradox. As the cost of doing an experiment falls with experience, the number of experiments run tends to rise.
6Gunnar_Zarncke
What is often left out of papers is all of these experiments and the thought chains people had about them.
leogao468

libraries abstract away the low level implementation details; you tell them what you want to get done and they make sure it happens. frameworks are the other way around. they abstract away the high level details; as long as you implement the low level details you're responsible for, you can assume the entire system works as intended.
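(a toy sketch of the inversion of control being described here; the names are made up for illustration. with the library you keep the control flow and call in for the hard part; with the framework you hand over the control flow and just fill in the hook it promises to call.)

```python
import json

# library style: you own the loop, the library does the low-level work you ask for
def handle(raw: str) -> str:
    data = json.loads(raw)                    # "parse this for me"
    return data["name"].upper()

# framework style: the framework owns the loop, you supply the low-level piece
class Handler:
    def on_message(self, data: dict) -> str:  # hook the framework calls for you
        return data["name"].upper()

def framework_event_loop(handler: Handler, messages: list[str]) -> list[str]:
    # stand-in for the framework internals: parsing, dispatch, error handling
    return [handler.on_message(json.loads(m)) for m in messages]

print(handle('{"name": "ada"}'))                             # you drive
print(framework_event_loop(Handler(), ['{"name": "ada"}']))  # it drives
```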

a similar divide exists in human organizations and with managing up vs down. with managing up, you abstract away the details of your work and promise to solve some specific problem. with managing down, you abstract away the mis... (read more)

leogao34

the laws of physics are quite compact. and presumably most of the complexity in a zygote is in the dna.

leogao83

a thriving culture is a mark of a healthy and intellectually productive community / information ecosystem. it's really hard to fake this. when people try, it usually comes off weird. for example, when people try to forcibly create internal company culture, it often comes off as very cringe.

leogao3710

don't worry too much about doing things right the first time. if the results are very promising, the cost of having to redo it won't hurt nearly as much as you think it will. but if you put it off because you don't know exactly how to do it right, then you might never get around to it.

2Viliam
yep. doing it and then redoing it can still be much faster than procrastinating on it
leogao6039

the tweet is making fun of people who are too eager to do something EMPIRICAL and SCIENTIFIC and ignore the pesky little detail that their empirical thing actually measures something subtly but importantly different from what they actually care about

5RedMan
We won't let our lack of data stop us from running our analysis program!
leogao1711

i've changed my mind and been convinced that it's kind of a big deal that frontiermath was framed as something that nobody would have access to for hillclimbing when in fact openai would have access and other labs wouldn't. the undisclosed funding before o3 launch still seems relatively minor though

leogao1039

lol i was the one who taped it to the wall. it's one of my favorite tweets of all time

leogao14-14

this doesn't seem like a huge deal

Daniel Tan1111

am curious why you think this; it seems like some people were significantly misled and disclosure of potential conflicts-of-interest seems generally important

leogaoΩ9186

in retrospect, we know from chinchilla that gpt3 allocated its compute too much to parameters as opposed to training tokens. so it's not surprising that models since then are smaller. model size is a less fundamental measure of model cost than pretraining compute. from here on i'm going to assume that whenever you say size you meant to say compute.

obviously it is possible to train better models using the same amount of compute. one way to see this is that it is definitely possible to train worse models with the same compute, and it is implausible that the ... (read more)
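(a rough back-of-envelope for the chinchilla point above, using the standard C ≈ 6ND approximation and the ~20 tokens-per-parameter rule of thumb; exact coefficients vary and the numbers here are just illustrative.)

```python
def chinchilla_split(compute_flops: float, tokens_per_param: float = 20.0):
    """roughly compute-optimal (params, tokens) given C ~ 6*N*D and D ~ 20*N."""
    n_params = (compute_flops / (6 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# gpt-3's pretraining compute: ~175e9 params * 300e9 tokens * 6 ~ 3.1e23 FLOPs
n, d = chinchilla_split(3.1e23)
print(f"~{n/1e9:.0f}B params on ~{d/1e9:.0f}B tokens")  # ~51B params, ~1000B tokens
# i.e. for the same compute, a chinchilla-style split uses a much smaller model
# trained on many more tokens than gpt-3's 175B params / 300B tokens
```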

leogao50

suppose I believe the second coming involves the Lord giving a speech on capitol hill. one thing I might care about is how long until that happens. the fact that lots of people disagree about when the second coming is doesn't mean the Lord will give His speech soon.

similarly, the thing that I define as AGI involves AIs building Dyson spheres. the fact that other people disagree about when AGI is doesn't mean I should expect Dyson spheres soon.

tangerine100

The amount of contention says something about whether an event occurred according to the average interpretation. Whether it occurred according to your specific interpretation depends on how close that interpretation is to the average interpretation.

You can't increase the probability of getting a million dollars by personally choosing to define a contentious event as you getting a million dollars.

4Noosphere89
My response to this is to focus on when a Dyson Swarm is being built, not AGI, because it's easier to define the term less controversially. And a large portion of disagreements here fundamentally revolves around being unable to coordinate on what a given word means, which from an epistemic perspective doesn't matter at all, but it does matter from a utility/coordination perspective, where coordination is required for a lot of human feats.
leogao66

people disagree heavily on what the second coming will look like. this, of course, means that the second coming must be upon us

2tangerine
You’re kind of proving the point; the Second Coming is so vaguely defined that it might as well have happened. Some churches preach this. If the Lord Himself did float down from Heaven and give a speech on Capitol Hill, I bet lots of Christians would deride Him as an impostor.
leogao30

I agree that labs have more compute and more top researchers, and these both speed up research a lot. I disagree that the quality of responses is the same as outside labs, if only because there is lots of knowledge inside labs that's not available elsewhere. I think these positive factors are mostly orthogonal to the quality of software infrastructure.

leogao3915

some random takes:

  • you didn't say this, but when I saw the infrastructure point I was reminded that some people seem to have a notion that any ML experiment you can do outside a lab, you will be able to do more efficiently inside a lab because of some magical experimentation infrastructure or something. I think unless you're spending 50% of your time installing cuda or something, this basically is just not a thing. lab infrastructure lets you run bigger experiments than you could otherwise, but it costs a few sanity points compared to the small experiment
... (read more)
3Sheikh Abdur Raheem Ali
(responding only to the first point) It is possible to do experiments more efficiently in a lab because you have privileged access to top researchers whose bandwidth is otherwise very constrained. If you ask for help in Slack, the quality of responses tends to be comparable to teams outside labs, but the speed is often faster because the hiring process selects strongly for speed. It can be hard to coordinate busy schedules, but if you have a collaborator's attention, what they say will make sense and be helpful. People at labs tend to be unusually good communicators, so it is easier to understand what they mean during meetings, whiteboard sessions, or 1:1s. This is unfortunately not universal amongst engineers. It's also rarer for projects to be managed in an unfocused way leading to them fizzling out without adding value, and feedback usually leads to improvement rather than deadlock over disagreements.  Also, lab culture in general benefits from high levels of executive function. For instance, when a teammate says they spent an hour working on a document, you can be confident that progress has been made even if not all changes pass review. It's less likely that they suffered from writer's block or got distracted by a lower priority task. Some of these factors also apply at well-run startups, but they don't have the same branding, and it'd be difficult for a startup to e.g line up four reviewers of this calibre: https://assets.anthropic.com/m/24c8d0a3a7d0a1f1/original/Alignment-Faking-in-Large-Language-Models-reviews.pdf. I agree that (without loss of generality) the internal RL code isn't going to blow open source repos out of the water, and if you want to iterate on a figure or plot, that's the same amount of work no matter where you are even if you have experienced people helping you make better decisions. But you're missing that lab infra doesn't just let you run bigger experiments, it also lets you run more small experiments, because resourcing for compute/

I think safetywashing is a problem but from the perspective of an xrisky researcher it's not a big deal because for the audiences that matter, there are safetywashing things that are just way cheaper per unit of goodwill than xrisk alignment work - xrisk is kind of weird and unrelatable to anyone who doesn't already take it super seriously. I think people who work on non xrisk safety or distribution of benefits stuff should be more worried about this.

Weird it may be, but it is also somewhat influential among people who matter. The extended LW-sphere is not... (read more)

leogao20

I think this is probably true of you and people around you but also you likely live in a bubble. To be clear, I'm not saying why people reading this should travel, but rather what a lot of travel is like, descriptively.

leogao192

theory: a large fraction of travel is because of mimetic desire (seeing other people travel and feeling fomo / keeping up with the joneses), signalling purposes (posting on IG, demonstrating socioeconomic status), or mental compartmentalization of leisure time (similar to how it's really bad for your office and bedroom to be the same room).

this explains why in every tourist destination there are a whole bunch of very popular tourist traps that are in no way actually unique/comparatively-advantaged to the particular destination. for example: shopping, amusement parks, certain kinds of museums.

6Nina Panickssery
I used to agree with this but am now less certain that travel is mostly mimetic desire/signaling/compartmentalization (at least for myself and people I know, rather than more broadly). I think “mental compartmentalization of leisure time” can be made broader. Being in novel environments is often pleasant/useful, even if you are not specifically seeking out unusual new cultures or experiences. And by traveling you are likely to be in many more novel environments even if you are a “boring traveler”. The benefit of this extends beyond compartmentalization of leisure, you’re probably more likely to have novel thoughts and break out of ruts. Also some people just enjoy novelty.
1CstineSublime
What fraction would you say is genuinely motivated by "seeing and experiencing another culture"? I don't doubt that most travel is performative, but I also think most of the people I interact with seem to have different motivations and talk about things from their travels which are a world away from the Pulp Fiction beer in a McDonalds discussion.
leogao2217

ok good that we agree interp might plausibly be on track. I don't really care to argue about whether it should count as prosaic alignment or not. I'd further claim that the following (not exhaustive) are also plausibly good (I'll sketch each out for the avoidance of doubt because sometimes people use these words subtly differently):

  • model organisms - trying to probe the minimal sets of assumptions to get various hypothesized spicy alignment failures seems good. what is the least spoonfed demonstration of deceptive alignment we can get that is analogous me
... (read more)
5Leon Lang
Thanks for the list! I have two questions:
1. Can you explain how generalization of NNs relates to ELK? I can see that it can help with ELK (if you know a reporter generalizes, you can train it on labeled situations and apply it more broadly) or make ELK unnecessary (if weak to strong generalization perfectly works and we never need to understand complex scenarios). But I’m not sure if that’s what you mean.
2. How is goodhart robustness relevant? Most models today don’t seem to use reward functions in deployment, and in training the researchers can control how hard they optimize these functions, so I don’t understand why they necessarily need to be robust under strong optimization.
8johnswentworth
All four of those I think are basically useless in practice for purposes of progress toward aligning significantly-smarter-than-human AGI, including indirectly (e.g. via outsourcing alignment research to AI). There are perhaps some versions of all four which could be useful, but those versions do not resemble any work I've ever heard of anyone actually doing in any of those categories. That said, many of those do plausibly produce value as propaganda for the political cause of AI safety, especially insofar as they involve demoing scary behaviors. EDIT-TO-ADD: Actually, I guess I do think the singular learning theorists are headed in a useful direction, and that does fall under your "science of generalization" category. Though most of the potential value of that thread is still in interp, not so much black-box calculation of RLCTs.
leogao312

in capabilities, the most memetically successful things were for a long time not the things that actually worked. for a long time, people would turn their noses at the idea of simply scaling up models because it wasn't novel. the papers which are in retrospect the most important did not get that much attention at the time (e.g gpt2 was very unpopular among many academics; the Kaplan scaling laws paper was almost completely unnoticed when it came out; even the gpt3 paper went under the radar when it first came out.)

one example of a thing within prosaic alig... (read more)

If you're thinking mainly about interp, then I basically agree with what you've been saying. I don't usually think of interp as part of "prosaic alignment", it's quite different in terms of culture and mindset and it's much closer to what I imagine a non-streetlight-y field of alignment would look like. 90% of it is crap (usually in streetlight-y ways), but the memetic selection pressures don't seem too bad.

If we had about 10x more time than it looks like we have, then I'd say the field of interp is plausibly on track to handle the core problems of alignment.

leogao91

some concrete examples

  • "agi happens almost certainly within in the next few decades" -> maybe ai progress just kind of plateaus for a few decades, it turns out that gpqa/codeforces etc are like chess in that we only think they're hard because humans who can do them are smart but they aren't agi-complete, ai gets used in a bunch of places in the economy but it's more like smartphones or something. in this world i should be taking normie life advice a lot more seriously.
  • "agi doesn't happen in the next 2 years" -> maybe actually scaling current technique
... (read more)
leogao107

i think it's quite valuable to go through your key beliefs and work through what the implications would be if they were false. this has several benefits:

  • picturing a possible world where your key belief is wrong makes it feel more tangible and so you become more emotionally prepared to accept it.
  • if you ever do find out that the belief is wrong, you don't flinch away as strongly because it doesn't feel like you will be completely epistemically lost the moment you remove the Key Belief
  • you will have more productive conversations with people who disagree with
... (read more)
2Viliam
Making a list of your beliefs can be complicated. Recognizing the belief as a "belief" is the necessary first step, but the strongest beliefs (the ones it would be most useful to examine?) are probably transparent; they feel like "just how the world is". Then again, maybe listing all the strong beliefs would actually be useless, because the list would contain tons of things like "I believe that 2+2=4", and examining those would be mostly a waste of time. We want the beliefs that are strong but possibly wrong. But when you notice that they are "possibly wrong", you have already made the most difficult step; the question is how to get there.
4Daniel Tan
what are some of your key beliefs and what were the implications if they were false?
leogao80

there are two different modes of learning i've noticed.

  • top down: first you learn to use something very complex and abstract. over time, you run into weird cases where things don't behave how you'd expect, or you feel like you're not able to apply the abstraction to new situations as well as you'd like. so you crack open the box and look at the innards and see a bunch of gears and smaller simpler boxes, and it suddenly becomes clear to you why some of those weird behaviors happened - clearly it was box X interacting with gear Y! satisfied, you use your newf
... (read more)
3Daniel Tan
Seems to strongly echo Karpathy, in that top-down learning is most effective for building expertise https://x.com/karpathy/status/1325154823856033793?s=46&t=iz509DJpCAibJadbMh4TvQ
leogao31

there is always too much information to pay attention to. without an inexpensive way to filter, the field would grind to a complete halt. style is probably a worse thing to select on than even academia cred, just because it's easier to fake.

leogao9977

I'm sympathetic to most prosaic alignment work being basically streetlighting. However, I think there's a nirvana fallacy going on when you claim that the entire field has gone astray. It's easiest to illustrate what I mean with an analogy to capabilities.

In capabilities land, there were a bunch of old school NLP/CV people who insisted that there's some kind of true essence of language or whatever that these newfangled neural network things weren't tackling. The neural networks are just learning syntax, but not semantics, or they're ungrounded, or they don... (read more)

4eggsyntax
It's a bit tangential to the context, but this is a topic I have an ongoing interest in: what leads you to believe that the skeptics (in particular NLP people in the linguistics community) have shifted away from their previous positions? My impression has been that many of them (though not all) have failed to really update to any significant degree. Eg here's a paper from just last month which argues that we must not mistake the mere engineering that is LLM behavior for language understanding or production.

I think you have two main points here, which require two separate responses. I'll do them opposite the order you presented them.

Your second point, paraphrased: 90% of anything is crap, that doesn't mean there's no progress. I'm totally on board with that. But in alignment today, it's not just that 90% of the work is crap, it's that the most memetically successful work is crap. It's not the raw volume of crap that's the issue so much as the memetic selection pressures.

Your first point, paraphrased: progress toward the hard problem does not necessarily i... (read more)

leogao50

sure, the thing you're looking for is the status system that jointly optimizes for alignedness with what you care about, and how legible it is to the people you are trying to convince.

2habryka
(My guess is you meant to agree with that, but kind of the whole point of my comment was that the dimension that is more important than legibility and alignment with you is the buy-in your audience has for a given status system. Youtube is not very legible, and not that aligned, but for some audiences has very high buy-in.)
leogao*2314

a lot of unconventional people choose intentionally to ignore normie-legible status systems. this can take the form of either expert consensus or some form of feedback from reality that is widely accepted. for example, many researchers especially around these parts just don't publish in normal ML conferences at all, opting instead to depart into their own status systems. or they don't care whether their techniques can be used to make very successful products, or make surprisingly accurate predictions etc. instead, they substitute some alternative st... (read more)

1CstineSublime
What kind of changes or outcomes would you expect to see if people around these parts instead of publishing their work independently started trying to get it into traditional ML conferences and related publications?
9habryka
A thing that I often see happening when people talk about "normie-legible status systems" is that they gaslight themselves into believing that some status system that is extraordinarily legible, or that they are part of, is something that is consensus. Academia is the most intense example of this. Most people don't care that much about academic status! This also happens in the other direction. Youtube is a major source of status in much of the world, especially among young people, but is considered low-brow whenever people argue about this, and so people dismiss it. I also think people tend to do a fallacy of gray thing where if a status system is not maximally legible (like writing popular blogposts, or running a popular podcast, or making popular Youtube videos, or being popular on Twitter), they dismiss the status system as not real and "illegible". I think modeling the real status and reputation systems that are present in the world is important, but for example, trying to ascend the academic status hierarchy is a bad use of time and resources. It's extremely competitive, and not actually that influential outside of the academic bubble. It is in some fields better correlated with actual skills and integrity and intelligence, and so I still think it's a reasonable thing to consider, but I think most people are better placed to trade off a bit of legibility against a whole amount of net realness in status (this importantly does not mean your LW quick takes will be the thing that causes you to become world-renowned, I am not saying "just say smart things and the world will recognize you", I am saying "don't think that the most legible status systems, or the ones with the most mobs hunting dissenters from the status system, are the only real ways of gaining recognition in the world").
3Oliver Daniels
Two common failure modes to avoid when doing the legibly impressive things:
1. Only caring instrumentally about the project (decreases motivation)
2. Doing "net negative" projects
4Cole Wyeth
It's possible that this wouldn't work for everyone, but so far I am very satisfied working on a PhD on agent foundations (AIXI). There are a lot of complaints here about academic incentives, but mostly I just ignore them. Possibly this will eventually interfere with my academic career prospects, but in the meantime I get years to work on basically whatever I think is interesting and important, and at the end of it I can reasonably expect to end up with a PhD and a thesis I'm proud of, which seems like enough to land on my feet. Looks like the best of both worlds to me.

This comment seems to implicitly assume markers of status are the only way to judge quality of work. You can just, y'know, look at it? Even without doing a deep dive, the sort of papers or blog posts which present good research have a different style and rhythm to them than the crap. And it's totally reasonable to declare that one's audience is the people who know how to pick up on that sort of style.

The bigger reason we can't entirely escape "status"-ranking systems is that there's far too much work to look at it all, so people have to choose which information sources to pay attention to.

9Daniel Murfet
There is a passage from Jung's "Modern man in search of a soul" that I think about fairly often, on this point (p.229 in my edition)  
leogaoΩ372

simple ideas often require tremendous amounts of effort to make work.

leogao43

twitter is great because it boils down saying funny things to purely a problem of optimizing for funniness, and letting twitter handle the logistics of discovery and distribution. being e.g a comedian is a lot more work.

leogao82

corollary: oftentimes, when smart people say things that are clearly wrong, what's really going on is they're saying the closest thing in their frame that captures the grain of truth

leogao94

the world is too big and confusing, so to get anything done (and to stay sane) you have to adopt a frame. each frame abstracts away a ton about the world, out of necessity. every frame is wrong, but some are useful. a frame comes with a set of beliefs about the world and a mechanism for updating those beliefs.

some frames contain within them the ability to become more correct without needing to discard the frame entirely; they are calibrated about and admit what they don't know. they change gradually as we learn more. other frames work empirically but are a... (read more)

4quetzal_rainbow
"...you learn that there's three kinds of intellectuals. There's intellectuals that work in one frame. There's intellectuals that work in two frames. And there's intellectuals that change frames like you and I change clothes."
8Vladimir_Nesov
It's as efficient to work on many frames while easily switching between them. Some will be poorly developed, but won't require commitment and can anchor curiosity, progress on blind spots of other frames.
leogao30

it's (sometimes) also a mechanism for seeking domains with long positive tail outcomes, rather than low variance domains

leogao42

the financial industry is a machine that lets you transmute a dollar into a reliable stream of ~4 cents a year ~forever (or vice versa). also, it gives you a risk knob you can turn that increases the expected value of the stream, but also the variance (or vice versa; you can take your risky stream and pay the financial industry to convert it into a reliable stream or lump sum)
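(in formula form, my gloss of the standard perpetuity identity, taking ~4% as the assumed long-run rate of return:

$$\mathrm{PV} = \sum_{t=1}^{\infty} \frac{C}{(1+r)^t} = \frac{C}{r} \;\;\Rightarrow\;\; C = r \cdot \mathrm{PV} \approx 0.04 \times \$1 \approx 4 \text{ cents/year}.)$$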

leogao71

I think the most important part of paying for goods and services is often not the raw time saved, but the cognitive overhead avoided. for instance, I'd pay much more to avoid having to spend 15 minutes understanding something complicated (assuming there is no learning value) than 15 minutes waiting. so it's plausibly more costly to have to figure out the timetable, fare system, remembering to transfer, navigating the station, than the additional time spent in transit (especially applicable in a new unfamiliar city)

9Viliam
I guess it depends on the kind of work you do (and maybe whether you have ADHD). From my perspective, yes, attention is even more scarce than time or money, because when I get home from work, it feels like all my "thinking energy" is depleted, and even if I could somehow leverage the time or money for some good purpose, I am simply unable to do that. Working even more would mean that my private life would fall apart completely. And people would probably ask "why didn't he simply...?", and the answer would be that even the simple things become very difficult to do when all my "thinking energy" is gone. There are probably smart ways to use money to reduce the amount of "thinking energy" you need to spend in your free time, but first you need enough "thinking energy" to set up such a system. The problem is, the system needs to be flawless, because otherwise you still need to spend "thinking energy" to compensate for its flaws. EDIT: I especially hate things like the principal-agent problem, where the seemingly simple answer is: "just pay a specialist to do that, duh", but that immediately explodes to "but how can I find a specialist?" and "how can I verify that they are actually doing a good job?", which easily become just as difficult as the original problem I tried to solve.
3CstineSublime
I wasn't asking how most people go about determining which goods or services to pay for generally, but rather if you're noticing that they are using the working hours by salary equation to determine what their time is worth, if it's to put a dollar figure on what they do in fact value it at (and that isolates the time element from the effort or cognitive load element). I didn't specify nor imply that one route took more cognitive load than the other, only that one was quicker than the other, and that differential would be one such way of revealing the value of time. (Otherwise they're not, in fact, trying to ascertain what their time is worth at all... but something else.) Nowadays using Public Transport is often no more complicated or takes no more effort than using Uber thanks to Google Maps, but this tangent is immaterial to my question: are you noticing these people are trying to measure how much they DO value their time, or are they trying to ascertain how much they SHOULD value their time?
leogao1411

agree it goes in both directions. time when you hold critical context is worth more than time when you don't. it's probably at least sometimes a good strategy to alternate between working much more than sustainable and then recovering.

my main point is this is a very different style of reasoning than what people usually do when they talk about how much their time is worth.

leogao5333

people around these parts often take their salary and divide it by their working hours to figure out how much to value their time. but I think this actually doesn't make that much sense (at least for research work), and often leads to bad decision making.

time is extremely non fungible; some time is a lot more valuable than other time. further, the relation of amount of time worked to amount earned/value produced is extremely nonlinear (sharp diminishing returns). a lot of value is produced in short flashes of insight that you can't just get more of by spen... (read more)

1CstineSublime
Are these people trying to determine how much they (subjectively) value their time or how much they should value their time? Because I think if it's the former and Descriptive, wouldn't the obvious approach be to look at what time-saving services they have employed recently or in the past and see how much they have paid for them relative to how much time they saved? I'm referring to services or products where they could have done it themselves as they have the tools, abilities and freedom to commit to it, but opted to buy a machine or outsource the task to someone else. (I am aware that the hidden variable of 'effort' complicates this model). For example, in what situations will I walk or take public transport to get somewhere, and which ones will I order an Uber: There's a certain cross-over point where if the time-saved is enough I'll justify the expense to myself, which would seem to be a good starting point for evaluating in descriptive terms how much I value my time. I'm guessing if you had enough of these examples where the effort-saved was varied enough then you'd begin to get a more accurate model of how one values their time?
habryka299

but actually diminishing returns means one more hour on the margin is much less valuable than the average implies

This importantly also goes in the other direction!

One dynamic I have noticed people often don't understand is that in a competitive market (especially in winner-takes-all-like situations) the marginal returns to focusing more on a single thing can be sharply increasing, not only decreasing.

In early-stage startups, having two people work 60 hours is almost always much more valuable than having three people work 40 hours. The costs of growing a te... (read more)

leogao60

I'd be surprised if this were the case. next neurips I can survey some non native English speakers to see how many ML terms they know in English vs in their native language. I'm confident in my ability to administer this experiment on Chinese, French, and German speakers, which won't be an unbiased sample of non-native speakers, but hopefully still provides some signal.

leogao60

only 2 people walked away without answering (after saying yes initially); they were not counted as yes or no. another several people refused to even answer, but this was also quite rare. the no responders seemed genuinely confused, as opposed to dismissive.

feel free to replicate this experiment at ICML or ICLR or next neurips.

1lewis smith
i mean i think that it's definitely an update (anything short of 95% i think would have been quite surprising to me)
leogao60

not sure, i didn't keep track of this info. an important data point is that because essentially all ML literature is in english, non-anglophones generally either use english for all technical things, or at least codeswitch english terms into their native language. for example, i'd bet almost all chinese ML researchers would be familiar with the term CNN and it would be comparatively rare for people to say 卷积神经网络. (some more common terms like 神经网络 or 模型 are used instead of their english counterparts - neural network / model - but i'd be shocked if people di... (read more)

leogao150

the specific thing i said to people was something like:

excuse me, can i ask you a question to help settle a bet? do you know what AGI stands for? [if they say yes] what does it stand for? [...] cool thanks for your time

i was careful not to say "what does AGI mean".

most people who didn't know just said "no" and didn't try to guess. a few said something like "artificial generative intelligence". one said "amazon general intelligence" (??). the people who answered incorrectly were obviously guessing / didn't seem very confident in the answer. 

if they see... (read more)

3lewis smith
not to be 'i trust my priors more than your data', but i have to say that i find the AGI thing quite implausible; my impression is that most AI researchers (way more than 60%), even ones working in like something very non-deep learning adjacent, have heard of the term AGI, but many of them are/were quite dismissive of it as an idea or associate it strongly (not entirely unfairly) with hype/bullshit, hence maybe walking away from you when you ask them about it. e.g deepmind and openAI have been massive producers of neurips papers for years now (at least since I started a phd in 2016), and both organisations explicitly talked about AGI fairly often for years. maybe neurips has way more random attendees now (i didn't go this year), but I still find this kind of hard to believe; I think I've read about AGI in the financial times now.
leogao810

I decided to conduct an experiment at neurips this year: I randomly surveyed people walking around in the conference hall to ask whether they had heard of AGI

I found that out of 38 respondents, only 24 could tell me what AGI stands for (63%)

we live in a bubble

(https://x.com/nabla_theta/status/1869144832595431553)

3Eli Tyre
Was this possibly a language thing? Are there Chinese or Indian machine learning researchers who would use a different term than AGI in their native language?
2Nathan Helm-Burger
I'd be curious to hear some of the guesses people make when they say they don't know.
4lc
I think if I got asked randomly at an AI conference if I knew what AGI was I would probably say no, just to see what the questioner was going to tell me.
6Daniel Kokotajlo
Very interesting! Those who couldn't tell you what AGI stands for -- what did they say? Did they just say "I don't know" or did they say e.g. "Artificial Generative Intelligence...?" Is it possible that some of them totally HAD heard the term AGI a bunch, and basically know what it means, but are just being obstinate? I'm thinking of someone who is skeptical of all the hype and aware the lots of people define AGI differently. Such a person might respond to "Can you tell me what AGI means" with "No I can't (because it's a buzzword that means different things to different people)"
9Eric Neyman
What's your guess about the percentage of NeurIPS attendees from anglophone countries who could tell you what AGI stands for?
leogaoΩ9154

I'm very excited about approaches to add hierarchy to SAEs - seems like an important step forward. In general, approaches that constrain latents in various ways that let us have higher L0 without reconstruction becoming trivial seem exciting.

I think it would be cool to get follow up work on bigger LMs. It should also be possible to do matryoshka with block size = 1 efficiently with some kernel tricks, which would be cool.
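(for readers who haven't seen the setup: a minimal sketch of the matryoshka prefix-loss idea as i understand it, naive version with no kernel tricks; all names and sizes here are illustrative. block size = 1 corresponds to taking every prefix length, which is why a fused kernel would help.)

```python
import torch

def matryoshka_sae_loss(x, W_enc, b_enc, W_dec, b_dec, prefix_sizes):
    """naive matryoshka SAE loss: reconstruct x from each nested prefix of latents."""
    z = torch.relu(x @ W_enc + b_enc)          # [batch, n_latents]
    loss = 0.0
    for k in prefix_sizes:                     # e.g. [64, 256, 1024, n_latents]
        x_hat = z[:, :k] @ W_dec[:k] + b_dec   # decode from only the first k latents
        loss = loss + (x - x_hat).pow(2).mean()
    return loss / len(prefix_sizes)

d_model, n_latents = 64, 512
x = torch.randn(8, d_model)
W_enc, b_enc = 0.02 * torch.randn(d_model, n_latents), torch.zeros(n_latents)
W_dec, b_dec = 0.02 * torch.randn(n_latents, d_model), torch.zeros(d_model)
print(matryoshka_sae_loss(x, W_enc, b_enc, W_dec, b_dec, [32, 128, 512]))
```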

3Noa Nabeshima
Yes, follow up work with bigger LMs seems good! I use number of prefix-losses per batch = 10 here; I tried 100 prefixes per batch and the learned latents looked similar at a quick glance, so I wonder if naively training with block size = 1 might not be qualitatively different. I'm not that sure and training faster with kernels on its own seems good also! Maybe if you had a kernel for training with block size = 1 it would create surface area for figuring out how to work on absorption when latents are right next to each other in the matryoshka latent ordering.
leogao52

I won't claim to be immune to peer pressure but at least on the epistemic front I think I have a pretty legible track record of believing things that are not very popular in the environments I've been in.

leogao90

a medium with less limitations is strictly better for making good art, but it's also harder to identify good art among the sea of bad art because the medium alone is no longer as good a signal of quality

leogao50

to be clear, a "winter/slowdown" in my typology is more about the vibes and could only be a few years counterfactual slowdown. like the dot-com crash didn't take that long for companies like Amazon or Google to recover from, but it was still a huge vibe shift

leogao50

also to further clarify this is not an update I've made recently, I'm just making this post now as a regular reminder of my beliefs because it seems good to have had records of this kind of thing (though everyone who has heard me ramble about this irl can confirm I've believed something like this for a while now)

leogao2310

people often say that limitations of an artistic medium breed creativity. part of this could be the fact that when it is costly to do things, the only things done will be higher effort

2Noosphere89
This seems the likely explanation for any claim that constraints breed creativity/good things in a field, when the expectation is that the opposite outcome would occur.
5TsviBT
Yes, but this also happens within one person over time, and the habit (of either investing, or not, in long-term costly high-quality efforts) can gain steam in the one person.
leogao3416

also a lot of people will suggest that alignment people are discredited because they all believed AGI was 3 years away, because surely that's the only possible thing an alignment person could have believed. I plan on pointing to this and other statements similar in vibe that I've made over the past year or two as direct counter evidence against that

(I do think a lot of people will rightly lose credibility for having very short timelines, but I think this includes a big mix of capabilities and alignment people, and I think they will probably lose more credibility than is justified because the rest of the world will overupdate on the winter)

4the gears to ascension
I was someone who had shorter timelines. At this point, most of the concrete part of what I expected has happened, but the "actually AGI" thing hasn't. I'm not sure how long the tail will turn out to be. I only say this to get it on record.
8Jozdien
My timelines are roughly 50% probability on something like transformative AI by 2030, 90% by 2045, and a long tail afterward. I don't hold this strongly either, and my views on alignment are mostly decoupled from these beliefs. But if we do get an AI winter longer than that (through means other than by government intervention, which I haven't accounted for), I should lose some Bayes points, and it seems worth saying so publicly.