All of Garrett Baker's Comments + Replies

He has mentioned the phrase a bunch. I haven’t looked through enough of these links to form an opinion though.

Thank you for this search. Looking at the results, the top 3 are by commenters.

Then one about not thinking a short book could be this good.

I don't think this is Cowen actually saying he made a wrong prediction, just using it to express how the book is unexpectedly good at covering a topic that might normally take longer, though I'm happy to hear why I'm wrong here.

Another commenter:

Another commenter:

Ending here for now; there don't seem to be any real instances yet of Tyler Cowen saying he was wrong about something he thought was true.

wary of some kind of meme poisoning

I can think of reasons why some would be wary, and am wary of something which could be called “meme poisoning” myself when I watch movies, but am curious what kind of meme poisoning you have in mind here.

You can destroy others’ value intentionally, but only in extreme circumstances where you’re not thinking right or have self-destructive tendencies can you “intentionally” destroy your own value. But then we hardly describe the choices such people make as “intentional”. Eg the self-destructive person doesn’t “intend” to lose their friends by not paying back borrowed money. And those gambling at the casino, despite not thinking right, can’t be said to “intend” to lose all their money, though they “know” the chances they’ll succeed.

To complete your argument, ‘and therefore the action has some deadweight loss associated with it, meaning it's destroying value’.

But note that by the same logic, any economic activity destroys value, since you are also not homo economicus when you buy ice cream, and there will likely be smarter things you can do with your money, or better deals. Therefore buying ice cream, or doing anything else, destroys value.

But that is absurd, and we clearly don’t have so broad a definition of “destroy value”. So your argument proves too much.

If you are destroying something you own, you would value the destruction of that thing more than any other use you have for that thing and any price you could sell it for on the market, so this creates value in the sense that there is no deadweight loss to the relevant transactions/actions.
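A minimal way to formalize that claim (the notation below is mine, not anything from the thread): write $v_d$ for the value I place on destroying the thing, $v_u$ for the value of my best alternative use of it, and $p$ for the best price I could sell it for. Choosing destruction reveals

$$v_d \ge \max(v_u,\ p),$$

so by my own lights no surplus is left on the table, and in that sense there is no deadweight loss from the action (setting aside how much other people value the thing, which is the point of the reply below).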

2the gears to ascension
You might not value the destruction as much as others valued the thing you destroyed. In other words, you're assuming homo economicus, I'm not.
4Viliam
This sounds like by definition value cannot be destroyed intentionally.

An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average.

The studies which are usually cited in support of this effect show nowhere near this level of hedonic treadmill. I suggest you read them (you can probably ask Claude for the recs here).

Just buy something with negative externalities. Eg invest in the piracy stock exchange.

The Marubo, a 2,000-member tribe in the Brazilian Amazon, recently made headlines: they received Starlink in September 2023. Within 9 months, their youth stopped learning traditional body paint and jewelry making, young men began sharing pornographic videos in group chats (in a culture that frowns on public kissing), leaders observed "more aggressive sexual behavior," and children became addicted to short-form video content. One leader reported: "Everyone is so connected that sometimes they don't even talk to their own family."

Note that the tribe in que... (read more)

Of course this is now used as an excuse to revert any recent attempts to improve the article.

From reading the relevant talk page, it is pretty clear that those arguing against the changes on these grounds aren’t exactly doing so in good faith, and if they did not have this bit of ammunition they would use something else, though then with fewer detractors (since clearly nobody else followed or cared about that page).

any shocking or surprising result in your own experiment is 80% likely to be a bug until proven otherwise. your first thought should always be to comb for bugs.

I will add: 80% likely to be a bug, or a result from random-matrix theory.

A post is coming! I will get future posts out earlier, sorry.

Yeah, sorry, I think I had my very negative reaction mostly because of the title. Looking at the other comments & the upvotes on the article (which are not mine), clearly my view here is not representative of the rest of LessWrong.

I think the disconnect might be what you want this article to be (a guide to baldness) vs what it is (the author's journey and thoughts around baldness mixed with gallows humor regarding his potential fate) [...] That said, setting audience expectations is also an important part of writing, and I think "A Rationalist's Guide to Male Pattern Baldness" is probably setting the wrong expectation.

Yeah, good point. I agree, I think this is the problem.

The Norword/forest comparison gets used consistently throughout (including the title) and is the only metap

... (read more)

Hm, I read "A rationalist's guide to male pattern baldness" and thought the post would be in the same genre as other LessWrong "guide" articles, and much less a personal blog post. Most centrally the guide to rationalist interior decorating. I read the article with this expectation, and felt I mostly got duped. If it was for example just called "Escaping the Jungles of Norwood", I just wouldn't have read it, but if I did I would not be as upset.

I still maintain my criticism holds even for a personal blog post, but would not have strong downvoted it, just... (read more)

There is too much fluff in this article, and it reads like AI slop. Maybe it is, but in either case this is because the fluff and the content do not support each other. Taking an example quote:

Before you can escape a jungle, you have to know its terrain. The Norwood-Hamilton scale is the cartographer’s map for male pattern baldness (androgenetic alopecia, or AGA), dividing male scalps into a taxonomy of misery, from Norwood I (thick, untamed rainforest) to Norwood VII (Madagascar, post-deforestation). It’s less a continuum and more a sequence of discrete w

... (read more)
4Brendan Long
I read the whole thing and disagree. I think the disconnect might be what you want this article to be (a guide to baldness) vs what it is (the author's journey and thoughts around baldness mixed with gallows humor regarding his potential fate). The Norword/forest comparison gets used consistently throughout (including the title) and is the only metaphor used this way. Whether you like this comparison or not, it's not a case of AI injecting constant flowery language. That said, setting audience expectations is also an important part of writing, and I think "A Rationalist's Guide to Male Pattern Baldness" is probably setting the wrong expectation. I upvoted since I thought it was interesting and I learned a little bit.
4AlphaAndOmega
I'm quite confused by a lot of the criticism here. A lot of the verbosity is simply my style, as well as an intentional attempt to inject levity into what can otherwise be a grim/dry subject.  Like, I'm genuinely at a loss. I just thought that the pun Nor-*wood* was cute, and occasionally worth alluding to.  I wrote the post with the background assumption that most prospective readers would be passingly familiar with the Norwood scale, though I agree that I should have thrown in a picture.  But arguing about an obvious metaphor? Really? At the end of the day, hair strands are discrete entities, you don't have fewer even after a haircut. You can assume the Norwood scale is a simplification if there's not 100k stages in it.  I have no control over your ability to strong downvote, but I can register my disagreement that it was remotely warranted. 

Of course it is, I did not think otherwise, but my point stands.

2tailcalled
No it doesn't. I obviously understood my old posts (and still do - the posts make sense if I imagine ignoring LDSL). So I'm capable of understanding whether I've found something that reveals problems in them. It's possible I'm communicating LDSL poorly, or that you are too ignorant to understand it, or that I'm overestimating how broadly it applies, but those are far more realistic than that I've become a pure crank. If you still prefer my old posts to my new posts, then I must know something relevant you don't know.

And to some extent you know that your heuristics for identifying cranks is not going to solely pop out at people who are forever lost to crankdom because you haven't just abandoned the conversation.

I liked your old posts and your old research and your old ideas. I still have some hope you can reflect on the points you’ve made here, and your arguments against my probes, and feel a twinge of doubt, or motivation, pull on that a little, and end up with a worldview that makes predictions, lets you have & make genuine arguments, and gives you novel ideas.

If you were always lazy, I wouldn’t be having this conversation, but once you were not.

2tailcalled
A lot of my new writing is as a result of the conclusions of or in response to my old research ideas.

"Why are adversarial eamples so easy to find?" is a problem that is easily solvable without my model. You can't solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving.

What is the solution then?

AFAIK epidemiologists usually measure particular diseases and focus their models on those, whereas LDSL would more be across all species of germs.

I would honestly be interested in any concrete model you build based on this. You don't necessarily have to compare it against some other field's existing model, though it does help for credibility's sake. But I would like to at least be able to compare the model you make against data.

I'm also not sure this is true about epidemiologists, and if it is I'd guess it's true to the extent that they have like 4 diffe... (read more)

2tailcalled
The most central aspect of my model is to explain why it's generally not relevant to fit quantitative models to data. Each disease (and even different strands of the same disease and different environmental conditions for the same strand) has its own parameters, but they don't fit a model that contains all the parameters of all diseases at once, they just focus on one disease at a time.

Before, you said

"How do we interpret the inner-workings of neural networks." is not a puzzle unless you get more concrete an application of it. For instance an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.

Which seems to imply you (at least 3 hours ago) believed your theory could handle relatively well-formulated and narrow "input/output pair" problems. Yet now you say

You just keep on treating it like the narrow domain-specific models count as competition when they

... (read more)
-2tailcalled
The relevance of zooming in on particular input/output problems is part of my model. "Why are adversarial eamples so easy to find?" is a problem that is easily solvable without my model. You can't solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving. "Why are people so insistent about outliers?" is not vague at all! It's a pretty specific phenomenon that one person mentions a general theory and then another person says it can't be true because of their uncle or whatever. The phrasing in the heading might be vague because headings are brief, but I go into more detail about it in the post, even linking to a person who frequently struggles with that exact dynamic. As an aside, you seem to be trying to probe me for inconsistencies and contradictions, presumably because you've written me off as a crank. But I don't respect you and I'm not trying to come off as credible to you (really I'm slightly trying to come off as non-credible to you because your level of competence is too low for this theory to be relevant/good for you). And to some extent you know that your heuristics for identifying cranks is not going to solely pop out at people who are forever lost to crankdom because you haven't just abandoned the conversation. Theories of everything that explain why intelligence can't model everything and you need other abilities.

Let's go through your sequence, shall we? And enumerate the so-called "concrete examples" you list

[LDSL#0] Some epistemological conundrums

Here you ask a lot of questions, approximately each of the form "why do 'people' think <thing-that-some-people-think-but-certainly-not-all>?". To list a few,

Why are people so insistent about outliers?

Seems to have a good answer. Sometimes they're informative!

Why isn’t factor analysis considered the main research tool?

Seems also to have a good answer: it is easy to fool yourself if you do it improperly.

How can pr

... (read more)
1tailcalled
This post has the table example. That's probably the most important of all the examples. That's accounting, not statistics. AFAIK epidemiologists usually measure particular diseases and focus their models on those, whereas LDSL would more be across all species of germs. There is basically no competition. You just keep on treating it like the narrow domain-specific models count as competition when they really don't because they focus on something different than mine.

It sounds like, as I predicted, your theory doesn't apply to the problems I presented, so how about you provide an example

-8tailcalled

I do think I'm "good at" AI, and I think many who are "good at" AI are also pretty confused here.

2tailcalled
I don't really care what you think.

Ok, then why do AI systems have so many adversarial examples? I have no formal model of this, though it plausibly makes some intuitive sense.

2tailcalled
... can you pick some topic that you are good at instead of focusing on AI? That would probably make the examples more informative.

I don't think you're using the actual arguments I presented in the LDSL series to evaluate my position.

I remember reading LDSL and not buying the arguments! At the time, I deeply respected you and your thinking, and thought "oh well I'm not buying these arguments, but surely if they're as useful as they say, tailcalled will apply them to various circumstances and that will be pie on my face, and in that circumstance I should try to figure out why I was mistaken". But then you didn't, and you started vague-posting constantly, and now we're here and you're g... (read more)

It would, but you didn't ask for such a thing. Are you asking for such a thing now? If so, here is one in AI, which is on everyone's minds: How do we interpret the inner-workings of neural networks.

I expect, though, that you will say your theory isn't applicable here for whatever reason. Therefore it would be helpful if you gave me an example of what sort of puzzle your theory is applicable to.

0tailcalled
"How do we interpret the inner-workings of neural networks." is not a puzzle unless you get more concrete an application of it. For instance an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.

I don't care who calls themselves what; complexity science calls itself anti-reductionist, and I don't dismiss them. Therefore I can't dismiss people just because they call themselves anti-reductionist; I must use their actual arguments to evaluate their positions.

I will also say that appealing to the community's intrinsic bias, and claiming I've made arguments I haven't or have positions I don't, is not doing much to make me think you less of a crank.

0tailcalled
I'm not saying you're dismissing me because I call myself anti-reductionist, I'm saying you're dismissing me because I am an anti-reductionist. I don't think you're using the actual arguments I presented in the LDSL series to evaluate my position.

I never said that, I am asking you for solutions to any puzzle of your choice! You're just not giving me any!

Edit: I really honestly don't know where you got that impression, and it kinda upsets me you seemingly just pulled that straight out of thin air.

0tailcalled
Wouldn't it be more impressive if I could point you to a solution to a puzzle you've been stuck on than if I present my own puzzle and give you the solution to that?

I am not dismissing you because of your anti-reductionism! Where did I say that? Indeed, I have been known to praise some "anti-reductionist" theories--fields even!

I'm dismissing you because you can't give me examples of where your theory has been concretely useful!

-1tailcalled
If you don't have any puzzles within Economics/Sociology/Biology/Evolution/Psychology/AI/Ecology where it would be useful with a more holistic theory, then it's not clear why I should talk to you.
0tailcalled
You praise someone who wants to do agent-based models, but agent-based models are a reductionistic approach to the field of complexity science, so this sure seems to prove my point. (I mean, approximately all of the non-reductionistic approaches to the field of complexity science are bad too.)

idk what to say, this is just very transparently an excuse for you to be lazy here, and clearly crank-talk/cope.

0tailcalled
More specifically, my position is anti-reductionist, and rationalist-empiricist-reductionists dismiss anti-reductionists as cranks. As long as you are trying to model whether I am that and then dismiss me if you find I am, it is a waste of time to try to communicate my position to you.

Ok, I will first note that this is different from what you said previously. Previously, you said “probing for whether rationalists will get the problem if framed in different ways than the original longform”, but now you say “I'm trying to probe the obviousness of the claims”. It’s good to note when such switches occur.

Second, you should stop making lazy posts with no arguments, regardless of the reasons. You can get just as much, and probably much more, information through making good posts; there is not a tradeoff here. In fact, if you try to explain why y... (read more)

I do not think I could put my response here better than Said did 7 years ago on a completely unrelated post, so I will just link that.

-11tailcalled

Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.

Then everything you say will seem either trivial or absurd because you don’t give arguments! Please post arguments for your claims!

2tailcalled
But that would probe the power of the arguments whereas really I'm trying to probe the obviousness of the claims.

This is the laziness I’m talking about! Do you really not understand why it would be to your theory-of-everything’s credit to have some, any, any at all, you know, actual use?

How suspicious is it that when I ask for explicit concrete examples, you explain that your theory is not really about particular examples, despite the fact that if your vague-posting is indeed applying your theory of everything to particular examples, we can derive the existence of circumstances you believe your theory models well?

And that excuse being that it's good at deciding what to make... (read more)

0tailcalled
I can think of reasons why you'd like to know what theories would be smart to make using this framework, e.g. so you can make those theories instead of bothering to learn the framework. However, that's not a reason it would be good for me to share it with you, since I think that'd just distract you from the point of my theory.

Then give those examples!

Edit: and also back up those examples by actually making the particular model, and demonstrate why such models are so useful through means decorrelated with your original argument.

-7tailcalled

This may well be true (though I think not), but what is your argument for not even linking to your original posts? Or for how often you don’t explain yourself, even on completely unrelated subjects? My contention is that you are not lazily trying on a variety of different reframings of your original arguments or conclusions to see what sticks, and are instead just lazy.

2tailcalled
I don't know of anyone who seems to have understood the original posts, so I kinda doubt people can understand the point of them. Plus often what I'm writing about is a couple of steps removed from the original posts. Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.

So it sounds like your general theory has no alpha over narrow theories. What, then, makes it any good? Is it just that it's broad enough to badly model many systems? Then it sounds useful in every case where we can’t make any formal predictions yet, and you should give those examples!

This sounds like a bad excuse not to do the work.

0tailcalled
It's mainly good for deciding what phenomena to make narrow theories about.

It sounds like there’s an implied “and therefore we have no influence over such discussions”. If so, then what are we arguing for? What does it matter if Julian Bradshaw and others think animals being left out of the CEV makes it a bad alignment target?

In either case, I don’t think we will only hear about these discussions after they’re finalized. The AI labs are currently aligning and deploying (internally and externally) their AI models through what is likely to be the same process they use for ‘the big one’. Those discussions are these discussions, and we are hearing about them!

3Buck
I wasn't arguing about this because I care what Julian advocates for in a hypothetical global referendum on CEV, I was just arguing for the usual reason of wanting to understand things better and cause others to understand them better, under the model that it's good for LWers (including me) to have better models of important topics. My guess is that the situation around negotiations for control of the long run future will be different.

Yeah, I think those were some of your last good posts / first bad posts.

rationalists will get the problem if framed in different ways than the original longform.

Do you honestly think that rationalists will suddenly get your point if you say

I don't think RL or other AI-centered agency constructions will ever become very agentic.

with no explanation or argument at all, or even a link to your sparse lognormals sequence?

Or what about

Ayn Rand's book "The Fountainhead" is an accidental deconstruction of patriarchy that shows how it is fractally terrible

... (read more)
2tailcalled
I think this is the crux. To me after understanding these ideas, it's retroactively obvious that they are modelling all sorts of phenomena. My best guess is that the reason you don't see it is that you don't see the phenomena that are failing to be modelled by conventional methods (or at least don't understand how those phenomena related to the birds-eye perspective), so you don't realize what new thing is missing. And I can't easily cure this kind of cluelessness with examples, because my theories aren't necessary if you just consider a single very narrow and homogenous phenomenon as then you can just make a special-built theory for that.

The answer is not obviously “Less Wrong”, which is alarming.

Why alarming? I don't think LessWrong is the hub for any one sort of feedback, but on balance it seems like a good source of feedback. Certainly Said & his approach isn't the best possible response in every circumstance; I'm sure even he would agree with that, even if he thinks there should be more of it.

4Three-Monkey Mind
Because it used to be the obvious place to post something rationality-related where one could get good critical feedback, up to and including “you’re totally wrong, here’s why” or “have you considered…?” (where considering the thing totally invalidates or falsifies the idea I was trying to put forward).

Yeah, reflecting a bit, I think my true objection is your parenthetical, because I’m convinced by your first paragraph’s logic.

2tailcalled
The thing about slop effects is that my updates (attempted to be described e.g. here https://www.lesswrong.com/s/gEvTvhr8hNRrdHC62 ) makes huge fractions of LessWrong look like slop to me. Some of the increase in vagueposting is basically lazy probing for whether rationalists will get the problem if framed in different ways than the original longform.

I think you have a more general point, but I think it only really applies if the person making the post can back up their claim with good reasoning at some point, or will actually end up creating the room for such a discussion. Tailcalled has, in recent years, been vagueposting more and more, and I don't think they or their post will serve as a good steelman or place to discuss real arguments against the prevailing consensus.

Eg see their response to Noosphere's thoughtful comment.

6Vladimir_Nesov
My point doesn't depend on ability or willingness of the original poster/commenter to back up or clearly make any claim, or even participate in the discussion, it's about their initial post/comment creating a place where others can discuss its topic, for topics where that happens too rarely for whatever reason. If the original poster/commenter ends up fruitfully participating in that discussion, even better, but that is not necessary, the original post/comment can still be useful in expectation. (You are right that tailcalled specifically is vagueposting a nontrivial amount, even in this thread the response to my request for clarification ended up unclear. Maybe that propensity crosses the threshold for not ignoring the slop effect of individual vaguepostings in favor of vague positive externalities they might have.)

I think the point of the "weigh the preference of everyone in the world equally" position here is not in spite of, but because of, the existence of powerful actors who will try to skew the decision such that they or their group have maximal power. We (you and me) would rather this not happen, and I at least would like to team up with others who would rather this not happen, and we and those others have the greatest chance of slapping down those trying to take over the world by advocating for the obvious. That is, by advocating that we should all be equal.

If the v... (read more)

3Buck
My guess is that neither of us will hear about any of these discussions until after they're finalized.

I think you're comparing the goals of past LessWrong to the goals of present LessWrong. I don't think present LessWrong really has the goal of refining the art of rationality anymore. Or at least, it has lost interest in developing the one meta-framework to rule them all, and gained much more interest in applying rationality & scholarship to interesting & niche domains, and seeing what generalizable heuristics it can learn from those. Most commonly AI, but look no further than the curated posts to find other examples. To highlight a few:

... (read more)

there appears to be no way for me to neutrally discuss these doubts with a psychiatrist

Why not discuss openly with one psychiatrist (or therapist!), then choose another to exaggerate to if you decide to experiment?

Also, note that I don’t think psychiatrists are particularly averse to experimenting with drugs that have few long-term consequences or risks.

@Lucius Bushnaq I'm curious why you disagree

i think that the non-strawman versions of the sjw takes listed are all actually geninely really interesting and merit at least some consideration. ive been reading up on local indigenous history recently and it's the most fascinating topic i've rabbit holed in on in ages.

I am interested in what/who you recommend reading here.

It underestimates the effect of posttraining. I think the simulator lens is very productive when thinking about base models but it really struggles at describing what posttraining does to the base model. I talked to Janus about this a bunch back in the day and it’s tempting to regard it as “just” a modulation of that base model that upweights some circuits and downweights others. That would be convenient because then simulator theory just continues to apply, modulo some affine transformation.

To be very clear here, this seems straightforwardly false. The en... (read more)

2Garrett Baker
@Lucius Bushnaq I'm curious why you disagree

There will be! I haven’t decided on the readings yet though, sorry.

3Lorxus
Got it. See you Tuesday!

My guess is the people asking such questions really mean "why don't I win more, despite being a rationalist", and their criticisms make much more sense as facts about them, or mistakes they've made, which they blame for holding them back from winning.
