wary of some kind of meme poisoning
I can think of reasons why some would be wary, and am wary of something which could be called “meme poisoning” myself when I watch movies, but am curious what kind of meme poisoning you have in mind here.
You can destroy others’ value intentionally, but only in extreme circumstances where you’re not thinking right or have self-destructive tendencies can you “intentionally” destroy your own value. But then we hardly describe the choices such people make as “intentional”. Eg the self-destructive person doesn’t “intend” to lose their friends by not paying back borrowed money. And those gambling at the casino, despite not thinking right, can’t be said to “intend” to lose all their money, though they “know” the chances they’ll succeed.
To complete your argument: ‘and therefore the action has some deadweight loss associated with it, meaning it’s destroying value’.
But note that by the same logic, any economic activity destroys value, since you are also not homo economicus when you buy ice cream, and there will likely be smarter things you could do with your money, or better deals. Therefore buying ice cream, or doing anything else, destroys value.
But that is absurd, and we clearly don’t have so broad a definition of “destroy value”. So your argument proves too much.
If you are destroying something you own, you would value the destruction of that thing more than any other use you have for that thing and any price you could sell it for on the market, so this creates value in the sense that there is no deadweight loss to the relevant transactions/actions.
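To make that concrete, here is a minimal worked example (the numbers are purely illustrative, and I'm ignoring any surplus a marginal buyer would capture): suppose the owner values destroying the object at $u_{\text{destroy}} = 10$, values keeping it at $u_{\text{keep}} = 5$, and could sell it for $p = 7$. The destruction-is-efficient condition is just

$$u_{\text{destroy}} > \max(u_{\text{keep}},\, p),$$

and when it holds, destruction allocates the object to its highest-valued use: no mutually beneficial trade is forgone, hence no deadweight loss.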
An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average.
The studies which are usually cited in support of this effect show nowhere near this level of hedonic treadmill. I suggest you read them (you can probably ask Claude for the recs here).
The Marubo, a 2,000-member tribe in the Brazilian Amazon, recently made headlines: they received Starlink in September 2023. Within 9 months, their youth stopped learning traditional body paint and jewelry making, young men began sharing pornographic videos in group chats (in a culture that frowns on public kissing), leaders observed "more aggressive sexual behavior," and children became addicted to short-form video content. One leader reported: "Everyone is so connected that sometimes they don't even talk to their own family."
Note that the tribe in que...
Of course this is now used as an excuse to revert any recent attempts to improve the article.
From reading the relevant talk page, it is pretty clear that those arguing against the changes on these grounds aren’t exactly doing so in good faith. If they did not have this bit of ammunition to use, they would use something else, but then with fewer detractors (since clearly nobody else followed or cared about that page).
any shocking or surprising result in your own experiment is 80% likely to be a bug until proven otherwise. your first thought should always be to comb for bugs.
I will add: 80% likely to be a bug, or a result from random-matrix theory.
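A minimal sketch of the random-matrix point (the library choice, sizes, and seed are my own arbitrary assumptions): the singular-value spectrum of a pure-noise matrix is already highly structured, so an "interesting-looking" spectrum in your own experiment may be exactly what chance predicts.

```python
# Minimal sketch: the eigenvalue spectrum of the sample covariance of pure
# noise already has a tight, predictable shape (the Marchenko-Pastur bulk),
# so "surprising" spectral structure is often just random-matrix behavior.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 200
X = rng.normal(size=(n, d))              # pure noise, no signal at all

s = np.linalg.svd(X, compute_uv=False)   # singular values of the data matrix
evals = s**2 / n                         # eigenvalues of the sample covariance

# Marchenko-Pastur edges for aspect ratio d/n and unit variance
gamma = d / n
mp_low, mp_high = (1 - np.sqrt(gamma))**2, (1 + np.sqrt(gamma))**2

print(f"empirical eigenvalue range:  [{evals.min():.2f}, {evals.max():.2f}]")
print(f"Marchenko-Pastur prediction: [{mp_low:.2f}, {mp_high:.2f}]")
# The two ranges line up closely: the "structure" here is what noise looks
# like, not a discovery about the data.
```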
A post is coming! I will get future posts out earlier in the future, sorry.
Yeah, sorry, I think I had my very negative reaction mostly because of the title. Looking at the other comments and the upvotes (which are not mine) on the article, my view here is clearly non-representative of the rest of LessWrong.
I think the disconnect might be what you want this article to be (a guide to baldness) vs what it is (the author's journey and thoughts around baldness mixed with gallows humor regarding his potential fate) [...] That said, setting audience expectations is also an important part of writing, and I think "A Rationalist's Guide to Male Pattern Baldness" is probably setting the wrong expectation.
Yeah, good point. I agree, I think this is the problem.
...The Norwood/forest comparison gets used consistently throughout (including the title) and is the only metap
Hm, I read "A rationalist's guide to male pattern baldness" and thought the post would be in the same genera as other lesswrong "guide" articles, and much less a personal blog-post. Most centrally the guide to rationalist interior decorating. I read the article with this expectation, and felt I mostly got duped. If it was for example just called "Escaping the Jungles of Norwood", I just wounldn't have read it, but if I did I would not be as upset.
I still maintain my criticism holds even for a personal blog post, but I would not have strong-downvoted it, just...
There is too much fluff in this article, and it reads like AI slop. Maybe it is, but in either case this is because the fluff and the content do not support each other. Taking an example quote:
...Before you can escape a jungle, you have to know its terrain. The Norwood-Hamilton scale is the cartographer’s map for male pattern baldness (androgenetic alopecia, or AGA), dividing male scalps into a taxonomy of misery, from Norwood I (thick, untamed rainforest) to Norwood VII (Madagascar, post-deforestation). It’s less a continuum and more a sequence of discrete w
Of course it is, I did not think otherwise, but my point stands.
And to some extent you know that your heuristics for identifying cranks are not going to flag only people who are forever lost to crankdom, because you haven't just abandoned the conversation.
I liked your old posts and your old research and your old ideas. I still have some hope you can reflect on the points you’ve made here, and your arguments against my probes, and feel a twinge of doubt, or motivation, pull on that a little, and end up with a worldview that makes predictions, lets you have & make genuine arguments, and gives you novel ideas.
If you were always lazy, I wouldn’t be having this conversation, but once you were not.
"Why are adversarial eamples so easy to find?" is a problem that is easily solvable without my model. You can't solve it because you suck at AI, so instead you find some AI experts who are nearly as incompetent as you and follow along their discourse because they are working at easier problems that you have a chance of solving.
What is the solution then?
AFAIK epidemiologists usually measure particular diseases and focus their models on those, whereas LDSL would more be across all species of germs.
I would honestly be interested in any concrete model you build based on this. You don't necessarily have to compare it against some other field's existing model, though it does help for credibility's sake. But I would like to at least be able to compare the model you make against data.
I'm also not sure this is true about epidemiologists, and if it is, I'd guess it's true to the extent that they have like 4 diffe...
Before, you said
"How do we interpret the inner-workings of neural networks." is not a puzzle unless you get more concrete an application of it. For instance an input/output pair which you find surprising and want an interpretation for, or at least some general reason you want to interpret it.
Which seems to imply you (at least 3 hours ago) believed your theory could handle relatively well-formulated and narrow "input/output pair" problems. Yet now you say
...You just keep on treating it like the narrow domain-specific models count as competition when they
Let's go through your sequence, shall we? And enumerate the so-called "concrete examples" you list:
[LDSL#0] Some epistemological conundrums
Here you ask a lot of questions, approximately each of the form "why do 'people' think <thing-that-some-people-think-but-certainly-not-all>?". To list a few,
Why are people so insistent about outliers?
Seems to have a good answer. Sometimes they're informative!
Why isn’t factor analysis considered the main research tool?
Seems also to have a good answer: it is easy to fool yourself if you do it improperly.
...How can pr
It sounds like, as I predicted, your theory doesn't apply to the problems I presented, so how about you provide an example?
I do think I'm "good at" AI; I think many who are "good at" AI are also pretty confused here.
Ok, then why do AI systems have so many adversarial examples? I have no formal model of this, though it plausibly makes some intuitive sense.
I don't think you're using the actual arguments I presented in the LDSL series to evaluate my position.
I remember reading LDSL and not buying the arguments! At the time, I deeply respected you and your thinking, and thought "oh well I'm not buying these arguments, but surely if they're as useful as they say, tailcalled will apply them to various circumstances and that will be pie on my face, and in that circumstance I should try to figure out why I was mistaken". But then you didn't, and you started vague-posting constantly, and now we're here and you're g...
It would, but you didn't ask for such a thing. Are you asking for such a thing now? If so, here is one in AI, which is on everyone's minds: How do we interpret the inner-workings of neural networks.
I expect though, that you will say that your theory isn't applicable here for whatever reason. Therefore it would be helpful if you gave me an example of what sort of puzzle your theory is applicable to.
I don't care who calls themselves what: complexity science calls itself anti-reductionist, and I don't dismiss them. Therefore I can't dismiss people just because they call themselves anti-reductionist; I must use their actual arguments to evaluate their positions.
I will also say that appealing to the community's intrinsic bias, and claiming I've made arguments I haven't or hold positions I don't, is not doing much to make me think you less of a crank.
I never said that, I am asking you for solutions to any puzzle of your choice! You're just not giving me any!
Edit: I really honestly don't know where you got that impression, and it kinda upsets me you seemingly just pulled that straight out of thin air.
I am not dismissing you because of your anti-reductionism! Where did I say that? Indeed, I have been known to praise some "anti-reductionist" theories--fields even!
I'm dismissing you because you can't give me examples of where your theory has been concretely useful!
idk what to say, this is just very transparently an excuse for you to be lazy here, and clearly crank-talk/cope.
Ok, I will first note that this is different from what you said previously. Previously, you said “probing for whether rationalists will get the problem if framed in different ways than the original longform”, but now you say “I'm trying to probe the obviousness of the claims.” It’s good to note when such switches occur.
Second, you should stop making lazy posts with no arguments, regardless of the reasons. You can get just as much, and probably much more, information by making good posts; there is no tradeoff here. In fact, if you try to explain why y...
I do not think I could put my response here better than Said did 7 years ago on a completely unrelated post, so I will just link that.
Part of the probing is to see which of the claims I make will seem obviously true and which of them will just seem senseless.
Then everything you say will seem either trivial or absurd because you don’t give arguments! Please post arguments for your claims!
This is the laziness I’m talking about! Do you really not understand why it would be to your theory-of-everything’s credit to have some, any, any at all, you know, actual use?
How suspicious is it that when I ask for explicit concrete examples, you explain that your theory is not really about particular examples, despite the fact that, if your vague-posting is indeed applying your theory of everything to particular examples, we can derive the existence of circumstances you believe your theory models well?
And that excuse being that it's good at deciding what to make...
Then give those examples!
Edit: and also back up those examples by actually making the particular model, and demonstrate why such models are so useful through means decorrelated with your original argument.
This may well be true (though I think not), but what is your argument about not even linking to your original posts? Or how often you don’t explain yourself even in completely unrelated subjects? My contention is that you are not lazily trying on a variety of different reframings of your original arguments or conclusions to see what sticks, and are instead just lazy.
So it sounds like your general theory has no alpha over narrow theories. What, then, makes it any good? Is it just that it's broad enough to badly model many systems? Then it sounds useful in every case where we can’t make any formal predictions yet, and you should give those examples!
This sounds like a bad excuse not to do the work.
It sounds like there’s an implied “and therefore we have no influence over such discussions”. If so, then what are we arguing for? What does it matter if Julian Bradshaw and others think animals being left out of the CEV makes it a bad alignment target?
In either case, I don’t think we will only hear about these discussions after they’re finalized. The AI labs are currently aligning and deploying (internally and externally) their AI models through what is likely to be the same process they use for ‘the big one’. Those discussions are these discussions, and we are hearing about them!
Yeah, I think those were some of your last good posts / first bad posts.
rationalists will get the problem if framed in different ways than the original longform.
Do you honestly think that rationalists will suddenly get your point if you say
I don't think RL or other AI-centered agency constructions will ever become very agentic.
with no explanation or argument at all, or even a link to your sparse lognormals sequence?
Or what about
...Ayn Rand's book "The Fountainhead" is an accidental deconstruction of patriarchy that shows how it is fractally terrible
The answer is not obviously “Less Wrong”, which is alarming.
Why alarming? I don't think LessWrong is the hub for any one sort of feedback, but on balance it seems like a good source of feedback. Certainly Said & his approach isn't the best possible response in every circumstance; I'm sure even he would agree with that, even if he thinks there should be more of it.
Yeah reflecting a bit, I think my true objection is your parenthetical, because I’m convinced by your first paragraph’s logic.
I think you have a more general point, but I think it only really applies if the person making the post can back up their claim with good reasoning at some point, or will actually end up creating the room for such a discussion. Tailcalled has, in recent years, been vagueposting more and more, and I don't think they or their post will serve as a good steelman or place to discuss real arguments against the prevailing consensus.
Eg see their response to Noosphere's thoughtful comment.
I think the point of the "weigh the preference of everyone in the world equally" position here is not in spite of, but because of the existence of powerful actors who will try to skew the decision such that they or their group have maximal power. We (you and I) would rather this not happen, and I at least would like to team up with others who would rather this not happen, and we and those others have the greatest chance of slapping down anyone trying to take over the world by advocating for the obvious. That is, by advocating that we should all be equal.
If the v...
I think you're comparing the goals of past lesswrong to the goals of present lesswrong. I don't think present lesswrong really has the goal of refining the art of rationality anymore. Or at least, it has lost interest in developing the one meta-framework to rule them all, and gained much more interest in applying rationality & scholarship to interesting & niche domains, and seeing what generalizable heuristics it can learn from those. Most commonly AI, but look no further than the curated posts to find other examples. To highlight a few:
...there appears to be no way for me to neutrally discuss these doubts with a psychiatrist
Why not discuss openly with one psychiatrist (or therapist!), then choose another to exaggerate to if you decide to experiment?
Also, note that I don’t think psychiatrists are particularly averse to experimenting with drugs with few long-term consequences or risks.
i think that the non-strawman versions of the sjw takes listed are all actually genuinely really interesting and merit at least some consideration. ive been reading up on local indigenous history recently and it's the most fascinating topic i've rabbit holed in on in ages.
I am interested in what/who you recommend reading here.
It underestimates the effect of posttraining. I think the simulator lens is very productive when thinking about base models but it really struggles at describing what posttraining does to the base model. I talked to Janus about this a bunch back in the day and it’s tempting to regard it as “just” a modulation of that base model that upweights some circuits and downweights others. That would be convenient because then simulator theory just continues to apply, modulo some affine transformation.
To be very clear here, this seems straightforwardly false. The en...
There will be! I haven’t decided on the readings yet though, sorry.
My guess is the people asking such questions really mean "why don't I win more, despite being a rationalist", and their criticisms make much more sense as facts about themselves, or mistakes they've made, which they blame for holding them back from winning.
He has mentioned the phrase a bunch. I haven’t looked through enough of these links to form an opinion though.
Thank you for this search. Looking at the results, the top 3 are by commenters.
Then one about not thinking a short book could be this good.
I don't think this is Cowen actually saying he made a wrong prediction, just using it to express how the book is unexpectedly good at covering a topic that might normally take longer, though I'm happy to hear why I'm wrong here.
Another commenter:
Another commenter:
Ending here for now; there don't seem to be any real instances yet of Tyler Cowen saying he was wrong about something he thought was true.