All of Kabir Kumar's Comments + Replies

I like the red and yellow block ones - clean and easy to read. 

I've come around to it. It's got a bit of a distinctive BRAT feel to it and can be modified into memes easily. 

I'm looking forward to the first people to use it for a hit song. 

An actually better analogy would be a company in a country whose GDP is growing faster than that of the country.

The main thing I don't understand is the full thought process that leads to not seeing this as stealing opportunity from artists by using their work non-consensually, without credit or compensation. 
I'm trying to understand if folk who don't see this as stealing don't think that stealing opportunity is a significant thing, or don't get how this is stealing opportunity, or something else that I'm not seeing. 

2CstineSublime
  And what arguments have they raised? Whether you agree or feel they hold water or not is not what I'm asking - I'm wondering what arguments have you heard from the "it is not theft" camp? I'm wondering if they are different from the ones I've heard

Makes sense. Do you think it's stealing to train on someone's data/work without their permission? This isn't a 'gotcha', btw - if you think it's not, I want to know and understand.

1CstineSublime
What prior reading have you done on this question? I did a DDG search for "AI duplicating artists style controversy" and found dozens of journalism pieces which, for the most part, seem to be arguing broadly that "it is theft". What is your understanding of the discourse on this at the moment? What have you read? What has been persuasive? What don't you understand?
2Archimedes
I don't think there's a simple answer to that. My instinct is that most publicly accessible material (not behind a paywall) is largely "fair use", but it gets messier for things like books not yet in the public domain. LLM pre-training is both transformative and extractive. There is no sensible licensing infrastructure for this yet, AFAIK, so many companies are grabbing whatever they can and dealing with legalities later. I think, at minimum, they should pay some upfront fee to train on a copyrighted book, just like humans do when they buy rather than pirate or borrow from libraries.

Do you think you can steal someone's parking spot? 

If yes, what exactly do you think you're stealing? 

2JBlack
Literally steal?  No, except in cases that you probably don't mean such as where it's part of a building and someone physically removes that part of the building. "Steal" in the colloquial but not in the legal sense, sure. Legally it's usually more like tortious interference, e.g. you have a contract that provides the service of using that space to park your car, and someone interferes with that by parking their own car there and deprives you of its use in an economically damaging way (such as having to pay for parking elsewhere).  Sometimes it's trespass, such as when you actually own the land and can legally forbid others from entering. It is also relatively common for it to be both: tortious interference with the contracted user of the parking space, and trespass against the lot owner who sets conditions for entry that are being violated.
9Archimedes
You're "stealing" their opportunity to use that space. In legal terms, assuming they had a right to the spot, you'd be committing an unauthorized use of their property, causing deprivation of benefit or interference with use.

That's a pretty standard thing with bigoted bloggers/speakers/intellectuals. 

Have a popular platform where 95% of what you say is ok/interesting/entertaining, and 5% to 10% is poison (bigotry).

Then a lead-in to something that's 90% ok/interesting/entertaining and 10% to 15% poison (bigotry). 

Etc. 

Atrioc explains it pretty well here, with Sam Hyde as an example:

Maybe there's a filtering effect for public intellectuals. 

If you only ever talk about things you really know a lot about, unless that thing is very interesting or you yourself are something that gets a lot of attention (e.g. a polyamorous cam girl who's very good at statistics, a Muslim Socialist running for mayor in the world's richest city, etc), you probably won't become a 'public intellectual'. 

And if you venture out of that and always admit it when you get something wrong, explicitly, or you don't have an area of speciality and admit to get... (read more)

1Decaeneus
One can say that being intellectually honest, which often comes packaged with being transparent about the messiness and nuance of things, is anti-memetic.
1sam
Seems to rhyme with the criticism of pundits in Superforecasting i.e. (iirc), most high profile pundits make general, sweeping, dramatic sounding statements that make good TV but are difficult to falsify after the fact

Ok, I was going to say that's a good one. 

But this line ruins it for me:

So I think I'm wrong there but I could actually turn out to be right

Thank you for searching and finding it though!! Do you think other public intellectuals might have more/less examples? 

Because it's not true - trying does exist. 

In the comments of Eliezer's post, I saw "Stop trying to hit me and hit me!" by Morpheus, which I like more.

Btw, for SlateStarCodex, I found it in the first search, pretty easily.

2gjm
Sure, but plausibly that's Scott being unusually good at admitting error, rather than Tyler being unusually bad.

This seems to be a really explicit example of him saying that he was wrong about something, thank you! 

Didn't think this would exist/be found, but glad I was wrong.

4gjm
It's still pretty interesting if it turns out that the only clear example to be found of T.C. admitting to error is in a context where everyone involved is describing errors they've made: he'll admit to concrete mistakes, but apparently only when admitting mistakes makes him look good rather than bad. (Though I kinda agree with one thing Joseph Miller says, or more precisely implies: perhaps it's just really rare for people to say publicly that they were badly wrong about anything of substance, in which case it could be that T.C. has seldom done that but that this shouldn't much change our opinion of him.)

They also did a lot of calling to US representatives, as did people they reached out to. 

ControlAI did something similar and also partnered with SiliConversations, a youtuber, to get the word out to more people, to get them to call their representatives. 

3Buck
Yep, that seems great!

I think PauseAI is also extremely underappreciated. 

Buck*102

Plausibly, but their type of pressure was not at all what I think ended up being most helpful here!

I suggest something on Value Alignment itself. The actual problem of trying to make a model have the values you want, be certain of it, be certain it will scale, and other parts of the Hard Part of Alignment.

Using the bsky Mutuals feed is such a positive experience, it makes me very happy ♥️♥️♥️

Please don't train an AI on anything I write without my explicit permission, it would make me very sad.

I'm annoyed by the phrase 'do or do not, there is no try', because I think it's wrong and there very much is a thing called trying and it's important. 

However, it's a phrase that's so cool and has so much aura that it's hard to disagree with it without sounding at least a little bit like an excuse-making loser who doesn't do things and tries to justify it. 

Perhaps in part, because I feel/fear that I may be that?

1CstineSublime
Why is it wrong? Or perhaps more specifically - what are some examples of conditions or environments where you think it is counterproductive?
4MichaelDickens
I think it's a good quote. I will refer to this post from The Sequences: Trying to Try

Btw, I really don't have my mind set on this. If someone finds Tyler Cowen explicitly saying he was wrong about something, please link it to me - you don't have to give an explanation to justify it, or prepare for some confirmation-bias-y 'here's why I was actually right and this isn't it' thing (though any opinions/thoughts are very welcome). Please feel free to just give a link or mention some post/moment. 

It's only one datapoint, but I did a similar search for SlateStarCodex and almost immediately found him explicitly saying he was wrong. 

It's the title of a post, even: https://slatestarcodex.com/2018/11/06/preschool-i-was-wrong/ 

In the post he also says:

I’ve written before about how when you make an update of that scale, it’s important to publicly admit error before going on to justify yourself or say why you should be excused as basically right in principle or whatever, so let me say it: I was wrong about Head Start.

That having been said, on to the

... (read more)

Is the 200k context itself available to use anywhere? How different is it from the Stampy.ai dataset? No worries if you don't know, due to not knowing what exactly Stampy's dataset is.

I get questions a lot from regular ML researchers on what exactly alignment is, and I wish I had an actually good thing to send them. Currently I either give a definition myself or send them to alignmentforum. 

2Mikhail Samin
Nope, I’m somewhat concerned about unethical uses (eg talking to a lot of people without disclosing it’s ai), so won’t publicly share the context. If the chatbot answers questions well enough, we could in principle embed it into whatever you want if that seems useful. Currently have a couple of requests like that. DM me somewhere? Stampy uses RAG & is worse.

Maybe the option of not specifying the writing style at all, for impatient people like me? 

Unless you see this as more something to be used by advocacy/comms groups to make materials for explaining things to different groups, which makes sense. 

If the general public is really the target, then adding some kind of voice mode seems like it would reduce latency a lot

2Mikhail Samin
This specific page is not really optimized for any use by anyone whatsoever; there are maybe five bugs each solvable with one query to claude, and all not a priority; the cool thing i want people to look at is the chatbot (when you give it some plausible context)! (Also, non-personalized intros to why you should care about ai safety are still better done by people.)

I really wouldn't want to give a random member of the US general public a thing that advocates for AI risk while having a gender drop-down like that.[1]

The kinds of interfaces it would have if we get to scale it[2] would be very dependent on where specific people are coming from. I.e., demographic info can be pre-filled and not necessarily displayed if it's from ads; or maybe we ask one person we're talking to to share it with two other people, and generate unique links with pre-filled info that was provided by the first person; etc.

Voice mode would have a huge latency due to the 200k token context and thinking prior to responding.

1. ^ Non-binary people are people, but the dropdown creates unnecessary negative halo effect for a significant portion of the general public. Also, dropdowns = unnecessary clicks = bad.
2. ^ which I really want to! someone please give us the budget and volunteers! at the moment, we have only me working full-time (for free), $10k from SFF, and ~$15k from EAs who considered this to be the most effective nonprofit in this field. reach out if you want to donate your time or money. (donations are tax-deductible in the us.)

Thank you for this search. Looking at the results, the top 3 are by commenters. 

Then one about not thinking a short book could be this good.

I don't think this is Cowen actually saying he made a wrong prediction, just using it to express how the book is unexpectedly good at talking about a topic that might normally take longer, though happy to hear why I'm wrong here. 

Another commenter:

Another commenter:

Ending here for now; there don't seem to be any real instances of Tyler Cowen saying he was wrong about something he thought was true yet. 

9Kabir Kumar
Btw, I really don't have my mind set on this. If someone finds Tyler Cowen explicitly saying he was wrong about something, please link it to me - you don't have to give an explanation to justify it, or prepare for some confirmation-bias-y 'here's why I was actually right and this isn't it' thing (though any opinions/thoughts are very welcome). Please feel free to just give a link or mention some post/moment. 

Good job trying and putting this out there. Hope you iterate on it a lot and make it better.

Personally, I utterly despise this current writing style. Maybe you can look at the Void bot on Bluesky, which is based on Gemini pro - it's one of the rare bots I've seen whose writing is actually ok. 

2Mikhail Samin
Thanks, but, uhm, try to not specify “your mom” as the background and “what the actual fuck is ai alignment” as your question if you want it to have a writing style that’s not full of “we’re toast”

Has Tyler Cowen ever explicitly admitted to being wrong about anything? 

Not 'revised estimates' or 'updated predictions' but 'I was wrong'. 

Every time I see him talk about learning something new, he always seems to be talking about how this vindicates what he said/thought before. 

Gemini 2.5 Pro didn't seem to find anything when I did a max reasoning budget search with URL search on in AI Studio. 

EDIT: An example was found by Morpheus, of Tyler Cowen explicitly saying he was wrong - see the comment and the linked PDF below.

Deep Research found this PDF. Search for "I was wrong" in the PDF.

In the post 'Can economics change your mind?' he has a list of examples where he has changed his mind due to evidence:

1. Before 1982-1984, and the Swiss experience, I thought fixed money growth rules were a good idea.  One problem (not the only problem) is that the implied interest rate volatility is too high, or exchange rate volatility in the Swiss case.

2. Before witnessing China vs. Eastern Europe, I thought more rapid privatizations were almost always better.  The correct answer depends on circumstance, and we are due to learn yet more about

... (read more)
5komlowneowna
(source: https://brinklindsey.substack.com/p/interview-with-tyler-cowen)
7Joseph Miller
Downvoted. This post feels kinda mean. Tyler Cowen has written a lot and done lots of podcasts - it doesn't seem like anyone has actually checked? What's the base rate for public intellectuals ever admitting they were wrong? Is it fair to single out Tyler Cowen?

this is evidence that tyler cowen has never been wrong about anything

2Garrett Baker
He has mentioned the phrase a bunch. I haven’t looked through enough of these links to form an opinion though.

one of the teams from the evals hackathon was accepted at an ICML workshop!
hosting this next: https://courageous-lift-30c.notion.site/Moonshot-Alignment-Program-20fa2fee3c6780a2b99cc5d8ca07c5b0 

Will be focused on the Core Problem of Alignment 
for this, I'm gonna be making a bunch of guides and tests for each track
if anyone would be interested in learning and/or working on a bunch of agent foundations, moral neuroscience (neuroscience studying how morals are encoded in the brain, how we make moral choices, etc) and preference optimization, please let me know! DM or email at kabir@ai-plans.com

Why up to 20? (Is that a typo?)

Not a typo. He's 50+, grew up in India, without calculators. Yes, he's yelling at her for not 100% knowing her 17 times table. 

How do we know if it's working then?

We won't, but we can get a general sense of whether it might be doing something at all using a bunch of proxies, like how robust and secure the system is to human attackers with much more time than the model has, and by trying to train the model to attack the defenses in a controlled setting.

Can the extent of this 'control' be precisely and unambiguously measured? 

7ryan_greenblatt
No

It's 2025, AIs can solve proofs, and my dad is yelling at my 10-year-old sister for not memorizing her times tables up to 20.


 

2Viliam
Until we get UBI, people will compete against each other, and times tables are a tiny part of that. So the question is whether you are sure enough that the Singularity will happen within the next 15 years that you don't see a reason to have a Plan B. Because the times tables are a part of the Plan B.

That said, yelling is unproductive. What about spaced repetition? Make cards containing all problems, put the answer on the other side, go through the cards, put the ones with an incorrect answer on a heap that you will afterwards reshuffle and try again. Do this every day. In a few weeks the problem should be solved.

Why up to 20? (Is that a typo?)
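A minimal sketch of the card-drilling loop described above, in Python, assuming a plain terminal prompt; the 2–20 range, function names, and daily-run comment are illustrative choices, not anything specified in the comment:

```python
# Sketch of the spaced-repetition drill described above (illustrative only).
import random

def make_cards(max_table: int = 20) -> list[tuple[int, int]]:
    """One card per multiplication problem, up to max_table x max_table."""
    return [(a, b) for a in range(2, max_table + 1) for b in range(2, max_table + 1)]

def drill(cards: list[tuple[int, int]]) -> None:
    """Go through the deck; wrong answers go on a heap that is reshuffled and retried."""
    deck = cards[:]
    random.shuffle(deck)
    while deck:
        missed = []
        for a, b in deck:
            answer = input(f"{a} x {b} = ")
            if answer.strip() != str(a * b):
                print(f"  -> {a * b}")
                missed.append((a, b))
        random.shuffle(missed)
        deck = missed  # repeat only the ones answered incorrectly

if __name__ == "__main__":
    drill(make_cards(20))  # run once a day, per the suggestion above
```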
4Karl Krueger
I don't expect the yelling helps with the memorizing. Also, even though a big company can grow potatoes much more efficiently, I still like having a backyard garden.

I'll make a feed on AI Plans for policies/regulations

Safetyist, align thyself

spent lots of hours on making the application good, getting testimonials and confirmation we could share them for the application, really getting down the communication of what we've done, why it's useful, etc. 

There was a doc where donors were supposed to ask questions. Never got a single one.

The marketing, website, etc were all saying 'hey, after doing this, you can rest easy, be in peace, etc, we'll send your application to 50+ donors, it's a bargain for your time, etc'

Critical piece of info that was very conveniently not communicated as loudly: ther... (read more)

Yes, I'll make my own version that I think is better. 

I think A Narrow Path has been presented with far too much self-satisfaction for what's essentially a long wishlist with some introductory parts. 

1Kabir Kumar
Yes, I'll make my own version that I think is better. 

I think the Safetywashing paper mixed in far too many opinions with actual data and generally mangled what could have been an actually good research idea. 

I'm going to be more blunt and honest when I think AI safety and gov folk are being dishonest and doing trash work.

3MalcolmMcLeod
Would you care to start now by giving an example?

lmao, I don't think this is a joke, right?

my experience with applying to the Non Linear fund was terrible and not worth the time at all

4Kabir Kumar
spent lots of hours on making the application good, getting testimonials and confirmation we could share them for the application, really getting down the communication of what we've done, why it's useful, etc.

There was a doc where donors were supposed to ask questions. Never got a single one.

The marketing, website, etc were all saying 'hey, after doing this, you can rest easy, be in peace, etc, we'll send your application to 50+ donors, it's a bargain for your time, etc'

Critical piece of info that was very conveniently not communicated as loudly: there's no guarantee of when you'll hear back - could be 6 weeks, 6 months, who knows!!

Didn't even get a confirmation email about our application being received. Had to email for that. Then in the email I saw this. April 3rd, btw. Then May 1st, almost a month later, it seems this gets sent out to everyone.

Personally, I would discourage anyone from spending any time on a Non Linear application - as far as I know, our application wasn't even sent to any donors. 

They completely and utterly disrespected my time and it seems, the time of many others. 

On the Moonshot Alignment Program:

several teams from the prev hackathon are continuing to work on alignment evals and doing good work (one presenting to a gov security body, another making a new eval on alignment faking)

if i can get several new teams to exist who are working on trying to get values actually into models, with rigour, that seems very valuable to me

also, got a sponsorship deal with a youtuber who makes technical deep learning videos, with 25K subscribers; he's said he'll be making a full video about the program. 

also, people are gonna be c... (read more)

Hi, have you worked in moral neuroscience, or do you know someone who has?

If so, I'd really really like to talk to you!

https://calendly.com/kabir03999/talk-with-kabir

I'm organizing a research program for the hard part of alignment in August. 

I've already talked to lots of Agent Foundations researchers, learnt a lot about how that research is done, what the bottlenecks are, where new talent can be most useful. 

I'd really really like to do this for the neuroscience track as well please. 

Eliezer's look has improved impressively recently. 

I think updating his Amazon picture to one of the more recent pictures would be quite quick and would increase the chances of people buying the book.

Seconded. The new hat and the pointier, greyer beard have taken him from "internet atheist" to "world-weary Jewish intellectual." We need to be sagemaxxing. (Similarly, Nate-as-seen-on-Fox-News is way better than Nate-as-seen-at-Google-in-2017.)

Can confirm, was making the server worse - banned him myself, for spam. 

What do you think are better agendas? 
> ASI safety in the setting of AIXI is overrated as a way to reduce existential risk 
Could you please elaborate on this?

I personally don't buy into a lot of hindu rituals, astrology etc. Personally I treat their claims as either metaphorical or testable. I think a lot of ancient "hindu" philosophers would be in the same camp as me

I do the same and so do many Hindus. 

The referencing of the holy texts to say why there aren't holy texts is quite funny, lol. I assume that was intentional.

1Karl Krueger
If the text says that it is not holy, then who are we to disagree?
  • rationality and EA lack sacred ritualized acts (though there are some things that are near this, they fail to set apart some actions as sacred, so they are instead just rituals) (an exception might be the winter Secular Solstice service like we have in Berkeley each year, but I'd argue the lack of a sustained creation of shared secular rituals means rationalists don't keep in touch with a sacred framing as one would in a religion)
  • rationality and EA isn't high commitment in the right way (might feel strange if you gave up eating meat to be EA or believing f
... (read more)
2Gordon Seidoh Worley
As an outsider, Hinduism's various divisions seem to have a very strong sense of the sacred that seems lacking in EA to me.
3lesswronguser123
Saying one practices Hinduism is more like saying EA is part of the western enlightenment tradition. It's an entirely different cultural frame, which has many different philosophical worldviews, from atheistic to theistic, within it. Hindus even claim buddha as one of their own. The word hindu itself comes from a river located in the North West of India, so they clustered a bunch of philosophical positions together which were reminiscent of that place. Besides, labelling one thing as religion and doing away with it is a lazy thing to do; there are various practices within it which may or may not be good or accurate, which can be tested for. I personally don't buy into a lot of hindu rituals, astrology etc. Personally I treat their claims as either metaphorical or testable. I think a lot of ancient "hindu" philosophers would be in the same camp as me; I just think a lot of their disciples didn't take their epistemology to its logical conclusion but got misguided by other cultural memes like absolutism, mysticism etc. 
3Said Achmiz
EA is; “rationality” clearly isn’t. It’s sort of true that there are ritual observances and holy texts… but nah, not really. “Rationality” is not some particular practice or some defined ritual; it’s just doing whatever wins. Thus speak the holy texts.
6Gordon Seidoh Worley
No, or so say I. I prefer not to adjudicate this on some formal basis. There are several attempts by academics to define religion, but I think it's better to ask "does rationality or EA look sufficiently like other things that are definitely religions that we should call them religions". I say "no" on the basis of a few factors:
  • rationality and EA lack sacred ritualized acts (though there are some things that are near this, they fail to set apart some actions as sacred, so they are instead just rituals) (an exception might be the winter Secular Solstice service like we have in Berkeley each year, but I'd argue the lack of a sustained creation of shared secular rituals means rationalists don't keep in touch with a sacred framing as one would in a religion)
  • rationality and EA isn't high commitment in the right way (might feel strange if you gave up eating meat to be EA or believing false things to be rationalist, but it's missing commitment at the level of "show up at the same place at the same time every week to do the same thing with the same people", because even if you regularly attend a meetup, no one much thinks you are less committed to EA or rationality if you skip a few meetings)
  • rationalists and EAs lack strong consensus on what is the best life advice for everyone
Rationality and EA are more like ideologies, which share some traits with religions, but not all of them. Only occasionally have ideologies become religions, as arguably Communism briefly did in 1910s Russia, and it wasn't stable enough to persist in its religious form.