Tenoke

https://svilentodorov.xyz/

Comments

Tenoke

When considering that, my thinking was that I'd expect the last day to be slightly after, while the announcement can be slightly before, since the announcement doesn't need to fall exactly on the last day and can, and often would, come a little earlier - e.g. on the first day of his last week.

Tenoke

The 21st, when Altman was reinstated, is a logical date for the resignation, and it's within a week of 6 months ago now, which is why a notice period/an agreement to wait ~half a year/something similar was the first thing I thought of, since the ultimate reason he is quitting is obviously rooted in what happened around then.

>Is there a particular reason to think that he would have had an exactly 6-month notice

You are right, there isn't, but 1, 3, or 6 months is where I would have put the highest probability a priori.

>Sora & GPT-4o were out.

Sora isn't fully out, or at least not in the way 4o is out, and Ilya isn't listed as a contributor on it in any form (compared to being an 'additional contributor' for GPT-4 and 'additional leadership' for GPT-4o), so in general I doubt it had much to do with the timing.

GPT-4o, of course, makes a lot of sense timing-wise (it's literally the next day!), and he is listed on it (though not as one of the many contributors or leads). But if he wasn't in the office during that time (or is that just a rumor?), it's not clear to me whether he was actually participating in getting it out as his final project (which, yes, is very plausible) or whether he was just asked not to announce his departure until after the release, given that the two happen to be so close in time.

Tenoke

>Reasons are unclear

This is happening exactly 6 months after the November fiasco (the vote to remove Altman was on Nov 17th), which is likely what his notice period was, especially if he hasn't been in the office since then.

Are the reasons really that unclear? The specifics of why he wanted Altman out might be, but he is ultimately clearly leaving because he didn't think Altman should be in charge, while Altman thinks otherwise.

Tenoke

I own only ~5 physical books now (I prefer digital), and 2 of them are copies of Thinking, Fast and Slow. Despite not being on the site, I've always thought of him as something of a founding grandfather of LessWrong.

Tenoke

He comes across as pretty unsympathetic and stubborn.

Did any of your views of him change?

Tenoke

I'm sympathetic to some of your arguments, but even if we accept that the current paradigm will lead us to an AI that is pretty similar to a human mind, even in that best case I'm not super optimistic that a scaled-up random almost-human is a great outcome. I simply disagree where you say this:

>For example, humans are not perfectly robust. I claim that for any human, no matter how moral, there exist adversarial sensory inputs that would cause them to act badly. Such inputs might involve extreme pain, starvation, exhaustion, etc. I don't think the mere existence of such inputs means that all humans are unaligned. 

Humans aren't that aligned at the extremes, and the extremes matter when we're talking about the smartest entity making every important decision about everything.

 

Also, your general arguments that the current paradigms aren't that bad are reasonable, but again, I think our situation is a lot closer to all-or-nothing: if we get pretty far with RLHF or whatever, scale up the model until it's extremely smart, and it thus eventually makes every decision of consequence, then unless the alignment is near perfect, the chance that the remaining problematic parts screw us over seems uncomfortably high to me.

Tenoke

I can't even get a good answer to "What's the GiveWell of AI Safety?" so that I can quickly donate to a reputable, widely agreed-upon option without much thinking - at best I get old lists pointing to a ton of random small orgs and give up. I'm not very optimistic that ordinary, less convinced people who want to help are having an easier time.

Tenoke

It seems quite different. The main argument in that article is that climate change wouldn't make the lives of readers' children much worse or shorter, and that's not the case for AI.
