All of Lee.aao's Comments + Replies

Lee.aao20

I'm surprised to see no discussion here or on Substack.

This is a well-structured article with accurate citations, clearly explained reasoning, and a peer review... that updates the best AGI timeline model.

I'm really confused. 

I haven't deeply checked the logic to say if the update is reasonable (that's exactly the kind of conversation I was expecting in the comments). But I agree that Davidson's model was previously the best estimate we had, and it's cool to see that this updated version explains why Dario/Sama are so confident.

Overall, this is excellent work, and I'm genuinely puzzled as to why it has received 10x fewer upvotes than the recent fictional 2y takeover scenario.

1johncrox
Appreciate it. My sense is that the LW feed doesn't prioritize recent posts if they have low karma, so it's hard to get visibility on posts that aren't widely shared elsewhere and upvoted as a result. If you think it's a good post, please send it around!
2testingthewaters
It's hard to empathise with dry numbers, whereas a lively scenario creates an emotional response so more people engage. But I agree that this seems to be very well done statistical work.
Lee.aao30

I can confirm that this is pretty much the best introduction to take you from 0 to about 80% in using AI. 
It's intended for general users; don't expect technical information on how to use APIs or build apps.

Lee.aao10

TLDR my reaction is I don’t really know how good these models are right now.

 

I felt exactly the same after the Claude 3.7 post.

But actually.. hasn't LiveBench solved the evals crisis?

It specifically targets the “subjective” and “cheating/hacking” problems. 
It also covers a pretty broad set of capabilities.

Lee.aao2-4

The number of different benchmarks and metrics we are using to understand each new model is crazy. I'm so confused. The exec summary helps, but...
I don't think the relative difference between models is big enough to justify switching from the one you're currently used to.

Lee.aao10

Does this mean that Zvi doesn't read the comments on LW?
He seems to be much more active on Substack.

Lee.aao32

So, the most important things I've learned for myself are:

1. Sam was fired because of his sneaky attempts to get rid of some board members.
2. Sam didn't answer the question of why so many high-ranking people have left the company recently.
3. Sam overlooked the fact that, for some people, the safety focus was a major factor in deciding to join early on.

There seems to be enough evidence that he doesn't care about safety. 
And he actively uses dark methods to accumulate power.

Lee.aao30

We’re not even preparing reasonably for the mundane things that current AIs can do, in either the sense of preparing for risks, or in the sense of taking advantage of its opportunities. And almost no one is giving much serious thought to what the world full of AIs will actually look like and what version of it would be good for humans, despite us knowing such a world is likely headed our way.

Is there any good post on what to do? Preferably aimed at a casual person who just uses ChatGPT 1-2 times a month.

2Nathan Helm-Burger
I think this addresses it pretty well: https://www.youtube.com/watch?v=1j--6JYRLVk
Lee.aao30

The investments in data centers are going big. Microsoft will spend $80 billion in fiscal 2025, versus $64.5 billion on capex in the last year. Amazon is spending $65 billion, Google $49 billion and Meta $31 billion.

About 5 years ago, when Elon promised a $1B investment in OpenAI, it seemed like an unusual leap of faith. And now just 4 top corporations are casually committing over $200B to AI infrastructure. The pace is already crazy.
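A quick sanity check on the "over $200B" claim, using only the figures quoted above (the actual reported capex numbers may differ):

```python
# FY2025 AI-infrastructure capex figures as stated in the comment above
# (billions of USD; illustrative, not audited figures).
capex_billions = {
    "Microsoft": 80,
    "Amazon": 65,
    "Google": 49,
    "Meta": 31,
}

total = sum(capex_billions.values())
print(total)  # 225 -> comfortably over the $200B claimed
```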

This is potentially the most powerful technology humanity has ever created. And what's even more interesting is the absence ... (read more)

Lee.aao10

I think I'm confused here.
Is it fair to say that o3 does math and coding better than the average SWE?
If this is true, then I really don't understand why it hasn't made all the headlines. 
Any explanation?

Lee.aao113

Greg Brockman to Elon Musk, (cc: Sam Altman) - Nov 22, 2015 6:11 PM

In response to this follow-up, Elon first mentions that $100M is not enough, and that he is encouraging OpenAI to raise more money on their own, promising to increase the amount they can raise to $1B.

I found this on the OpenAI blog: https://openai.com/index/openai-elon-musk/
There are a couple of other messages there, with the vibe that the OpenAI team felt betrayed by Elon.

We're sad that it's come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then to

... (read more)
Lee.aao30

Rather, they didn't foresee the possibility that Microsoft might want to invest. And they didn't consider that capped-for-profit was a path to billions of dollars.

Lee.aao10
1. Note: It was a 100-point Elo improvement based on the ‘gpt2’ tests prior to release, but GPT-4o itself while still on top saw only a more modest increase.

Didn't he mean the early GPT-4 vs GPT-4 Turbo?



As I understand it, that's the same pre-trained model but with more post-training work.
GPT-4o is probably a newly trained model, so you can't compare it like that.
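For context on what a 100-point Elo gap actually means, here is the standard Elo expected-score formula (a general sketch; leaderboard methodologies vary in the details). A 100-point gap corresponds to roughly a 64% head-to-head preference rate:

```python
# Standard Elo expected-score formula: probability that the
# higher-rated model is preferred, given the rating gap.
def elo_win_prob(rating_gap: float) -> float:
    """Expected score of the higher-rated side for a given Elo gap."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

print(round(elo_win_prob(100), 2))  # ~0.64
```

So even a "large" 100-point jump means the new model is preferred only about two times out of three, which helps explain why such gaps can feel modest in everyday use.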

and these aren’t normies, they work on tech, high paying 6 figure salaries, very up to date with current events.

If you are a true normie not working in tech, it makes sense to be unaware of such details. You are missing out, but I get why.

If you are in tech, and you don’t even know GPT-4 versus GPT-3.5? Oh no.


Is it just me, or do you also feel intellectually lonely lately? 

I think my relatives and most of my friends think I'm crazy for thinking and talking so much about AI. And they listen to me more out of respect and politeness than out of any real interest in the topic.

1lemonhope
Use the opportunity to get answers to important questions before the answers change! I've been asking people "what tech are you looking forward to?" and such. Or you could go to one of the LessWrong or SlateStarCodex meetups.

Ege, do you think you'd update if you saw a demonstration of sophisticated sample-efficient in-context learning and far-off-distribution transfer?
 

Yes.

Suppose it could get decent at the first-person-shooter after like a subjective hour of messing around with it. If you saw that demo in 2025, how would that update your timelines?

I would probably update substantially towards agreeing with you.


DeepMind released an early-stage research model SIMA: https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/

It was tested on 6... (read more)

Since OpenAI is renting MSFT compute for both training and inference... 
it seems reasonable to think that inference >> training. Am I right? 

Lee.aao-10

Is there a cheap or free way to read SemiAnalysis posts? 
Can't afford the $500 subscription, sadly.