All of Akram Choudhary's Comments + Replies

Entertaining as this post was, I think very few of us have AI timelines long enough that IQ eugenics actually matters. Even long timelines are around 2040 these days, so what use is a 16-year-old high-IQ child going to be in securing humanity's future?

Wait till you find out that Qwen 2 is probably just Llama 3 with a few changes and some training on benchmarks to inflate performance a bit.

Andrew Burns
Possible. Possible. But I don't see how that is more likely than that Alibaba just made something better. Or they made something with lots of contamination. I think this should make us update toward not underestimating them. The Kling thing is a whole other issue. If it is confirmed text-to-video and not something else, then we are in big trouble, because the chip limits have failed.

What are your thoughts on skills that the government has too much control over? For example, if we get ASI in 2030, do you imagine that a doctor will be obsolete by 2032, or will the current regulatory environment still be relevant?

And how much of this is determined by "labs have now concentrated so much power that governments are obsolete"?

Daniel Kokotajlo
If we get ASI in 2030, all humans will be economically and militarily obsolete in 2030, and probably politically obsolete too (though if alignment was solved then the ASIs would be acting on behalf of the values and intentions of at least some humans). The current regulatory regime will be irrelevant. ASI is powerful.

Daniel, your interpretation is literally contradicted by Eliezer's exact words. Eliezer defines dignity as that which increases our chance of survival.

 

""Wait, dignity points?" you ask.  "What are those?  In what units are they measured, exactly?"

And to this I reply:  Obviously, the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line.  A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity."
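A minimal restatement of the units in that quote (my own sketch, not from the post): with survival probability $p$, dignity in bits is the log-odds
\[
d(p) = \log_2\frac{p}{1-p},
\]
and doubling a small $p$ adds roughly one bit, since $d(2p)-d(p) = \log_2\frac{2(1-p)}{1-2p} \approx 1$ for $p \ll 1$.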

I don't think our chances of survival will increase if LessWrong becomes substantially more risk-averse about publishing research and musings about AI. I think they will decrease.

So I'm one of the rate-limited users. I suspect it's because I made a bad early April Fools' joke about a WorldsEnd movement that would encourage people to maximise utility over the next 25 years instead of pursuing long-term goals for humanity like alignment. It made some people upset, and it hit me that this site doesn't really have the right culture for those kinds of jokes. I apologise and don't contest being rate limited.

Just this once, I promise.

See my other comment on how this is just a shitpost.

 

Also, humans don't base their decisions on raw expected-value calculations. Almost everyone would take 1 million over a 0.1% chance of 10 billion, even though the expected value of the latter is higher (Pascal's mugging).
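Writing out the arithmetic behind that example (my own sketch; the currency is left unspecified, as in the comment):
\[
0.001 \times 10^{10} = 10^{7} > 10^{6},
\]
i.e. the gamble is worth 10 million in expectation versus the certain 1 million, yet almost everyone takes the certain option.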

Dagon
Yeah, don't do that.  

Early April Fools' joke. I don't seriously believe this.

It was originally intended as an April Fools' joke, lol. This isn't a serious movement, but it does reflect a little of my hopelessness about AI alignment working.

AI + humans would just eventually give rise to AGI anyway, so I don't see the distinction people try to make here.

Yeah, I don't get it either. From what I can tell, the best Chinese labs aren't even as good as the second-tier American labs. The only way I see it happening is if the CCP actively tries to steal it.

Andrew Burns
Care to reassess?
  1. Stealing is a possibility.
  2. China has a lot of compute, it's just not piled up together into one giant training run. If AGI is achieved in the USA, it will be with e.g. 5% of the AI-datacenter compute available that year in the USA; the rest will have been spent on other smaller runs, other companies than the leading lab, etc. So even if China has e.g. 10x less overall compute, once it leaks how to do it, they can pile all their compute together and do a big training run. (These numbers are just illustrative, not serious estimates; a worked version follows this list.)
  3. The ban on GPUs to China was only partially effective.
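Making point 2 concrete with the same illustrative, explicitly non-serious numbers: if the leading US lab's run uses about $0.05\,C_{\text{US}}$ of the US AI-datacenter compute $C_{\text{US}}$, and China's total compute is $C_{\text{US}}/10 = 0.10\,C_{\text{US}}$, then pooling all Chinese compute into one run would be roughly twice the size of that leading US run, despite the 10x overall gap.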

Why do you have 15% for 2024 and only an additional 15% for 2025?

Do you really think there's a 15% chance of AGI this year?

Vladimir_Nesov
Any increase in scale is some chance of AGI at this point, since unlike weaker models, GPT-4 is not stupid in a clear way, it might be just below the threshold of scale to enable an LLM to get its act together. This gives some 2024 probability. More likely, a larger model "merely" makes job-level agency feasible for relatively routine human jobs, but that alone would suddenly make $50-$500 billion runs financially reasonable. Given the premise of job-level agency at <$5 billion scale, the larger runs likely suffice for AGI. The Gemini report says training took place in multiple datacenters, which suggests that this sort of scaling might already be feasible, except for the risk that it produces something insufficiently commercially useful to justify the cost (and waiting improves the prospects). So this might all happen as early as 2025 or 2026.

Yes, I really do. I'm afraid I can't talk about all of the reasons for this (I work at OpenAI) but mostly it should be figure-outable from publicly available information. My timelines were already fairly short (2029 median) when I joined OpenAI in early 2022, and things have gone mostly as I expected. I've learned a bunch of stuff some of which updated me upwards and some of which updated me downwards.

As for the 15% - 15% thing: I mean I don't feel confident that those are the right numbers; rather, those numbers express my current state of uncertainty. I ... (read more)

You would have to have ridiculous AI timelines for it to be closer than robotaxis. Closer than 2027?

Daniel Kokotajlo
My AGI timelines are currently 50% by 2027. After writing this post I realized (thanks to the comments) that robotaxis were progressing faster than I thought. I still think it's unclear which will happen first. (Remember my definition of robotaxis = 1M rides per day without human oversight.)

Shutting down GPU production was never in the Overton window anyway, so this makes little difference. Even if further scaling isn't needed, most people can't afford the $100M spent on GPT-4.

Because when you train something using gradient descent optimised against a loss function, it de facto has some kind of utility function. You can't accomplish all that much without a utility function.

the gears to ascension
a utility function is a particular long-term formulation of a preference function; in principle any preference function is convertible to a utility function, given zero uncertainty about the space of possible future trajectories. a preference is when a system tends to push the world towards some trajectories over others. not only can you not accomplish much without your behavior implying a utility function, it's impossible to not have an implicit utility function, as you can define a revealed preference utility function for any hunk of matter. doesn't mean that the system is evaluating things using a zero computational uncertainty model of the future like in the classic utility maximizer formulation though. I think evolutionary fitness is a better way to think about this - the preferences that preserve themselves are the ones that win.
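A toy sketch of the "any preference function is convertible to a utility function, given zero uncertainty" point; the outcome names and function names below are made up for illustration, not anything from the comment:

```python
def prefers(a: str, b: str) -> bool:
    """Toy preference relation: True if the system pushes the world toward
    trajectory `a` rather than trajectory `b` (illustrative ranking only)."""
    ranking = ["paperclips", "status quo", "flourishing"]  # worst -> best
    return ranking.index(a) > ranking.index(b)


def utility_from_preference(outcomes, prefers):
    """Given a total, acyclic preference over a finite set of fully specified
    outcomes (i.e. zero uncertainty about trajectories), build a utility
    function by scoring each outcome by how many alternatives it beats.
    Maximising this utility reproduces exactly the choices `prefers` makes."""
    return {o: sum(prefers(o, other) for other in outcomes if other != o)
            for o in outcomes}


outcomes = ["status quo", "flourishing", "paperclips"]
u = utility_from_preference(outcomes, prefers)
print(max(outcomes, key=u.get))  # -> "flourishing", the top of the toy ranking
```

As the comment notes, this only pins down behaviour from the outside; it doesn't mean the system internally evaluates anything like this.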

What on earth does the government have to do with a fire alarm? The fire alarm continues to buzz even if everyone in the room is deaf. It is just sending a signal, not promising that any particular action will be taken as a consequence.

Ben Pace
The use of the fire alarm analogy on LessWrong is specifically about its effect, not about it being an epiphenomenon to people's behavior.
ShardPhoenix
If he thinks AI interpretability work as it exists isn't helpful, he should say so, but he shouldn't speak as though it doesn't exist.

It's a 2x2 matrix if you are married, though.

How many mathematicians could win gold at the IMO?

I understand it's for under-18s, but I imagine there are a lot of mathematicians who wouldn't be able to do it either, right?

jhahn
Well, I know at least one assistant professor who couldn't win gold at the IMO. However, I would be extremely surprised if AI were able to supplant me before winning IMO gold.

Eliezer seems to think that the shift from proto-AGI to AGI to ASI will happen really fast, and many of us on this site agree with him.

Thus it's not sensible that there is a decade gap between "almost AI" and AI on Metaculus. If I recall, Turing (I think?) said something similar: that once we know the way to generate even some intelligence, things get very fast after that (heavily paraphrased).

So 2028 really is the beginning of the end if we do really see proto-AGI then.