Akram Choudhary

Comments

Entertaining as this post was, I think very few of us have AI timelines long enough that IQ eugenics actually matters. Long timelines are around 2040 these days, so what use is a 16-year-old high-IQ child going to be in securing humanity's future?

Wait till you find out that Qwen 2 is probably just Llama 3 with a few changes and some training on benchmarks to inflate performance a bit.

What are your thoughts on skills that the government has too much control over? For example, if we get ASI in 2030, do you imagine that doctors will be obsolete by 2032, or will the current regulatory environment still be relevant?

And how much of this is determined by "labs have now concentrated so much power that governments are obsolete"?

Daniel, your interpretation is literally contradicted by Eliezer's exact words. Eliezer defines dignity as that which increases our chance of survival.

 

""Wait, dignity points?" you ask.  "What are those?  In what units are they measured, exactly?"

And to this I reply:  Obviously, the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line.  A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity."
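To make the units concrete, here is a minimal sketch of the arithmetic behind that definition, assuming the standard base-2 log odds (the function names and example probabilities are my illustration, not Eliezer's):

```python
import math

def log_odds_bits(p: float) -> float:
    """Log odds of probability p, in bits: log2(p / (1 - p))."""
    return math.log2(p / (1 - p))

def dignity_gained(p_before: float, p_after: float) -> float:
    """Dignity points as Eliezer defines them: the change in
    humanity's log odds of survival, measured in bits."""
    return log_odds_bits(p_after) - log_odds_bits(p_before)

# Doubling a tiny survival probability ("from 0% to 0%") is worth
# almost exactly one information-theoretic bit of dignity:
print(dignity_gained(0.001, 0.002))  # ~1.001 bits
```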

So I'm one of the rate-limited users. I suspect it's because I made a bad early April Fools' joke about a WorldsEnd movement that would encourage people to maximise utility over the next 25 years instead of pursuing long-term goals for humanity like alignment. It made some people upset, and it hit me that this site doesn't really have the right culture for those kinds of jokes. I apologise and don't contest being rate limited.

Just this once, I promise.

See my other comment on how this is just a shitpost.

 

Also, humans don't base their decisions on raw expected value calculations. Almost everyone would take $1 million over a 0.1% chance of $10 billion, even though the expected value of the latter is higher: 0.1% of $10 billion is $10 million (Pascal's mugging).
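A minimal sketch of that comparison, using only the numbers from the example above:

```python
# Hypothetical payoffs from the comment above.
sure_thing = 1_000_000                 # $1M, guaranteed
gamble_ev = 0.001 * 10_000_000_000     # 0.1% chance of $10B -> EV = $10M

# The gamble has 10x the expected value, yet almost everyone
# takes the sure $1M.
print(gamble_ev, gamble_ev > sure_thing)  # 10000000.0 True
```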

Early April Fools' joke. I don't seriously believe this.

It was originally intended as an April Fools' joke, lol. This isn't a serious movement, but it does reflect a little of my hopelessness about AI alignment working.

AI + humans would just eventually give rise to AGI anyway, so I don't see the distinction people try to make here.
