I just want to register a prediction: I think something like Meta's Coconut will, in the long run, in fact perform much better than natural-language CoT. Perhaps not in this time-frame, though.
I suspect you're misinterpreting EY's comment.
Here was the context:
"I think controlling Earth's destiny is only modestly harder than understanding a sentence in English - in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point.
You sound to me like someone saying, sixty years ago: "Maybe some day a computer will be able to play a legal game of chess - but simultaneously defeating multiple grandmasters, that strains credibility, I'm afraid." But it only took a few decades to get from point A to point B. I doubt that going from "understanding English" to "controlling the Earth" will take that long."
It seems clear to me that EY was saying something more like "ASI will arrive soon after natural language understanding", rather than making a point about alignment specifically.
"It's fine to say that this is a falsified prediction"
I wouldn't even say it's falsified. The context was: "it only took a few decades to get from [chess computer can make legal chess moves] to [chess computer beats human grandmaster]. I doubt that going from "understanding English" to "controlling the Earth" will take that long."
So insofar as we believe ASI is coming in less than a few decades, I'd say EY's prediction is still on track to turn out correct.
NEW EDIT: After reading three giant history books on the subject, I take back my previous edit. My original claims were correct.
Could you edit this comment to add which three books you're referring to?
One of the more interesting dynamics of the past eight-or-so years has been watching a bunch of the people who [taught me my values] and [served as my early role models] and [were presented to me as paragons of cultural virtue] going off the deep end.
I'm curious who these people are.
We should expect regression towards the mean only if the tasks were selected for having high "improvement from small to Gopher-7". Were they?
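A toy simulation of the statistical point (the noise model and numbers are purely illustrative assumptions, not the actual benchmark data): if tasks are picked for having shown an unusually large first jump, their next jump regresses toward the mean; if they aren't selected on that jump, there's no systematic drop.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks = 1000

# Each task has a true underlying improvement rate; each measured jump adds noise.
true_rate = rng.normal(loc=1.0, scale=0.5, size=n_tasks)
first_jump = true_rate + rng.normal(scale=0.5, size=n_tasks)   # e.g. small model -> bigger model
next_jump = true_rate + rng.normal(scale=0.5, size=n_tasks)    # the following scale-up

# Case 1: tasks selected for having shown a big first jump -> regression to the mean.
selected = first_jump > np.quantile(first_jump, 0.9)
print("selected tasks, first jump:", first_jump[selected].mean())
print("selected tasks, next jump: ", next_jump[selected].mean())

# Case 2: no selection on the first jump -> no systematic drop.
print("all tasks, first jump:", first_jump.mean())
print("all tasks, next jump: ", next_jump.mean())
```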
The reasoning was given in the preceding comment: that we want fast progress in order to reach immortality sooner.
"But yeah, I wish this hadn't happened."
Who else is gonna write the article? My sense is that no one (including me) is starkly stating publicly the seriousness of the situation.
"Yudkowsky is obnoxious, arrogant, and most importantly, disliked, so the more he intertwines himself with the idea of AI x-risk in the public imagination, the less likely it is that the public will take those ideas seriously"
I'm worried about people making character attacks on Yudkowsky (or other alignment researchers) like this. I think the people who believe they can probably solve alignment by just going full-speed ahead and winging it are the arrogant ones. Yudkowsky's arrogant-sounding comments about how we need to be very careful and slow are negligible in comparison. I'm guessing you agree with this (not sure), and we should be able to criticise him for his communication style, but I am a little worried about people publicly undermining Yudkowsky's reputation in that context. This seems like not what we would do if we were trying to coordinate well.
"We finally managed to solve the problem of deceptive alignment while being capabilities competitive"
??????
Minor quibble: It's a bit misleading to call B "experience curves", since it is also about capital accumulation and shifts in labor allocation. Without any additional experience/learning, if demand for candy doubles, we could simply build a second candy factory that does the same thing as the first one, and hire the same number of workers for it.
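A toy calculation of the distinction (the candy-factory numbers and the ~20% learning rate are assumptions for illustration only): duplicating capital doubles output at an unchanged unit cost, whereas an experience curve means unit cost falls with cumulative production.

```python
# Toy comparison: doubling output via capital duplication vs. an experience curve.
base_unit_cost = 1.00      # assumed cost per unit of candy at current cumulative output
learning_exponent = 0.32   # assumed: ~20% cost drop per doubling, since 2 ** -0.32 ≈ 0.80

# (A) Capital accumulation: build a second identical factory, hire the same workers.
# Output doubles, but cost per unit is unchanged -- no learning involved.
duplication_unit_cost = base_unit_cost

# (B) Experience curve: unit cost falls as cumulative production doubles.
experience_unit_cost = base_unit_cost * 2 ** (-learning_exponent)

print(f"duplication:      {duplication_unit_cost:.2f} per unit")   # 1.00
print(f"experience curve: {experience_unit_cost:.2f} per unit")    # ~0.80
```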