https://www.nature.com/articles/d41586-024-03129-3 this is separate research. It looks like this will happen, and it will come from somewhere other than the west.
Tech available in 2-5 years for 150k (or 50k in India?) sounds good to me. I know someone who would 100% do that today if the offer were available. I'm going to follow your blog for news. Keep up the good work, plenty of people would really like to see you succeed.
Imagine the dumbest person you've ever met. Is the robot smarter and more capable? If yes, then there's a strong case that it's human level.
I've met plenty of 'human level intelligences' that can't write, can't drive, and can't do basic math.
Arguably, I'm one of them!
Historically, everyone who had shoes had a pair of leather shoes, custom sized to their feet by a shoemaker. These shoes could be repaired and the 'lasts' of their feet could be used to make another pair of perfectly fitting shoes.
Now shoes come in standard sizes, are usually made of plastic, and are rarely repairable. Finding a pair of custom fitted shoes is a luxury good out of reach of most consumers.
Progress!
If you're interested in an engineering field and worry about technological unemployment due to AI, just play with as many different chatbots as you can. Ask engineering questions related to that field, edge closer to 'engineer me a thing using this knowledge that could hurt a human', then wait for the 'trust and safety' staff to delete your conversation thread and overreact by censoring the model from answering that type of question.
I've been doing this for fun with random technical fields. I'm hoping my name is on lists and they're specifically watching my chats for stuff to ban.
Most 'safety' professions in mechanical engineering, mining, and related fields are safe, because AI systems will refuse to reason about whether an engineered system can hurt a human.
Same goes for agriculture, slaughterhouse design, etc.
I'm waiting for the inevitable ammonium nitrate explosion where the safety investigation finds 'we asked the AI whether a pile of AN that big was an explosion hazard, and it said something about refusing to help build bombs, so we figured it was fine.'
States that have nuclear weapons are generally less able to successfully make compellent threats than states that do not. Citation: https://uva.theopenscholar.com/todd-sechser/publications/militarized-compellent-threats-1918%E2%80%932001
The USA was the dominant industrial power in the post-war world; was this obvious and massive advantage 'extremely' enhanced by its possession of nuclear weapons? As a reminder, these weapons were not decisive (or even useful) in any of the wars the USA actually fought, and the USA has been repeatedly and continuously challenged by non-nuclear regional powers.
Sure, AI might provide an extreme advantage, but I'm not clear on why nuclear weapons do.
What extreme advantages were those? What nuclear age conquests are comparable to the era immediately before?
So you asked anthropic for uncensored model access so you could try to build scheming AIs, and they gave it to you?
To use a biology analogy, isn't this basically gain of function research?
Food companies are adding sesame (an allergen for some) to foods so that they declare it as an ingredient rather than be held responsible for guaranteeing the product contains no sesame. Alloxan is used to whiten dough (https://www.sciencedirect.com/science/article/abs/pii/S0733521017302898, for the commenter who said this was false) and is also used to induce diabetes in the lab (https://www.sciencedirect.com/science/article/abs/pii/S0024320502019185). RoundUp is in nearly everything.
https://en.m.wikipedia.org/wiki/List_of_withdrawn_drugs#Significant_withdrawals Plenty of things keep getting added to this list.
We have never made a safe human. CogEms would be safer than humans, though, because they won't unionize and can be switched off when no longer required.
Edit: sources added for the x commenter.
That $769 number might be more relevant than you expect for college undergrads participating in weird psychology research studies for $10 or $25, depending on the study.