This is a special post for quick takes by Anders Lindström. Only they can create top-level comments.

I am so thrilled! Daylight saving time got me to experience (kind of) the Sleeping Beauty problem firsthand.

Last night we in Sweden set our clocks back one hour, from 03.00 to 02.00, and went from "summertime" to the dreaded "wintertime". It's dreaded because we know what comes with it: ice storms and polar bears in the streets...

Anyways, I woke up in the middle of the night and reached for my phone to check what time it was. It was 02.50. Then it struck me: am I experiencing the first 02.50 or the second 02.50 this night? That is, have I already slept past 03, had the clock jump back to 02 (which it does automatically on the phone), and then slept until the new 02.50? Or am I at the first 02.50, so that in 10 minutes, at 03, the clock will switch back to 02?

It was a very dizzying thought. I could not for the life of me say which. There was nothing in the dark that could give me any indication whether I was experiencing the first or the second 02.50. Then, with my thoughts spinning, I slowly waited for the clock on my phone to turn 03. When it did, it did not go back to 02: I had experienced the second 02.50 that night.
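For fun, here is a minimal simulation of the predicament. The modeling assumption is mine (and made up for illustration): the wake-up moment is uniformly distributed over the two real hours that both display as 02.00-02.59. It just confirms the intuition that the phone display carries no evidence about which pass you are in:

```python
import random

# Minimal sketch, under the (assumed) model that the wake-up moment is
# uniform over the two real hours that both display as 02.00-02.59.
# Real minutes 0-60 are the first pass, 60-120 the second pass.

trials = 1_000_000
first_pass = 0
saw_0250 = 0
for _ in range(trials):
    real_minute = random.uniform(0, 120)  # a random real wake-up moment
    displayed = real_minute % 60          # minutes past 02.00 on the phone
    if 50 <= displayed < 51:              # the phone reads 02.50
        saw_0250 += 1
        if real_minute < 60:              # still the first 02.50
            first_pass += 1

print(f"P(first 02.50 | phone shows 02.50) = {first_pass / saw_0250:.3f}")  # ~0.500
```

Under that assumption the posterior really is 50/50, which is exactly why nothing in the dark could settle it either way.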

Now that AGI seems set to arrive in the near term (2-5 years) and ASI is possibly 5-10 years away (i.e., a few thousand days), what do you personally think will help you stay relevant and in demand? What do you read/watch/study/practice? Which skills are you focusing on sharpening? What "plan B" and "plan C" do you have? Which fields of work/study would you recommend others steer away from ASAP?

Asking for a friend...

[-]nim

Plan B, for if the tech industry gets tired of me but I still need money and insurance, is to rent myself to the medical system. I happen to have appropriate licensure to take entry-level roles on an ambulance or in an emergency room, thanks to my volunteer activities. I suspect that healthcare will continue requiring trained humans for longer than many other fields, due to the depth of bureaucracy it's mired in. And crucially, healthcare seems likely to continue hurting for trained humans willing to tolerate its mistreatment and burnout.

Plan C, for if SHTF all over the place, is that I've got a decent amount of time's worth of food and water and other necessities. If the grid, supply chains, cities, etc. go down, that's runway to bootstrap toward some sustainable novel form of survival.

My plans are generic to the impact of many possible changes in the world, because AI is only one of quite a lot of disasters that could plausibly befall us in the near term.

Thanks for your input. I really like that you pointed out that AI is just one of many things that could go wrong; perhaps people like me and others are so caught up in the p(doom) buzz that we don't see all the other stuff.

But I wonder one thing about your Plan B, which seems rational: what if a lot of people have entry-level care work as their back-up? How will you stave off that competition? Or do you think it's a matter of avoiding loss aversion and getting out of your Plan A game early, not lingering (if some pre-stated KPI of yours goes above or below a certain threshold), so you can grab one of those positions?

[-]nim

"entry-level" may have been a misleading term to describe the roles I'm talking about. The licensure I'd be renting to the system takes several months to obtain, and requires ongoing annual investment to maintain once it's acquired. If my whole team at work was laid off and all my current colleagues decided to use exactly the same plan b as mine, they'd be 1-6 months and several thousand dollars of training away from qualifying for the roles where I'd be applying on day 1.

Training time aside, I am also a better candidate than most because I technically have years of experience already from volunteering. Most of the other volunteers are retirees, because people my age in my area rarely have the flexibility in their current jobs to juggle work and volunteering.

Then again, I'm rural, and I believe most people on this site are urban. If I lived in a more densely populated area, I would have less opportunity to keep up my licensure through volunteering, and also more competition for the plan b roles. These roles also lend themselves well to a longer commute than most jobs, since they're often shifts of several days on and then several days off.

The final interesting thing about healthcare as a backup plan is its intersection with disability, in that not everyone is physically capable of doing the jobs. There are the obvious issues of lifting etc., but more subtly, people can be unable to tolerate the required proximity to blood, feces, vomit, and all the other unpleasantness that goes with people having emergencies. (One of my closest friends is all the proof I need that fainting at the sight of blood is functionally a physical rather than mental problem: we do all kinds of animal care tasks together, which sometimes involve blood, and the only difference between our experiences is that they can't look at the red stuff.)

I spend a small fraction of my time planning for the scenario of being in charge of a large "company" of AI agents of varying speeds and capabilities. Specifically, if many other people also have recently been granted their own AI companies, how might I differentiate myself? How can I accomplish economically valuable tasks and also push for the survival of humanity?

That is a very interesting perspective and mindset! In that scenario, do you think you will focus on value created by solving technical problems, or on "softer" problems that are more centered on human wellbeing?

I have yet to read a post here on LW where someone writes about a frontier model that has solved a "real" problem: one the person had genuinely tried to solve for a long(-ish) time but failed at, and then the AI solved it for them. A research problem, a coding problem, a math problem, a medical problem, a personal problem, etc. Has anyone experienced this yet?

I don't have any first- or second-hand experience with that happening. I see occasional articles about protein and materials research that LLMs have massively accelerated, but my suspicion is that the main value currently is NOT cutting-edge significant problem solving.

The "mundane utility" section of Zvi's writeups have very good examples of what it IS currently good at now.  There's yet a long way to go to handle long-running multi-step creative analysis that the top few percent of humans are engaged in.  What is the shape of the curve (both of capabilities and of "intelligence level" metrics) is currently not known.

Thanks for pointing me to Zvi's work.

You mean specifically that an LLM solved it? Otherwise DeepMind's work will give you many examples. (Although there have been surprisingly few breakthroughs in math yet.)

Yes, I meant an LLM, in the context of a user who fed in a query describing his or her problem and got a novel solution back. It is always debatable what a "real" or "hard" problem is, but as a lower bound I mean something that would make people here at LW raise an eyebrow or two if an LLM solved it. Otherwise there are, as you mention, plenty of problems that "custom" AI/machine learning models have been solving for a long time.

I am starting to believe that military use of AI is perhaps the best and fastest way to figure out whether large-scale AI alignment is possible at all. Since the military will actively seek to develop AIs that kill humans, they must also figure out how not to kill ALL humans. I hope the military will be open with their successes and failures about what works and what does not.

The heat is on! It seems that the export restrictions on Nvidia GPUs have had little to no effect on Chinese companies' ability to make frontier AI models. What will the US's next move be now? https://kling-ai.com/

GPT-4-level models can be trained with mere thousands of GPUs. Export restrictions on a product that's otherwise on the open market aren't going to work at this scale, and replacement with inferior accelerators remains feasible. But each GPT generation is about 30x the compute of the previous one, and procuring 100K or 3000K GPUs (or many more of their inferior alternatives) is more plausibly a practical impossibility.
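As a back-of-the-envelope sketch of that scaling argument (the 5,000-GPU baseline is a hypothetical illustrative number; only the roughly 30x-per-generation ratio comes from the claim above):

```python
# Hypothetical sketch: assume a GPT-4-level run takes ~5,000 GPUs at fixed
# training time, and each new GPT generation needs ~30x the compute.

baseline_gpus = 5_000
scale_per_generation = 30

for gen in range(3):
    gpus = baseline_gpus * scale_per_generation ** gen
    print(f"GPT-4 + {gen} generation(s): ~{gpus:,} GPUs")

# GPT-4 + 0 generation(s): ~5,000 GPUs      (attainable despite restrictions)
# GPT-4 + 1 generation(s): ~150,000 GPUs    (the ~100K scale mentioned above)
# GPT-4 + 2 generation(s): ~4,500,000 GPUs  (the millions-of-GPUs regime)
```

One 30x step already moves the requirement from something that can be worked around to industrial-scale procurement, which is where export controls plausibly start to bite.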

Huawei claims it is catching up with Nvidia: https://www.huaweicentral.com/ascend-910b-ai-chip-outstrips-nvidia-a100-by-20-in-tests-huawei/

A very serious negative update. And they practically say: we read the Sora paper and then replicated it.

As I wrote in another post, this can easily turn into another TikTok tool, where dumb westerners spill personal info into what amounts to a Chinese intelligence gathering apparatus.

Just wait until more countries that do not share western values get their hands on tools like this. I think the only way that social media can survive is mandatory ID. If Airbnb can do it, I am sure Meta, X, Snap, etc. can do it. And... call me old fashioned, but I'd rather not share ANY personal information with ANY intelligence service.

Congratulations to Geoffrey Hinton and John Hopfield! 
I wonder if Roko's basilisk will spare the Nobel Prize committee now: https://www.nobelprize.org/prizes/physics/2024/press-release/