It might be that drugs will help here, but even if you're on drugs, I think brain training over long periods of time is worth investing in. Some examples which I have put effort into:
There's a simple, terrible answer: because studies are hugely expensive, enormously time-consuming, and require multiple very slow iterations to get everything through committee in a way that our institutions will accept. Consider:
- Nobody is funding it. Doing it in a way the medical establishment would accept costs literally hundreds of millions of dollars, and even then the result would be challenged.
- It would take thousands of man-hours. Ain't nobody got time for that.
- It would take 3+ years to get everything approved and done properly; otherwise the medical establishment won't accept it. Actually, they probably still won't.
- By the time you're done, it's a virtual certainty that the virus will have run its course and the result will be useless.
IMO, the above is more than sufficient. The incentives were not there - or rather, the incentives were not large enough to justify the cost, and they were further discounted by how little the information would be worth a year after the pandemic was over.
I very much believe that aligned AGI isn't going to just solve our problems overnight. It would have to be on the absolute far end of capability for that, IMO. Less-than-arbitrarily-powerful AGI is going to take time (years to decades) to figure out enough about biology to upload or fix our organic hardware while keeping us intact. Even for me, with my rather lax requirements for continuity (not required) and hardware platform (any), I expect it to take years if not decades.
Humans, barring extinction, will eventually solve aging. My best guess at the moment is that we'll hit longevity escape velocity around 2050; this is really inconvenient for me, because I am already old. My odds of dying due to organic hardware platform failure are IMO higher than my odds of dying from AGI ruin in that time.
So from my standpoint, investing in platform maintenance (a healthy lifestyle) makes sense. Platform failure is a substantial chunk of my probability space, and I'm old enough that there are quality-of-life benefits to be had as well.
If you're only 20, AGI ruin will probably be a larger part of your probability space than platform failure. YMMV.
Unfortunately, political topics are like radiation: they pollute the nearby ground too. Peterson is radioactive in this regard, and using him as an example makes your article radioactive as well.
Analyzing a less radioactive expert might have been a better idea - perhaps someone like Peter Attia (I think he's less radioactive?).
I'm a tech worker. I work 40-70 hours a week, depending on incident load. Nobody I work with or see on a regular basis works less than 40 hours a week, and some work substantially more than that.
My most cognitively productive hours are the four hours in the morning, but there's plenty of lower-effort but still important organizational work to fill out the afternoons. I think a good fraction of my coworkers are like me and don't actually need the job anymore, but we still put forth effort.
I think one of the major missing pieces of your article is "social status pressure". Most people play the status game; they struggle to get ahead of their neighbors, even when it doesn't make any sense. They work extra hours to afford that struggle. They demand more than the basic necessities and comforts, because that's how you signal status. It's pointless and stupid, but IMO it's one of the biggest issues.
As a reductionist, I view the universe as nothing more than particles/forces/quantum fields/a static event graph. Everything that is or was comes from simple rules down at the bottom. I agree with Eliezer regarding many-worlds versus Copenhagen.
With this as my frame of reference, Searle's argument is trivially bogus, as every person (including myself) is obviously a Chinese Room. If a person can be considered 'conscious', then so can some algorithm running on a Turing machine of sufficient size. If no Turing machine program exists that can be considered conscious when run, then people aren't conscious either.
I've never needed more than this, and I find the Chinese Room argument to be one of those areas where philosophy is an unambiguously 'diseased discipline'.
I think you might be missing something more obvious here: tech has a huge amount of slack when it comes to money. If I were running a tech event of a similar size to the one you described, I wouldn't bother charging, because it would be a waste of my time. When you make half a million dollars a year, funding something like that yourself comes out of your fun budget; you don't really even think twice about it.
Yoga and new age groups though? Not nearly as flush with cash.
It's a risk hedge, it has social benefits, and it has capacity/functionality benefits.
Risk hedge: If AI doesn't kill us, there will be a time lag between now and when we can transform ourselves/obsolete our biology. Maintaining your current biological hardware improves your odds of making it to that transformation. If you're old (like me), this matters a lot.
Social benefits: Being in shape changes how you look, and that improves how other people treat you. Depending on how social you are, this may have a lot or very little impact.
Capacity/functionality: If you can run a mile, you can walk a mile without getting tired. You can climb stairs more easily, you can rush across an airport to catch a flight, and you can carry your groceries in one trip instead of two. As the saying goes: if you're deadlifting 250 pounds, you're not going to throw out your back picking up your kid.
For me, the big one is hardware maintenance until I can upload. Uploading is at least two decades out (probably more like five). My odds of getting there are materially better if I put effort into hardware maintenance now.