Some hypotheses, much simplified, and each really valuable only in certain possible future states of the world:
Last but not least:
3. Mindfulness. Embrace how much of a cosmic joke our individual lives and self-centered aspirations represent - or something of that sort.
Fully on board with #3. Change your attitude about what constitutes "a decent life", such that pretty much all existence is positive-value, and moments of joy are worth much more than weeks of depression cost.
#1 and #2 are less obvious. One of the reasons it's called the singularity is that EVERYTHING becomes hard to predict. A lot of people are assuming that the concepts of ownership and financial capital remain consistent enough that investments now retain their power after the big changes. I think they're mostly wrong - i...
I would say: do everything possible to stop GAI. We might not win, but it is better to have tried. We might even succeed.
I mean, you're not alone in this. Lots of people will have the same problems. So one possible direction is to participate in movements for AI restrictions, job guarantees, basic income and so on.
The actions you suggest might represent a laudable contribution to a public good, but they don't directly answer the (self-concerned) question the OP raises. Given the largeness of the world and the public-goods nature of the projects you mention, his own action will only marginally change the probability of a better structure of society in general. That may still be worth it from a fully altruistic standpoint, but it has asymptotically zero probability of improving his personal material welfare.
(If I may, an analogy: One could compare it to a situation where...
I bought index funds. I would say this has the advantage of being robust to AGI not happening, though with birth rates as they are I am not so sure that's true! If we survive, Hanson's economic-growth calculations predict the economy will start doubling every few months. Provided the stock market can capture some of this, learning to live on very little and putting everything in index funds should be fine even with modest amounts of invested capital. (You really want to avoid burning your capital in this future, so live as modestly as possible, both so you can acquire capital and so you spend as little as possible until the market prices in such insane growth.) However, I doubt property rights will be respected.
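To make the arithmetic behind that claim concrete, here is a minimal sketch. The doubling period and the fraction of growth the market captures are illustrative assumptions of mine, not figures from Hanson's work:

```python
def capital_after(initial, doubling_months, capture, years):
    """Grow `initial` capital assuming the economy doubles every
    `doubling_months` months and equities capture a fraction
    `capture` of each doubling's growth (all assumed figures)."""
    periods = years * 12 / doubling_months
    # Each doubling adds +100% to the economy; capturing a fraction
    # `capture` of that growth multiplies capital by (1 + capture).
    return initial * (1 + capture) ** periods

# Illustrative only: $100k, economy doubling every 6 months,
# market capturing half of each doubling, over 5 years.
print(capital_after(100_000, 6, 0.5, 5))  # ~5.8 million
```

Even under these deliberately conservative capture assumptions, modest invested capital compounds absurdly fast, which is the point of the comment: the binding constraint becomes whether the claim on that capital is honored, not its size.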
I think this post has decent financial advice if you believe in near-term GAI.
https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi
Zero human capital? I'm sorry to read that you might think this, but surely it's simply not true. Personally, if I were in your situation I would invest those funds in myself, perhaps in a relatively future-proof vocation: something physically creative but difficult to replace (in the short term). I'm certainly no expert, but I believe that skills such as dance teaching, hairdressing, building renovation, antique restoration, watch and clock repair, or blacksmithing might retain their utility well into the future.
How can I maximise my chances of having a decent life, given the very high likelihood that GAI will make all our intellectual labour useless in the next few years?
For example, I graduated from a good university a few years ago and am working as a software engineer in a multinational company, but my capabilities are middling at best. I am distressed that I will likely not be able to afford a house in the few years left before GAI renders me unable to earn a living. I am not a genius; it is very unlikely that I can join an AI research company and contribute meaningfully to AI research.
Assuming I have a small amount of money (100-200k) I can set aside, should I attempt to, for example, invest in companies that will likely be able to monetise GAI?
Or is there something else I should be doing to prepare for the time I have basically zero human capital?
Should I attempt to move to (and get citizenship from) a country with a larger amount of natural resources, assuming that human capital will become worthless quickly?
Is it reasonable to find potential outs (e.g. physician-assisted death) in case we cannot earn a living (and if unfriendly AI is basically confirmed)?