Thank you for your response!
I think I disagree with your take, but with high uncertainty. The counterfactual world I'm describing will only exist in parallel with the real world, where we get increased spread of the technology plus improvements in it.
I agree with your assessment of where the value comes from, but I think office workers spend a lot of time on tasks that could be automated. Since time is fungible, workers can use the saved time on the work you say drives value.
Yes, that makes sense to me. I already own some TSMC and Intel. Samsung also has its own fabs, I believe, so it could be another alternative. I suspect the early AGI businesses will just use existing, already-deployed hardware though, so I'd expect the hardware manufacturing stocks to rise with a delay.
These are my half-baked thoughts on this, putting aside alignment and AI risk completely:
I am betting that large returns will come to those that either own the models underlying AGI (if they are hard or expensive to recreate) and supply them to others via a paid API, or build compelling products using AGI. The second category of companies will probably be startups that pop up once we have AGI, so there's no way to invest in them right now unless you can invest indirectly via a VC fund you think is likely to fund those startups.
For the first category, I think OpenAI and DeepMind are the two most likely candidates. DeepMind you can invest in via Alphabet, but OpenAI is private. However, Microsoft has invested in OpenAI and has some sort of agreement that lets it supply OpenAI models via Azure. Although Microsoft's current market cap is largely not driven by its stake in, and agreement with, OpenAI, I think AGI is a large enough breakthrough that it would quickly come to drive much more of Microsoft's value once/if it is created.
Therefore I've bought a bunch of Alphabet and Microsoft stock.
He doesn't go into this in the book, but I am fairly sure that Harris would agree with your consequentialist take of "acting as if they had free will". I have heard him speak on this matter in a few of his podcast episodes around "the hard problem of consciousness" with Dennett, Chalmers and a neurosurgeon whose name I can't find (I remember him being British).
As I understand him, his view is to not see criminals (or anyone) as "morally bad" for whatever they have done, but to move directly on to figuring out the best possible way to avoid bad things happening again, both to their potential future victims and to themselves. I think he sees this as an important starting point in order to be able to be consequentialist about it at all.
For example, if the best way to avoid criminals re-offending turns out to be to put them into a cushy, luxurious rehabilitation program, then in order to even consider this as an option, we must remove our sense of needing to punish them for being morally reprehensible.
Helpful resource for whoever ends up doing this: Contraceptive Technology. It's a huge book that summarises almost all effectiveness studies that have been done on contraceptives, including the definitions of perfect and typical use (very important when comparing contraceptives). It also has detailed summaries of side effects, medical interactions, description of method of action and well researched "advantages" and "disadvantages" sections — it's basically what doctors use to decide how to prescribe birth control.
Source: I have used this book myself in research, I work for a birth control app company.
Good points!
Yes, this snippet is particularly nonsensical to me:
an AI system could be "superintelligent" without any basic humanlike common sense, yet while seamlessly preserving the speed, precision and programmability of a computer
It sounds like their experience with computers has involved a lot of "basic humanlike common sense", which would be a pretty crazy experience. When I explain what programming is like to kids, I usually say something like: "The computer will do exactly, exactly, exactly what you tell it to, extremely fast. You can't rely on any basic sense-checking, common sense, or understanding from it. If you can't define what you want specifically enough, the computer will fail in a (to you) very stupid way, very quickly."
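To make that concrete, here's a minimal Python sketch (my own toy example, not from the quoted text): the same literal instruction produces either a sensible answer or a "stupid" one, depending entirely on a detail you never thought to specify.

```python
# Toy illustration: the computer does exactly what it is told,
# with no sense-checking of what you actually meant.
def add(a, b):
    return a + b

print(add(2, 3))      # 5: what you meant
print(add("2", "3"))  # '23': exactly what you said (string concatenation)
```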
Great and fair critique of this paper! I also enjoyed reading it and would recommend it just for the history write-up alone.
What do you think is the underlying reason for the bad reasoning in fallacy 4? Is the orthogonality thesis particularly hard to understand intuitively, or has it been covered so badly by the media, so often, that the broad consensus of what it means is now wrong?
I agree, and I look forward to seeing how far it goes!