Food costs are not even slightly comparable. When I was a kid (in the UK) they ran national TV advertising campaigns for brands of flour, sugar and sliced bread. Nowadays the only reason these things aren't effectively free is that they take up valuable shelf space. Instead people are buying imported fruit and vegetables and ready-meals. It's like comparing the price of wood in the 1960s to the price of a fitted kitchen today.
Classic SciFi at its best :-)
Large groups of people can only live together by forming social hierarchies.
The people at the top of the hierarchy want to maintain their position both for themselves AND for their children (which is a pretty good definition of a good parent).
Fundamentally, the problem is not really about resources: it's a zero-sum game for status, and money is just the main indicator of status in the modern world.
The common solution to the problem of first-timers is to make the first time explicitly free.
This also applies to clubs with fixed buy-in costs but benefits unknown to the newbie, and it works well whenever the cost is relatively small (as it should be if it is optional). If they don't like the price, they won't come again.
I think we can all agree with the thoughts about conflationary alliances.
On consciousness, I don't see a lot of value here apart from demonstrating the gulf in understanding between different people. The main problem I see, and this is common to most discussions of word definitions, is that only the extremes are considered. In this essay I see several comparisons of people to rocks, which is as extreme as you can get, and a few comparing people to animals, which is slightly less so, but nothing at all about the real fuzzy cases that we need to probe to decide what we really mean by consciousness, i.e. comparing different human states:
Are we conscious when we are asleep?
Are we conscious when we are knocked out or anaesthetised?
Are we conscious when we take drugs?
Are we conscious when we play sports or drive cars? If we value consciousness so much, why do we train to become experts at such activities, thereby reducing our level of consciousness?
If consciousness is binary then how and why do we, as unconscious beings (sleeping or anaesthetised), switch to being conscious beings?
If consciousness is a continuum, then how can anyone reasonably rule out animals, AI, or almost anything more complex than a rock as conscious?
If we equate consciousness with moral value and ascribe moral value to that which we believe to be conscious, why do we not call out the obvious circular reasoning?
Is it logically possible to be both omniscient and conscious? (If you knew everything, there would be nothing to think about)
Personally I define consciousness as System 2 reasoning and, as such, I think it is ridiculously overrated. In particular people always fail to notice that System 2 reasoning is just what we use to muddle through when our System 1 reasoning is inadequate.
AI can reasonably be seen as far worse than us at System 2 reasoning but far better than us at System 1 reasoning. We overvalue System 2 so much precisely because it is the only thinking that we are "conscious" of.
Before we can even start to try to align AIs to human flourishing, we first need a clear definition of what that means. This has been a topic accessible to philosophical thought for millennia and yet still has no universally accepted definition, so how can you consider AI alignment helpful? Even if we could all agree on what "human flourishing" meant, you would still have the problem of lock-in, i.e. our AI overlords will never allow that definition to evolve once they have assumed control. Would you want to be trapped in the Utopia of someone born 3000 years ago? Better than being exterminated, but still not what we want.
As a counterargument, consider mapping our ontology onto that of a baby. We can, kind of, explain some things in baby terms and, to that extent, a baby could in principle watch the neurons corresponding to concepts in its own ontology light up when we do or say things related to those concepts. At the same time, our true goals remain utterly alien to the baby.
Alternatively, imagine that you were sent back to the time of the pharaohs and had a discussion with Cheops/Khufu about the weather and the forthcoming harvest. Even trying to explain it in terms of chaos theory, CO2 cycles, plant viruses and Milankovitch cycles would probably get you executed, so you'd probably say that the sun god Ra was going to provide a good harvest this year. Cheops, reading your brain, would see that the neurons for "Ra" were activated as expected and be satisfied that your ontologies matched in all the important places.
I've heard much about the problems of misaligned superhuman AI killing us all but the long view seems to imply that even a "well aligned" AI will prioritise inhuman instrumental goals.
Have I missed something, or is everyone ignoring the obvious problem with a superhuman AI with a potentially limitless lifespan? It seems to me that such an AI, whatever its terminal goals, must, as an instrumental goal, prioritise seeking out and destroying any alien AI. In simple terms, the greatest threat to it tiling the universe with tiny smiling human faces is an alien AI set on tiling the universe with tiny smiling alien faces, and in a race for dominance every second counts.
The usual arguments about exponentially discounting the far future do not seem appropriate for an immortal intelligence.
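To make the standard discounting argument explicit (a minimal sketch; the notation and the bounded-reward assumption are mine, not from the original discussion): with discount factor $\gamma \in (0,1)$ and per-step rewards $u_t$ bounded by $u_{\max}$, the value of the whole future is

$$V = \sum_{t=0}^{\infty} \gamma^t u_t \le \frac{u_{\max}}{1-\gamma}$$

The weight $\gamma^t$ on a threat $t$ steps away decays exponentially, which is what licenses ignoring rivals a million years out. But an agent that expects to exist at every future $t$ has no principled reason to choose $\gamma < 1$, and as $\gamma \to 1$ the weights stop decaying and the bound diverges: the far future dominates rather than vanishes.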
The words stand for abstractions, and abstractions suffer from the abstraction uncertainty principle: an abstraction cannot be simultaneously very useful/widely applicable and very precise. The more useful a word is, the less precise it will be, and vice versa. Dictionary definitions are a compromise: they never use the most precise definitions even when such are available (e.g. for scientific terms) because such definitions are not useful for communication between most users of the dictionary. For example, if we defined red to be light with a frequency of exactly 430 THz, it would be precise but useless, but if we were to define it as a range then it would be widely useful but would almost certainly overlap with the ranges for other colours, thus leading to ambiguity.
(I think EY may even have a wiki entry on this somewhere)
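A toy sketch of that trade-off in code (the frequency ranges and names below are mine, invented for illustration; they are not real colorimetric boundaries):

    # Toy illustration of precise vs. useful colour definitions.
    # All numbers are invented for illustration only.

    PRECISE_RED_THZ = 430.0  # "red" defined as exactly 430 THz: maximally precise

    # Useful definitions cover ranges, and neighbouring ranges overlap.
    COLOUR_RANGES = {
        "red":    (400.0, 480.0),
        "orange": (470.0, 510.0),
    }

    def colours_of(freq_thz):
        """Return every colour word whose range covers this frequency."""
        return [name for name, (lo, hi) in COLOUR_RANGES.items()
                if lo <= freq_thz <= hi]

    print(colours_of(430.0))  # ['red']            -- unambiguous
    print(colours_of(475.0))  # ['red', 'orange']  -- ambiguous where the ranges overlap
    print(430.0000001 == PRECISE_RED_THZ)  # False: the exact definition matches almost nothing

The exact-frequency definition is precise but matches essentially no light you will ever measure; the range definitions are usable but ambiguous precisely where they overlap.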