Has paper now been largely displaced by writing on a blackboard/whiteboard, and taking photos of what's worth keeping before erasing and rewriting?
Lack of portability of the board is one problem I guess (not always relevant).
I think much of the discussion of homeschooling is focused on elementary school. My impression is that some homeschooled children do go to a standard high school, partly for more specialized instruction.
But in any case, very few high school students are taught chemistry by someone with a Ph.D. in chemistry and 30 years of work experience as a chemist. I think it is fairly uncommon for a high school student to have any teachers with Ph.D.s in any subject (relevant or not). If most of your teachers had Ph.D.s or other degrees in the subjects they taught, then you were very fortunate. (My daughter is in fact similarly fortunate, but I know perfectly well that her type of private school cannot be scaled to handle most students.)
And if we're going to discuss atypical situations, I do in fact think that I would be competent to teach all those subjects at a high school level.
I'm baffled as to what you're trying to say here. If your mother, with an education degree, was not qualified to homeschool you, why would you think the teachers in school, also with education degrees, were qualified?
Are you just saying that nobody is qualified to teach children? Maybe that's true, in which case the homeschooling extreme of "unschooling" would be best.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that
Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.
Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).
You say: "I'll use 'capital' to refer to both the stock of capital goods and to the money that can pay for them."
It seems to me that this aggregates quite different things, at least when looking at the situation in terms of personal finance. Consider four people who have the following investments, which, let's suppose, are currently of equal value:

- a pile of money (fiat currency, or perhaps crypto currency)
- shares in a company operating a nuclear power plant
- shares in a nuts and bolts company
- shares in a recruitment company
These are all "capital", but I think they will fare rather differently in an AI future.
As always, there's no guarantee that the money will retain its value - that depends, as usual, on central bank actions - and I think it's especially likely to lose its value in an AI future (cryptocurrencies as well). Why would an AI want to transfer resources to someone just because they hold some fiat currency? Surely AIs have some better way of coordinating exchanges.
The nuclear power plant, in contrast, is directly powering the AIs, and should be quite valuable, since the AIs are valuable. This assumes, of course, that the company retains ownership. It's possible that it instead ends up belonging to whatever AI has the best military robots.
The nuts and bolts company may retain and even gain some value when AI dominates, if it is nimble in adapting, since the value of AI in making its operations more efficient will typically (in a market economy) be split between the AI company and the nuts and bolts company. (I assume that even AIs need nuts and bolts.)
The recruitment company is toast.
Indeed. Not only could belief prop have been invented in 1960, it was invented around 1960 (published 1962, "Low density parity check codes", IRE Transactions on Information Theory) by Robert Gallager, as a decoding algorithm for error correcting codes.
In 1996, I recognized that Gallager's method was the same as Pearl's belief propagation (MacKay and Neal, "Near Shannon limit performance of low density parity check codes", Electronics Letters, vol. 33, pp. 457-458).
This says something about the potential for AI to speed up research simply by linking known ideas (even if it's not really AGI).
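For concreteness, here is a minimal sketch of the decoding idea, in the spirit of Gallager's sum-product algorithm (the parity-check matrix, codeword, and channel parameter below are made-up toy choices, not anything from the papers cited):

```python
# Toy sum-product (belief propagation) decoder for a small parity-check
# code over a binary symmetric channel, working with log-likelihood ratios.
import math

H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]   # hypothetical parity-check matrix

def decode(llr, H, iters=20):
    m, n = len(H), len(H[0])
    # variable-to-check messages, initialized to the channel LLRs
    v2c = {(i, j): llr[j] for i in range(m) for j in range(n) if H[i][j]}
    for _ in range(iters):
        # check-to-variable messages: tanh rule over the other neighbours
        c2v = {}
        for i in range(m):
            cols = [j for j in range(n) if H[i][j]]
            for j in cols:
                prod = 1.0
                for k in cols:
                    if k != j:
                        prod *= math.tanh(v2c[(i, k)] / 2)
                c2v[(i, j)] = 2 * math.atanh(prod)
        # tentative bit decisions from the total LLR at each variable
        total = [llr[j] + sum(c2v[(i, j)] for i in range(m) if H[i][j])
                 for j in range(n)]
        x = [1 if t < 0 else 0 for t in total]
        # stop early if all parity checks are satisfied
        if all(sum(H[i][j] * x[j] for j in range(n)) % 2 == 0 for i in range(m)):
            return x
        # variable-to-check update: exclude the destination check's message
        for (i, j) in v2c:
            v2c[(i, j)] = total[j] - c2v[(i, j)]
    return x

# Binary symmetric channel with crossover probability 0.1.
p = 0.1
received = [0, 1, 0, 0, 1, 1]   # codeword [1,1,0,0,1,1] with bit 0 flipped
llr = [math.log((1 - p) / p) if y == 0 else math.log(p / (1 - p))
       for y in received]
print(decode(llr, H))           # → [1, 1, 0, 0, 1, 1], the transmitted codeword
```

Here the single bit error is corrected because the two parity checks involving bit 0 both pass it evidence favoring 1, outweighing the (wrong) channel evidence.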
Then, when someone voices opinion A, which you put in the hat, and also opinion B, you know they likely actually believe opinion B.
(There's some slack from the possibility that someone else put opinion B in the hat.)
Wouldn't that destroy the whole idea? Anyone could tell that an opinion voiced that's not on the list must have been the person's true opinion.
In fact, I'd hope that several people composed the list, and didn't tell each other what items they added, so no one can say for sure that an opinion expressed wasn't one of the "hot takes".
I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...
Note that this is not true if you're generating text from a base model at temperature one. The proportion of happy and unhappy families generated should match that in the training data. (This assumes training went reasonably well, of course, but it probably did.)
Now, people often use a temperature less than one. And few seem to realize that they are then biasing the generated text towards answers that happen to be expressible in only a few ways, and against answers that can be expressed in many different ways. Of course RLHF or whatever adds further biases...