Radford Neal

You say: "All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that."

Because using an existing medium of exchange (one that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, the AIs might, for example, start a new Bitcoin-style blockchain and use its new coins, rather than transfer wealth to present Bitcoin holders.

Maybe they'd use gold, although the current value of gold mostly reflects its conventional monetary role (rather than its practical usefulness, though that is non-zero).

You say: "I'll use 'capital' to refer to both the stock of capital goods and to the money that can pay for them."

It seems to me that this aggregates quite different things, at least when looking at the situation in terms of personal finance. Consider four people who hold the following investments, which, let's suppose, are currently of equal value:

  1. Money in a savings account at a bank.
  2. Shares in a company that owns a nuclear power plant.
  3. Shares in a company that manufactures nuts and bolts.
  4. Shares in a company that helps employers recruit new employees.

These are all "capital", but I think they will fare rather differently in an AI future.

As always, there's no guarantee that the money will retain its value - that depends as usual on central bank actions - and I think it's especially likely to lose its value in an AI future (cryptocurrencies as well). Why would an AI want to transfer resources to someone just because they have some fiat currency? Surely AIs have some better way of coordinating exchanges.

The nuclear power plant, in contrast, is directly powering the AIs, and should be quite valuable, since the AIs are valuable. This assumes, of course, that the company retains ownership. It's possible that it instead ends up belonging to whatever AI has the best military robots.

The nuts and bolts company may retain and even gain some value when AI dominates, if it is nimble in adapting, since the value of AI in making its operations more efficient will typically (in a market economy) be split between the AI company and the nuts and bolts company. (I assume that even AIs need nuts and bolts.)

The recruitment company is toast.

Indeed. Not only could belief prop have been invented in 1960, it was invented around 1960 (published 1962: "Low-Density Parity-Check Codes", IRE Transactions on Information Theory) by Robert Gallager, as a decoding algorithm for error-correcting codes.

In 1996, I recognized that Gallager's method was the same as Pearl's belief propagation (MacKay and Neal, "Near Shannon limit performance of low density parity check codes", Electronics Letters, vol. 33, pp. 457-458).

This says something about the ability of AI to potentially speed up research by simply linking known ideas (even if it's not really AGI).
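For readers who haven't seen it, here is a minimal sketch of the shared algorithmic core: the check-node message update of sum-product (belief-propagation) decoding, written in the log-likelihood-ratio form commonly used today. The function name and numbers are my own illustration, not Gallager's or Pearl's notation.

```python
import numpy as np

def check_to_bit_messages(llrs):
    """Sum-product update for one parity check: the message to bit i
    combines the tanh-domain evidence from all the *other* bits."""
    t = np.tanh(np.asarray(llrs) / 2.0)
    return [2.0 * np.arctanh(np.prod(np.delete(t, i))) for i in range(len(t))]

# Three bits in one parity check, each with mildly confident evidence of being 0:
print(check_to_bit_messages([1.0, 1.5, 0.8]))
```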

Then you know that someone who voiced opinion A, which you put in the hat, and also voiced opinion B, likely actually believes opinion B.

(There's some slack from the possibility that someone else put opinion B in the hat.)

Wouldn't that destroy the whole idea? Anyone could tell that a voiced opinion that's not on the list must have been the person's true opinion.

In fact, I'd hope that several people composed the list, and didn't tell each other what items they added, so no one can say for sure that an opinion expressed wasn't one of the "hot takes".

I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...

OK, I'll end by just summarizing my position: we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of Heads was 0.4, I would not pay anything over $0.40 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might also make the right decision if you do both of these things incorrectly (your mistakes might cancel out), but that's not a reliable method. And you might also make the right decision by just intuiting what it is. That's fine if you happen to have good intuition, but since we often don't, we have probability theory and decision theory to help us out.
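To make that arithmetic concrete, here is the calculation, assuming (as the $0.50 figure suggests) a wager that pays $1 if the coin lands Heads and nothing otherwise:

```python
def max_price(p_heads, payoff=1.0):
    """Highest price worth paying for a bet that pays `payoff` on Heads."""
    return p_heads * payoff

print(max_price(0.5))  # 0.5: pay anything under $0.50
print(max_price(0.4))  # 0.4: pay anything under $0.40
```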

One of the big ways probability and decision theory help is by separating the estimation of probabilities from their use to make decisions. We can use the same probabilities for many decisions, and indeed we can think about probabilities before we have any decision to make that they will be useful for. But if you entirely decouple probability from decision-making, then there is no longer any basis for saying that one probability is right and another is wrong - the exercise becomes pointless. The meaningful justification for a probability assignment is that it gives the right answer to all decision problems when decision theory is correctly applied. 

As your example illustrates, correct application of decision theory does not always lead to you betting at odds that are naively obtained from probabilities. For the Sleeping Beauty problem, correctly applying decision theory leads to the right decisions in all betting scenarios when Beauty thinks the probability of Heads is 1/3, but not when she thinks it is 1/2.
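As a quick check on the 1/3 figure, here is a simulation sketch of the standard protocol (one awakening on Heads, two on Tails). The fraction of awakenings at which the coin shows Heads converges to 1/3:

```python
import random

def heads_fraction_of_awakenings(n_trials=100_000):
    heads_awakenings = total_awakenings = 0
    for _ in range(n_trials):
        heads = random.random() < 0.5           # fair coin flip
        total_awakenings += 1 if heads else 2   # Heads: Monday only; Tails: Monday and Tuesday
        heads_awakenings += 1 if heads else 0
    return heads_awakenings / total_awakenings

print(heads_fraction_of_awakenings())  # ~0.333
```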

[ Note that, as I explain in my top-level answer in this post, Beauty is an actual person. Actual people do not have identical experiences on different days, regardless of whether their memory has been erased. I suspect that the contrary assumption is lurking in the background of your thinking that somehow a "reference class" is of relevance. ]

Answer by Radford Neal

I re-read "I, Robot" recently, and I don't think it's particularly good. A better Asimov is "The Gods Themselves" (but note that there is some degree of sexuality, though not of the sort I would say an 11-year-old should be shielded from).

I'd also recommend "The Flying Sorcerers", by David Gerrold and Larry Niven. It helps if they've read some other science fiction (this is sf, not fantasy), in order to get the puns.

How about "AI scam"? You know, something people will actually understand. 

Unlike "gaslighting", for example, which is an obscure reference whose meaning cannot be determined if you don't know the reference.

Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But then you're not using a generally applicable method that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.
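To illustrate the kind of tweak I mean, take the standard per-awakening bet (my own example setup): pay c, receive $1 if the coin was Heads, with the bet offered at every awakening. A halfer can recover the right break-even price by weighting the Tails branch by the two bets made there, but that weight is exactly the fudge factor; using the probability 1/3 gives the answer directly:

```python
def ev_halfer_with_weights(c):
    # P(Heads) = 1/2 per experiment, but Tails must be weighted by its TWO bets:
    return 0.5 * (1 - c) + 0.5 * (-2 * c)

def ev_thirder_per_awakening(c):
    # P(Heads) = 1/3 at each awakening, no extra weighting needed:
    return (1/3) * (1 - c) + (2/3) * (-c)

for c in (0.30, 1/3, 0.35):
    print(f"c={c:.3f}  halfer+weights={ev_halfer_with_weights(c):+.3f}"
          f"  thirder={ev_thirder_per_awakening(c):+.3f}")
# Both cross zero at c = 1/3; the halfer gets there only via the ad hoc weight.
```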

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following possibility:

Upon wakening, Beauty sees that there is a plate of fresh muffins beside her bed. She recognizes them as coming from a nearby cafe. She knows that they are quite delicious. She also knows that, unfortunately, the person who makes them on Mondays puts in an ingredient that she is allergic to, which causes a bad tummy ache. Muffins made on Tuesdays taste the same, but don't cause a tummy ache. She needs to decide whether to eat a muffin, weighing the pleasure of their taste against the possibility of a subsequent tummy ache.

If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4, whereas if she thinks the probability of Heads is 1/3, she will think the probability that it is Monday is (1/3)+(1/2)*(2/3)=2/3. Since 3/4 is not equal to 2/3, she may come to a different decision about whether to eat a muffin if she thinks the probability of Heads is 1/2 than if she thinks it is 1/3 (depending on how she weighs the pleasure versus the pain). Her decision should not depend on some arbitrary "reference class", or on what bets she happens to be deciding whether to make at the same time. She needs a real probability. And on various grounds, that probability is 1/3.
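With made-up utilities (pleasure of eating = +1, tummy ache cost = c, the ache possible only if it is Monday), there is a range of c over which the two probabilities give opposite decisions, which is the point: they are not interchangeable. A sketch:

```python
def eat_utility(p_monday, ache_cost):
    # Expected utility of eating: pleasure +1, minus ache_cost if it's Monday.
    return 1.0 - p_monday * ache_cost

for label, p_monday in [("halfer,  P(Monday)=3/4", 3/4),
                        ("thirder, P(Monday)=2/3", 2/3)]:
    print(label, "-> eat?", eat_utility(p_monday, ache_cost=1.4) > 0)
# halfer:  1 - 0.750*1.4 = -0.05  -> don't eat
# thirder: 1 - 0.667*1.4 ~ +0.07  -> eat
```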
