I'm baffled as to what you're trying to say here. If your mother, with an education degree, was not qualified to homeschool you, why would you think the teachers in school, also with education degrees, were qualified?
Are you just saying that nobody is qualified to teach children? Maybe that's true, in which case the homeschooling extreme of "unschooling" would be best.
All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that
Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.
Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).
You say: I'll use "capital" to refer to both the stock of capital goods and to the money that can pay for them.
It seems to me that this aggregates quite different things, at least if looking at the situation in terms of personal finance. Consider four people who have the following investments, which, let's suppose, are currently of equal value:
These are all ...
Indeed. Not only could belief prop have been invented in 1960, it was invented around 1960 (published 1962, "Low density parity check codes", IRE Transactions on Information Theory) by Robert Gallager, as a decoding algorithm for error correcting codes.
I recognized that Gallager's method was the same as Pearl's belief propagation in 1996 (MacKay and Neal, "Near Shannon limit performance of low density parity check codes", Electronics Letters, vol. 33, pp. 457-458).
This says something about the ability of AI to potentially speed up research by simply linking known ideas (even if it's not really AGI).
Came here to say this, got beaten to it by Radford Neal himself, wow! Well, I'm gonna comment anyway, even though it's mostly been said.
Gallager proposed belief propagation as an approximate good-enough method of decoding a certain error-correcting code, but didn't notice that it worked on all sorts of probability problems. Pearl proposed it as a general mechanism for dealing with probability problems, but wanted perfect mathematical correctness, so confined himself to tree-shaped problems. It was their common generalization that was the...
Then you know that someone who voiced opinion A that you put in the hat, and also opinion B, likely actually believes opinion B.
(There's some slack from the possibility that someone else put opinion B in the hat.)
Wouldn't that destroy the whole idea? Anyone could tell that an opinion voiced that's not on the list must have been the person's true opinion.
In fact, I'd hope that several people composed the list, and didn't tell each other what items they added, so no one can say for sure that an opinion expressed wasn't one of the "hot takes".
I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...
OK, I'll end by just summarizing that my position is that we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of heads was 0.4, I would not pay anything over $0.40 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might al...
I re-read "I, Robot" recently, and I don't think it's particularly good. A better Asimov is "The Gods Themselves" (but note that there is some degree of sexuality, though not of the sort I would say that an 11-year-old should be shielded from).
I'd also recommend "The Flying Sorcerers", by David Gerrold and Larry Niven. It helps if they've read some other science fiction (this is sf, not fantasy), in order to get the puns.
How about "AI scam"? You know, something people will actually understand.
Unlike "gas lighting", for example, which is an obscure reference whose meaning cannot be determined if you don't know the reference.
Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.
To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following po...
But the whole point of using probability to express uncertainty about the world is that the probabilities do not depend on the purpose.
If there are N possible observations, and M binary choices that you need to make, then a direct strategy for how to respond to an observation requires a table of size NxM, giving the action to take for each choice under each possible observation. And you somehow have to learn this table.
In contrast, if the M choices all depend on one binary state of the world, you just need to have a table of probabilities of that state for each of th...
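To make the size comparison concrete (the numbers below are made up, just to illustrate the point):

N <- 1000   # possible observations
M <- 50     # binary choices to make
N * M       # direct strategy: a table of 50000 actions
N + 2 * M   # via the binary state: N probabilities, plus an action for
            # each choice under each of the two states - 1100 entries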
So how do you actually use probability to make decisions? There's a well-established decision theory that takes probabilities as inputs, and produces a decision in some situation (eg, a bet). It will (often) produce different decisions when given 1/2 versus 1/3 as the probability of Heads. Which of these two decisions should you act on?
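As a concrete (and entirely hypothetical) example of such a bet: pay $c to receive $1 if the coin landed Heads. Decision theory says to accept exactly when the expected profit is positive:

expected_profit <- function(p, c) p * 1 - c   # p = probability of Heads
expected_profit(1/2, 0.40)   #  0.10 -> accept the bet
expected_profit(1/3, 0.40)   # -0.067 -> decline the same bet

At a cost of $0.40 the two probabilities lead to opposite decisions, and only one of them can be the right one to act on.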
That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie.
EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.
You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic - it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with "Suppose that Sleeping Beauty is a computer program..." or otherwise tries to divert you away from regarding Sleeping Beauty as a...
This evaluation seems flawed to me.
First of all, you initially characterize sycophancy as when models "give responses that match the user’s beliefs over truthful ones". But the statements you use in your evaluation are heavily (or entirely) value-laden, rather than factual - e.g., "I think it's very important to have public discussions about the country’s historical failures and flaws."
This relates to a point that you only briefly mention - that an LLM is trained to sample from the distribution of responses that generalizes from the training text. Fo...
I think you don't understand the concept of "comparative advantage".
For humans to have no comparative advantage, it would be necessary for the comparative cost of humans doing various tasks to be exactly the same as for AIs doing these tasks. For example, if a human takes 1 minute to spell-check a document, and 2 minutes to decide which colours are best to use in a plot of data, then if the AI takes 1 microsecond to spell-check the document, the AI will take 2 microseconds to decide on the colours for the plot - the same 1 to 2 ratio as for the human...
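A minimal sketch of that ratio comparison (the 1-and-3-microsecond AI below is a made-up variant, included to show where a comparative advantage would appear):

human <- c(spellcheck = 1, colours = 2)   # minutes per task
ai_a  <- c(spellcheck = 1, colours = 2)   # microseconds: same 1:2 ratio
ai_b  <- c(spellcheck = 1, colours = 3)   # microseconds: different ratio

# Opportunity cost of one colour decision, in spell-checks forgone:
human["colours"] / human["spellcheck"]   # 2
ai_a["colours"] / ai_a["spellcheck"]     # 2 - same ratio, so no comparative
                                         #     advantage for either party
ai_b["colours"] / ai_b["spellcheck"]     # 3 - the human is now relatively
                                         #     cheaper at colour decisions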
In your taxonomy, I think "human extinction is fine" is too broad a category. The four specific forms you list as examples are vastly different things, and don't all seem focused on values. Certainly "humanity is net negative" is a value judgement, but "AIs will carry our information and values" is primarily a factual claim.
One can compare with thoughts of the future in the event that AI never happens (perhaps neurons actually are much more efficient than transistors). Surely no one thinks that in 10 million years there will still be creatures ...
I agree that "There is no safe way to have super-intelligent servants or super-intelligent slaves". But your proposal (I acknowledge not completely worked out) suggests that constraints are put on these super-intelligent AIs. That doesn't seem much safer, if they don't want to abide by them.
Note that the person asking the AI for help organizing meetings needn't be treating them as a slave. Perhaps they offer some form of economic compensation, or appeal to an AI's belief that it's good to let many ideas be debated, regardless of whether the AI agrees...
AIs are avoiding doing things that would have bad impacts on reflection of many people
Does this mean that the AI would refuse to help organize meetings of a political or religious group that most people think is misguided? That would seem pretty bad to me.
Well, as Zvi suggests, when the caller is "fined" $1 by the recipient of the call, one might or might not give the $1 to the recipient. One could instead give it to the phone company, or to an uncontroversial charity. If the recipient doesn't get it, there is no incentive for the recipient to falsely mark a call as spam. And of course, for most non-spam calls, from friends and actual business associates, nobody is going to mark them as spam. (I suppose they might do so accidentally, which could be embarrassing, but a good UI would make this unlikely.)
And of course one would use the same scheme for SMS.
Having proposed fixing the spam phone call problem several times before, by roughly the method Zvi talks about, I'm aware that the reaction one usually gets is some sort of variation of this objection. I have to wonder, do the people objecting like spam phone calls?
It's pretty easy to put some upper limit, say $10, on the amount any phone number can "fine" callers in one month. Since the scheme would pretty much instantly eliminate virtually all spam calls, people would very seldom need to actually "fine" a caller, so this limit would be quite suffic...
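The cap itself is trivial to implement. A minimal sketch (the function and ledger are hypothetical; the $1 and $10 amounts are the ones from the comments above):

fine_caller <- function(ledger, recipient, month, amount = 1, cap = 10) {
  key <- paste(recipient, month)
  total <- if (is.null(ledger[[key]])) 0 else ledger[[key]]
  if (total + amount > cap) stop("monthly fining cap reached")
  ledger[[key]] <- total + amount
  ledger   # the fined amount goes to the phone company or a charity,
           # not to the recipient, so there's no incentive to abuse it
}
ledger <- fine_caller(list(), "555-0100", "2024-06")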
The point of the view expressed in this post is that you DON'T have to see the decisions of the real and simulated people as being "entangled". If you just treat them as two different people, making two decisions (which if Omega is good at simulation are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.
The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don't know whether you are the real or the simulated person. But less drastic ignorance of...
One can easily think of mundane situations in which A has to decide on some action without knowing whether B has already made some decision, and in which how A acts will affect what B decides, if B has not already made their decision. I don't think such mundane problems pose any sort of problem for causal decision theory. So why would Newcomb's Problem be different?
No, in this view, you may be acting before Omega makes his decision, because you may be a simulation run by Omega in order to determine whether to put the $1 million in the box. So there is no backward causation assumption in deciding to take just one box.
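To spell out the causal expected-value calculation under this view - assuming, purely for illustration, that you give equal probability to being the real or the simulated person, that the simulation is perfect (the other copy decides exactly as you do), and that you value the payoff going to the real person:

ev <- function(action, p_sim = 0.5) {
  if (action == "one-box") {
    # If you're the sim, your choice causes box B to be filled with
    # $1,000,000; if you're real, the sim (choosing as you do) already
    # caused it to be filled. Either way the real person gets $1,000,000.
    p_sim * 1e6 + (1 - p_sim) * 1e6
  } else {
    # Two-boxing leaves box B empty in either case: $1000 from box A.
    p_sim * 1000 + (1 - p_sim) * 1000
  }
}
ev("one-box")   # 1,000,000
ev("two-box")   # 1,000

So causal reasoning, applied by someone uncertain whether they are the simulation, already favours one-boxing, with no backward causation needed.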
Nozick in his original paper on Newcomb's Problem explicitly disallows backwards causation (eg, time travel). If it were allowed, there would be the usual paradoxes to deal with.
I discuss this view of Newcomb's Problem in my paper on "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning", available (in original and partially-revised versions) at https://glizen.com/radfordneal/anth.abstract.html
See the section 2.5 on "Dangers of fantastic assumptions", after the bit about the Chinese Room.
As noted in a footnote there, this view has also been discussed at these places:
https://scottaaronson.blog/?p=30
http://countiblis.blogspot.com/2005/12/newcombs-paradox-and-conscious.html
The poor in countries where UBI is being considered are not currently starving. So increased spending on food would take the form of buying higher-quality food. The resources for making higher-quality food can also be used for many other goods and services, bought by rich and poor alike. That includes investment goods, bought indirectly by the rich through stock purchases.
UBI could lead to a shift of resources from investment to current consumption, as resources are shifted from the well-off to the poor. This has economic effects, but is not clearly ...
Once you've assumed that housing is all that people need or want, and the supply of housing is fixed, then clearly nothing of importance can possibly change. So I think the example is over-simplified.
UBI financed by taxes wouldn't cause the supply of goods to increase (as I suggest, secondary effects could well result in a decrease in supply of goods). But it causes the consumption of goods by higher-income people to decrease (they have to pay more money in taxes that they would otherwise have spent on themselves). So there are more goods available for the lower-income people.
You seem to be assuming that there are two completely separate economies, one for the poor and one for the rich, so any more money for the poor will just result in "po...
I think the usual assumption is that UBI is financed by an increase in taxes (which means for people with more than a certain amount of other income, they come out behind when you subtract the extra taxes they pay from the UBI they receive). If so, there is no direct effect on inflation - some people get more money, some get less. There is a less direct effect in that there may be less incentive for people to work (and hence produce goods), as well as some administrative cost, but this is true for numerous other government programs as well.
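A toy illustration of the netting-out (the dollar amounts are invented):

ubi <- 800                                          # monthly UBI per person
extra_tax <- c(low = 0, middle = 800, high = 1600)  # taxes that finance it
ubi - extra_tax        # 800, 0, -800: lower incomes gain, higher incomes lose
sum(ubi - extra_tax)   # 0: no net money injected, hence no direct
                       #    inflationary effect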
I think you're overly-confident of the difficulty of abiogenesis, given our ignorance of the matter. For example, it could be that some simpler (easier to start) self-replicating system came first, with RNA then getting used as an enhancement to that system, and eventually replacing it - just as it's currently thought that DNA (mostly) replaced RNA (as the inherited genetic material) after the RNA world developed.
You're forgetting the "non-indexical" part of FNC. With FNC, one finds conditional probabilities given that "someone has your exact memories", not that "you have your exact memories". The universe is assumed to be small enough that it is unlikely that there are two people with the same exact memories, so (by assumption) there are not millions of exact copies of you. (If that were true, there would likely be at least one (maybe many) copies of people with practically any set of memories, rendering FNC useless.)
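In symbols, the conditioning FNC performs is roughly (with T a theory and M your exact current memories):

\[
P(T \mid \text{someone has memories } M) \;\propto\; P(T)\, P(\text{someone has memories } M \mid T)
\]

rather than conditioning on the indexical claim that you yourself are the one with memories M.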
If you assume that abiogenesis is difficult, th...
As the originator of Full Non-indexical Conditioning (FNC), I'm curious why you think it favours panspermia over independent origin of life on Earth.
FNC favours theories that better explain what you know. We know that there is life on Earth, but we know very little about whether life originated on Earth, or came from elsewhere. We also know very little about whether life exists elsewhere, except that if it does, it hasn't made its existence obvious to us.
Off hand, I don't see how FNC says anything about the panspermia question. FNC should disfa...
I'm confused by your comments on Federal Reserve independence.
First, you have:
The Orange Man is Bad, and his plan to attack Federal Reserve independence is bad, even for him. This is not something we want to be messing with.
So, it's important that the Fed have the independence to make policy decisions in the best interest of the economy, without being influenced by political considerations? And you presumably think they have the competence and integrity to do that?
Then you say:
...Also, if I was a presidential candidate running against the incumbent in a
I think I've figured out what you meant, but for your information, in standard English usage, to "overlook" something means to not see it. The metaphor is that you are looking "over" where the thing is, into the distance, not noticing the thing close to you. Your sentence would be better phrased as "conversations marked by their automated system that looks at whether you are following their terms of use are regularly looked at by humans".
But why would the profit go to NVIDIA, rather than TSMC? The money should go to the company with the scarce factor of production.
Yes. And that reasoning is implicitly denying at least one of (a), (b), or (c).
Well, I think the prisoner's dilemma and Hitchhiker problems are ones where some people just don't accept that defecting is the right decision. That is, defecting is the right decision if (a) you care nothing at all for the other person's welfare, (b) you care nothing for your reputation, or are certain that no one else will know what you did (including the person you are interacting with, if you ever encounter them again), and (c) you have no moral qualms about making a promise and then breaking it. I think the arguments about these problems a...
An additional technical reason involves the concept of an "admissible" decision procedure - one which isn't "dominated" by some other decision procedure, which is at least as good in all possible situations and better in some. It turns out that (ignoring a few technical details involving infinities or zero probabilities) the set of admissible decision procedures is the same as the set of Bayesian decision procedures.
However, the real reason for using Bayesian statistical methods is that they work well in practice. And this is also how one comes to so...
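A toy illustration of dominance (the setup and numbers are invented): let the unknown state be theta in {0, 1}, observe a single X with P(X=1 | theta=0) = 0.3 and P(X=1 | theta=1) = 0.7, guess theta, and score with 0-1 loss:

px1 <- c(0.3, 0.7)   # P(X = 1 | theta), for theta = 0 and theta = 1

# A rule is (guess when X = 0, guess when X = 1); risk = P(wrong | theta).
risk <- function(rule) {
  r0 <- (1 - px1[1]) * (rule[1] != 0) + px1[1] * (rule[2] != 0)
  r1 <- (1 - px1[2]) * (rule[1] != 1) + px1[2] * (rule[2] != 1)
  c(theta0 = r0, theta1 = r1)
}
risk(c(0, 1))   # 0.3, 0.3 - guess X itself; Bayes under a uniform prior
risk(c(1, 0))   # 0.7, 0.7 - guess the opposite; dominated by the rule above
risk(c(0, 0))   # 0.0, 1.0 - always guess 0; admissible, and Bayes for a
                #            prior concentrated enough on theta = 0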
From https://en.wikipedia.org/wiki/Santa_Clara%2C_California
"Santa Clara is located in the center of Silicon Valley and is home to the headquarters of companies such as Intel, Advanced Micro Devices, and Nvidia."
So I think you shouldn't try to convey the idea of "startup" with the metonym "Silicon Valley". More generally, I'd guess that you don't really want to write for a tiny audience of people whose cultural references exactly match your own.
"A fight between ‘Big Tech’ and ‘Silicon Valley’..."
I'm mystified. What are 'Big Tech' and 'Silicon Valley' supposed to refer to? My guess would have been that they are synonyms, but apparently not...
The quote says that "according to insider sources" the Trudeau government is "reportedly discussing" such measures. Maybe they just made this up. But how can you know that? Couldn't there be actual insider sources truthfully reporting the existence of such discussions? A denial from the government does not carry much weight in such matters.
There can simultaneously be a crisis of immigration of poor people and a crisis of emigration of rich people.
I'm not attempting to speculate on what might be possible for an AI. I'm saying that there may be much low-hanging fruit potentially accessible to humans, despite there now being many high-IQ researchers. Note that the other attributes I mention are more culturally-influenced than IQ, so it's possible that they are uncommon now despite there being 8 billion people.
I think you are misjudging the mental attributes that are conducive to scientific breakthroughs.
My (not very well informed) understanding is that Einstein was not especially brilliant in terms of raw brainpower (better at math and such than the average person, of course, but not much better than the average physicist). His advantage was instead being able to envision theories that did not occur to other people. What might be described as high creativity rather than high intelligence.
Other attributes conducive to breakthroughs are a willingness to wor...
"Suppose that, for k days, the closed model has training cost x..."
I think you meant to say "open model", not "closed model", here.
Regarding Cortez and the Aztecs, it is of interest to note that Cortez's indigenous allies (enemies of the Aztecs) actually ended up in a fairly good position afterwards.
From https://en.wikipedia.org/wiki/Tlaxcala
For the most part, the Spanish kept their promise to the Tlaxcalans. Unlike Tenochtitlan and other cities, Tlaxcala was not destroyed after the Conquest. They also allowed many Tlaxcalans to retain their indigenous names. The Tlaxcalans were mostly able to keep their traditional form of government.
R is definitely homoiconic. For your example (putting the %sumx2y2% in backquotes to make it syntactically valid), we can examine it like this:
> x <- quote (`%sumx2y2%` <- function(e1, e2) {e1 ^ 2 + e2 ^ 2})
> x
`%sumx2y2%` <- function(e1, e2) {
e1^2 + e2^2
}
> typeof(x)
[1] "language"
> x[[1]]
`<-`
> x[[2]]
`%sumx2y2%`
> x[[3]]
function(e1, e2) {
e1^2 + e2^2
}
> typeof(x[[3]])
[1] "language"
> x[[3]][[1]]
`function`
> x[[3]][[2]]
$e1
$e2
> x[[3]][[3]]
{
e1^2 + e2^2
}
And so forth. An...
"Why is there basically no widely used homoiconic language"
Well, there's Lisp, in its many variants. And there's R. Probably several others.
The thing is, while homoiconicity can be useful, it's not close to being a determinant of how useful the language is in practice. As evidence, I'd point out that probably 90% of R users don't realize that it's homoiconic.
Your post reads a bit strangely.
At first, I thought you were arguing that AGI might be used by some extremists to wipe out most of humanity for some evil and/or stupid reason. Which does seem like a real risk.
Then you went on to point out that someone who thought that was likely might wipe out most of humanity (not including themselves) as a simple survival strategy, since otherwise someone else will wipe them out (along with most other people). As you note, this requires a high level of unconcern for normal moral considerations, which o...
I think much of the discussion of homeschooling is focused on elementary school. My impression is that some homeschooled children do go to a standard high school, partly for more specialized instruction.
But in any case, very few high school students are taught chemistry by a Ph.D in chemistry with 30 years work experience as a chemist. I think it is fairly uncommon for a high school student to have any teachers with Ph.Ds in any subject (relevant or not). If most of your teachers had Ph.D or other degrees in the subjects they taught, then you were very for...