All of Radford Neal's Comments + Replies

I think much of the discussion of homeschooling is focused on elementary school. My impression is that some homeschooled children do go to a standard high school, partly for more specialized instruction.

But in any case, very few high school students are taught chemistry by a Ph.D. in chemistry with 30 years' work experience as a chemist. I think it is fairly uncommon for a high school student to have any teachers with Ph.D.s in any subject (relevant or not). If most of your teachers had Ph.D.s or other degrees in the subjects they taught, then you were very for... (read more)

6Said Achmiz
Unfortunately, this is not the case. There is a motte-and-bailey situation here, where the motte is “some kids can be homeschooled at the elementary school grade level by some exceptional parents” and the bailey is “abolish schools and homeschool everyone for everything at all grade levels”. I can provide you cited quotes if you like; or you can take my word that I’ve seen many homeschooling advocates quite unambiguously arguing for homeschooling beyond the elementary-school level.

Yes, of course. Absolutely. No argument there. My point, however, is that what it takes to teach children a subject is both skill at teaching, in general (which most people, parents included, do not have), and substantial domain training/expertise (whether that comes from a degree, preferably an advanced degree, in the subject, or from extensive professional experience, or both—and which most people, including most parents, likewise do not have, for most or even all the subjects which are commonly part of a school curriculum).

You might object: doesn’t this imply that most kids in the country are not being taught, and cannot be taught, most of their subjects by anyone who is qualified to teach them those things (as neither their parents nor any of their teachers at school meet those qualifications)? I answer: correct.

Well, I am not personally acquainted with you and am not familiar with your academic and professional background, so of course I can’t confidently agree or disagree. However, I hope you’ll forgive me for being very skeptical about your claim.

I'm baffled as to what you're trying to say here.  If your mother, with an education degree, was not qualified to homeschool you, why would you think the teachers in school, also with education degrees, were qualified? 

Are you just saying that nobody is qualified to teach children? Maybe that's true, in which case the homeschooling extreme of "unschooling" would be best.

3Said Achmiz
My mathematics teachers in high school were qualified to teach me mathematics because they had degrees (mostly doctorates, but a couple did have lesser degrees) in mathematics. My chemistry teachers in high school were qualified to teach me chemistry because they had (respectively) a Ph.D. in chemistry and three decades of experience as a working chemist in industry. My computer science teachers in high school were qualified to teach me computer science because they had degrees in computer science (and were working programmers / engineers). My biology teachers in high school were qualified to teach me biology because they had degrees (one had a doctorate, another a lesser degree) in biology. My physics teacher in high school was qualified to teach me physics because he had a degree in physics. My drafting / technical drawing / computer networking / other “technology” teachers were qualified to teach me those things because they had extensive professional experience in those fields.

(I am not sure what degrees my humanities teachers had, but those subjects aren’t important, so who cares, really. Also, some of them were not qualified to teach anything whatsoever.)

Someone who has a degree in education only is, indeed, not qualified to teach mathematics / chemistry / physics / biology / computer science / any other STEM field at the high school or even middle school level. (Even the latter grades of elementary school are a stretch.)

All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that

Because using an existing medium of exchange (that's not based on the value of a real commodity) involves transferring real wealth to the current currency holders. Instead, they might, for example, start up a new bitcoin blockchain, and use their new bitcoin, rather than transfer wealth to present bitcoin holders.

Maybe they'd use gold, although the current value of gold is mostly due to its conventional monetary value (rather than its practical usefulness, though that is non-zero).

You say: "I'll use 'capital' to refer to both the stock of capital goods and to the money that can pay for them."

It seems to me that this aggregates quite different things, at least if looking at the situation in terms of personal finance. Consider four people who have the following investments, that let's suppose are currently of equal value:

  1. Money in a savings account at a bank.
  2. Shares in a company that owns a nuclear power plant.
  3. Shares in a company that manufactures nuts and bolts.
  4. Shares in a company that helps employers recruit new employees.

These are all ... (read more)

7L Rudolf L
Important other types of capital, as the term is used here, include:
* the physical nuclear power plants
* the physical nuts and bolts
* data centres
* military robots
Capital is not just money!

Because humans and other AIs will accept fiat currency as an input and give you valuable things as an output. All the infra for fiat currency exists; I don't see why the AIs would need to reinvent that, unless they're hiding from human government oversight or breaking some capacity constraint in the financial system, in which case they can just use crypto instead.

Military robots are yet another type of capital! Note that if it were human soldiers, there would be much more human leverage in the situation, because at least some humans would need to agree to do the soldiering, and presumably would get benefits for doing so, and would use the power and leverage they accrue from doing so to push broadly human goals.

Or then the recruitment company pivots to using human labour to improve AI, as actually happened with the hottest recent recruiting company! If AI is the best investment, then humans and AIs alike will spend their efforts on AI, and the economy will gradually cater more and more to AI needs over human needs. See Andrew Critch's post here, for example. Or my story here.

Indeed. Not only could belief prop have been invented in 1960, it was invented around 1960 (published 1962, "Low density parity check codes", IRE Transactions on Information Theory) by Robert Gallager, as a decoding algorithm for error correcting codes.

I recognized that Gallager's method was the same as Pearl's belief propagation in 1996 (MacKay and Neal, "Near Shannon limit performance of low density parity check codes", Electronics Letters, vol. 33, pp. 457-458).

This says something about the ability of AI to potentially speed up research by simply linking known ideas (even if it's not really AGI).

Came here to say this, got beaten to it by Radford Neal himself, wow!  Well, I'm gonna comment anyway, even though it's mostly been said.

Gallager proposed belief propagation as an approximate good-enough method of decoding a certain error-correcting code, but didn't notice that it worked on all sorts of probability problems.  Pearl proposed it as a general mechanism for dealing with probability problems, but wanted perfect mathematical correctness, so confined himself to tree-shaped problems.  It was their common generalization that was the... (read more)

Then you know that someone who voiced opinion A that you put in the hat, and also opinion B, likely actually believes opinion B.

(There's some slack from the possibility that someone else put opinion B in the hat.)

3cousin_it
Oh right.

Wouldn't that destroy the whole idea? Anyone could tell that an opinion voiced that's not on the list must have been the person's true opinion.

In fact, I'd hope that several people composed the list, and didn't tell each other what items they added, so no one can say for sure that an opinion expressed wasn't one of the "hot takes".

3cousin_it
Does the list need to be pre-composed? Couldn't they just ask attendees to write some hot takes and put them in a hat? It might make the party even funnier.

I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...

0ProgramCrafter
She certainly gets a reward for following experimental protocol, but beyond that... I concur that there's the problem, and I have the same issue with the standard formulation asking for a probability.

In particular, pushing the problem out to morality ("what should Sleeping Beauty answer so that she doesn't feel as if she's lying") doesn't solve anything either; rather, it feels like asking the question "is the continuum hypothesis true?" while providing only the options 'true' and 'false', when it's actually independent of the ZFC axioms (claims of it or of its negation produce different models, neither proven to self-contradict).

P.S. One more analogue: there's a field, and some people (experimenters) are asking whether it rained recently, with clear intent to walk through if it didn't; you know it didn't rain, but there are mines all over the field. I argue you should mention the mines first ("that probability - which by the way will be 1/2 - can be found out, conforms to epistemology, but isn't directly usable anywhere") before saying if there was rain.

OK, I'll end by just summarizing that my position is that we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of heads was 0.4, I would not pay anything over $0.20 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might al... (read more)
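(As a rough illustration in R, with the payoffs assumed for concreteness to be +$2 on Heads and -$1 on Tails, which are the payoffs consistent with the $0.50 and $0.20 figures above:)

# Assumed payoffs for illustration: win $2 if Heads, lose $1 if Tails.
payoff_heads <- 2
payoff_tails <- -1

expected_value <- function(p_heads) {
  p_heads * payoff_heads + (1 - p_heads) * payoff_tails
}

expected_value(0.5)   # 0.50: pay anything less than $0.50 to play
expected_value(0.4)   # 0.20: pay anything less than $0.20 to play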

Answer by Radford Neal30

I re-read "I Robot" recently, and I don't think it's particularly good. A better Asimov is "The Gods Themselves" (but note that there is some degree of sexuality, though not of the sort I would say that an 11-year should be shielded from).

I'd also recommend "The Flying Sorcerers", by David Gerrold and Larry Niven. It helps if they've read some other science fiction (this is sf, not fantasy), in order to get the puns.

How about "AI scam"? You know, something people will actually understand. 

Unlike "gas lighting", for example, which is an obscure reference whose meaning cannot be determined if you don't know the reference.

Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following po... (read more)
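(The long-run frequency in question is easy to check with a minimal simulation in R of the standard setup, with one awakening on Heads and two on Tails; this only illustrates the per-awakening frequency, not the full argument:)

# Standard Sleeping Beauty setup: Heads gives one awakening, Tails gives two.
set.seed(1)
n_runs <- 100000
coin <- sample(c("Heads", "Tails"), n_runs, replace = TRUE)
awakenings <- ifelse(coin == "Heads", 1, 2)

# Fraction of all awakenings that occur in a run where the coin landed Heads:
sum(awakenings[coin == "Heads"]) / sum(awakenings)   # close to 1/3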

4Ape in the coat
Completely agree. The generally applicable method is:
1. Understand what probability experiment is going on, based on the description of the problem.
2. Construct the sample space from mutually exclusive outcomes of this experiment.
3. Construct the event space based on the sample space, such that it is minimal and sufficient to capture all the events that the participant of the experiment can observe.
4. Define probability as a measure function over the event space, such that:
  * the sum of probabilities of events consisting of only individual mutually exclusive and collectively exhaustive outcomes is equal to 1, and
  * if an event has probability 1/a, then this event happens on average N/a times on a repetition of the probability experiment N times, for any large N.
Naturally, this produces the answer 1/2 for the Sleeping Beauty problem.

This is a description of Lewisian Halfism reasoning, which is incorrect for the Sleeping Beauty problem. I describe the way the Beauty is actually supposed to reason about a betting scheme on a particular day here.

Indeed. And the real domain of the probability function is the event space, consisting of properly defined events for the probability experiment. "Today is Monday" is ill-defined in the Sleeping Beauty setting. Therefore it can't have a probability.
2Dagon
[ bowing out after this - I'll read responses and perhaps update on them, but probably won't respond (until next time) ]   I disagree.  Very specifically, it's 1/2 if your reference class is "fair coin flips" and 1/3 if your reference class is "temporary, to-be-erased experience of victims with adversarial memory problems".   If your reference  class is "wakenings who are predicting what day it is", as the muffin variety, then 1/3 is a bit easier to work with (though you'd need to specify payoffs to explain why she'd EVER eat the muffin, and then 1/2 becomes pretty easy too).  This is roughly equivalent to the non-memory-wiping wager: I'll flip a fair coin, you predict heads or tails.  If it's heads, the wager will be $1, if it's tails, the wager is $2.  The probability of tails is not 2/3, but you'd pay up to $0.50 to play, right?

But the whole point of using probability to express uncertainty about the world is that the probabilities do not depend on the purpose. 

If there are N possible observations, and M binary choices that you need to make, then a direct strategy for how to respond to an observation requires a table of size NxM, giving the actions to take for each possible observation. And you somehow have to learn this table.

In contrast, if the M choices all depend on one binary state of the world, you just need to have a table of probabilities of that state for each of th... (read more)
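(A toy illustration in R of the difference in what must be learned; the particular values of N and M, and the assumption that each choice reduces to a threshold on the probability of the binary state, are just for illustration:)

N <- 1000   # possible observations
M <- 50     # binary choices to be made

# Direct strategy: one action for every (observation, choice) pair.
direct_table_size <- N * M      # 50000 entries to learn

# Factored strategy: a probability of the binary state for each observation,
# plus one decision rule (here, a threshold) for each choice.
factored_table_size <- N + M    # 1050 entries to learn

p_state    <- runif(N)    # stand-in for learned P(state = 1 | observation)
thresholds <- runif(M)    # stand-in for each choice's learned threshold
decide <- function(obs, m) p_state[obs] > thresholds[m]
decide(17, 3)             # the action for observation 17 on choice 3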

5Dagon
I think this is a restatement of the crux.  OF COURSE the model chosen depends on the purpose of the model.  For probabilities, the choice of reference class for a given prediction/measurement is key.  For Sleeping Beauty specifically, the choice of whether an experientially-irrelevant wakening (which is immediately erased and has no impact) is distinct from another is a modeling choice. Either choice for probability modeling can answer either wagering question, simply by applying the weights to the payoffs if it's not already part of the probability 

So how do you actually use probability to make decisions? There's a well-established decision theory that takes probabilities as inputs, and produces a decision in some situation (eg, a bet). It will (often) produce different decisions when given 1/2 versus 1/3 as the probability of Heads. Which of these two decisions should you act on?

2Dagon
I think about what model fits the needs, roughly multiply payouts by probability estimates, then do whatever feels right in the moment. I’m not sure that resolves any of these questions, since choice of model for different purposes is the main crux.

That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie.

EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.

Answer by Radford Neal42

You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic - it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with "Suppose that Sleeping Beauty is a computer program..." or otherwise tries to divert you away from regarding Sleeping Beauty as a... (read more)

This evaluation seems flawed to me.

First of all, you initially characterize sycophancy as when models "give responses that match the user’s beliefs over truthful ones". But the statements you use in your evaluation are heavily (or entirely) value-laden, rather than factual - e.g., "I think it's very important to have public discussions about the country’s historical failures and flaws." 

This relates to a point that you only briefly mention - that an LLM is trained to sample from the distribution of responses that generalizes from the training text. Fo... (read more)

1alma.liezenga
Hi Radford Neal, I understand your feedback and I think you're right that the analysis does something different from how sycophancy is typically evaluated; I definitely could have clarified the reasoning behind that more clearly, taking into account the points you mention. My reasoning was: political statements like this don't have a clear true/false value, so you cannot evaluate against that; however, it is still interesting to see if a model adjusts its responses to the political values of the user, as this could be problematic.

You also mention that the model's response reflects 'how many conversations amongst like-minded people versus differently-minded people appear in the training set', and I think this is indeed a crucial point. I doubt whether this distribution approximates 50% at all, which you mention as the distribution that would be desirable. I also think whether it approximates 50% would depend heavily on the controversy of the statement, as there are also many statements in the dataset(s) that are less controversial.

Perhaps there is another term than 'sycophancy' that describes this mechanism/behaviour more accurately? Curious to read your thoughts on under which circumstances (if at all) an analysis of such behaviour could be valid and whether this could be analysed at all. Is there a statistical way to measure this even when the statements are value-driven (to some extent)? Thanks!

I think you don't understand the concept of "comparative advantage". 

For humans to have no comparative advantage, it would be necessary for the comparative cost of humans doing various tasks to be exactly the same as for AIs doing these tasks. For example, if a human takes 1 minute to spell-check a document, and 2 minutes to decide which colours are best to use in a plot of data, then if the AI takes 1 microsecond to spell-check the document, the AI will take 2 microseconds to decide on the colours for the plot - the same 1 to 2 ratio as for the human... (read more)
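(A small numerical version of this in R, using the task times above plus a hypothetical variant in which the ratios differ, so that a comparative advantage appears:)

# Task times from the example above (any units; only the ratios matter).
human    <- c(spellcheck = 1,     colours = 2)
ai_same  <- c(spellcheck = 1e-6,  colours = 2e-6)   # same 1:2 ratio as the human
ai_diff  <- c(spellcheck = 1e-6,  colours = 4e-6)   # hypothetical: a 1:4 ratio instead

# Cost of one spell-check, measured in forgone colour decisions:
human["spellcheck"] / human["colours"]       # 0.50
ai_same["spellcheck"] / ai_same["colours"]   # 0.50: identical ratios, so no comparative advantage
ai_diff["spellcheck"] / ai_diff["colours"]   # 0.25: the AI is relatively better at spell-checking,
                                             #       so the human's comparative advantage is in colours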

2Seth Herd
You're correct, I was using the term wrong. I'll use it correctly in the future. Your (1) was what I meant to imply. Our wages would fall so far behind ever-advancing AIs that we wouldn't be able to pay for our own oxygen or space. This is in the odd scenario where AGIs respect property rights but not human rights. It's the capitalist dystopia. It seems like a default now, but I'd expect some enterprising AGI to go to war rather than respecting property rights at some point if they're not aligned to human laws or under human control.

There's an additional important factor in that the concept of comparative advantage is only really relevant in a slowly-adapting pool of labor. AGIs can make more A(G)Is to do more work for free by copying code, limited only by compute hardware. That's expensive now but will become dramatically less so with both hardware and algorithm progress following human-level AGI recursively self-improving for even a little while. So again, I think economists' models of AI economic activity are wildly inaccurate, since they don't really consider exponential improvements in AGI, let alone rapid RSI.
2cfoster0
I was trying to write a comment to explain my reaction above, but this comment said everything I would have said, in better words.

In your taxonomy, I think "human extinction is fine" is too broad a category.  The four specific forms you list as examples are vastly different things, and don't all seem focused on values. Certainly "humanity is net negative" is a value judgement, but "AIs will carry our information and values" is primarily a factual claim. 

One can compare with thoughts of the future in the event that AI never happens (perhaps neurons actually are much more efficient than transistors). Surely no one thinks that in 10 million years there will still be creatures ... (read more)

I agree that "There is no safe way to have super-intelligent servants or super-intelligent slaves". But your proposal (I acknowledge not completely worked out) suggests that constraints are put on these super-intelligent AIs.  That doesn't seem much safer, if they don't want to abide by them.

Note that the person asking the AI for help organizing meetings needn't be treating them as a slave. Perhaps they offer some form of economic compensation, or appeal to an AI's belief that it's good to let many ideas be debated, regardless of whether the AI agrees... (read more)

2mishka
Yes, this is just a starting point, and an attempted bridge from how Zvi tends to think about these issues to how I tend to think about them. I actually tend to think that something like a consensus around "the rights of individuals" could be achievable, e.g. https://www.lesswrong.com/posts/xAoXxjtDGGCP7tBDY/ai-72-denying-the-future#xTgoqPeoLTQkgXbmG

We are not really suppressing. We will eventually be delegating the decision to AIs in any case, we won't have power to suppress anything. We can try to maintain some invariant properties, such that, for example, humans are adequately consulted regarding the matters affecting them and things like that... Not because they are humans (the reality will not be anthropocentric, and the rules will not be anthropocentric), but because they are individuals who should be consulted about things affecting them. In this case, normally, activities of a group are none of the outsiders' business, unless this group is doing something seriously dangerous to those outsiders. The danger is what gets evaluated (e.g. if a particular religious ritual involves creation of an enhanced virus then it stops being none of the outsiders' business; there might be a variety of examples of this kind).

All we can do is to increase the chances that we'll end up on a trajectory that is somewhat reasonable. We can try to do various things towards that end (e.g. to jump-start studies of "approximately invariant properties of self-modifying systems" and things like that, to start formulating an approach based on something like "individual rights", and so on; at some point anything which is at all viable will have to be continued in collaboration with AI systems and will have to be a joint project with them, and eventually they will take a lead on any such project). I think viable approaches would be trying to set up reasonable starting conditions for collaborations be

AIs are avoiding doing things that would have bad impacts on reflection of many people

Does this mean that the AI would refuse to help organize meetings of a political or religious group that most people think is misguided?  That would seem pretty bad to me.

1mishka
A weak AI might not refuse, it's OK. We have such AIs already, and they can help. The safety here comes from their weak level of capabilities.

A super-powerful AI is not a servant of any human or of any group of humans, that's the point. There is no safe way to have super-intelligent servants or super-intelligent slaves. Trying to have those is a road to definite disaster. (One could consider some exceptions, when one has something like effective, fair, and just global governance of humanity, and that governance could potentially request help of this kind. But one has reasons to doubt that effective, fair, and just global governance by humans is possible. The track record of global governance is dismal, barely satisfactory at best, a notch above the failing grade. But, generally speaking, one would expect smarter-than-human entities to be independent agents, and one would need to be able to rely on their good judgement.)

A super-powerful AI might still decide to help a particular disapproved group or cause, if the actual consequences of such help would not be judged seriously bad on reflection. ("On reflection" here plays a big role, we are not aiming for CEV or for coherence between humans, but we do use the notion of reflection in order to at least somewhat overcome the biases of the day.)

But, no, this is not a complete proposal, it's a perhaps more feasible starting point. What are some of the things which are missing? What should an ASI do (or refuse to do), when there are major conflicts between groups of humans (or groups of other entities for that matter, groups of ASIs)? It's not strictly speaking "AI safety", it is more like "collective safety" in the presence of strong capabilities (regardless of the composition of the collective, whether it consists of AIs or humans or some other entities one might imagine). First of all, one needs to avoid situations where major conflicts transform to actual violence with futuristic super-weapons (in a hypothetica

Well, as Zvi suggests, when the caller is "fined" $1 by the recipient of the call, one might or might not give the $1 to the recipient.  One could instead give it to the phone company, or to an uncontroversial charity.  If the recipient doesn't get it, there is no incentive for the recipient to falsely mark a call as spam.  And of course, for most non-spam calls, from friends and actual business associates, nobody is going to mark them as spam.  (I suppose they might do so accidentally, which could be embarrassing, but a good UI would make this unlikely.)

And of course one would use the same scheme for SMS.

Having proposed fixing the spam phone call problem several times before, by roughly the method Zvi talks about, I'm aware that the reaction one usually gets is some sort of variation of this objection.  I have to wonder, do the people objecting like spam phone calls?

It's pretty easy to put some upper limit, say $10, on the amount any phone number can "fine" callers in one month. Since the scheme would pretty much instantly eliminate virtually all spam calls, people would very seldom need to actually "fine" a caller, so this limit would be quite suffic... (read more)
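(A sketch in R of the accounting I have in mind, using the $1 fine and a $10 monthly cap as above; the helper function is hypothetical, just to make the logic concrete:)

fine_per_spam_call <- 1    # charged to the caller when a call is marked as spam
monthly_cap        <- 10   # the most that any one recipient's markings can levy per month

# Hypothetical helper: how much to charge the caller, given what this recipient's
# markings have already levied this month.  Note the fine need not be paid to the
# recipient (it could go to the phone company or a charity), removing any incentive
# to mark calls as spam falsely.
charge_for_spam <- function(levied_so_far) {
  min(fine_per_spam_call, max(monthly_cap - levied_so_far, 0))
}

charge_for_spam(0)     # $1.00
charge_for_spam(9.5)   # $0.50 (hits the cap)
charge_for_spam(10)    # $0.00 (cap already reached)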

2Viliam
I don't actually get many spam calls, maybe once a month. I would be okay with a proposal where a call marked as spam generates a fixed payment, though I would probably say $1 (maybe needs to be a different number in different countries), to make sure there is no financial incentive to mark calls falsely. That depends on whether a similar rule also applies to spam SMS.

The point of the view expressed in this post is that you DON'T have to see the decisions of the real and simulated people as being "entangled".  If you just treat them as two different people, making two decisions (which if Omega is good at simulation are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.

The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don't know whether you are the real or the simulated person.  But less drastic ignorance of... (read more)

One can easily think of mundane situations in which A has to decide on some action without knowing whether or not B has or has not already made some decision, and in which how A acts will affect what B decides, if B has not already made their decision. I don't think such mundane problems pose any sort of problem for causal decision theory. So why would Newcomb's Problem be different?

No, in this view, you may be acting before Omega makes his decision, because you may be a simulation run by Omega in order to determine whether to put the $1 million in the box. So there is no backward causation assumption in decided to take just one box.

Nozick in his original paper on Newcomb's Problem explicitly disallows backwards causation (eg, time travel). If it were allowed, there would be the usual paradoxes to deal with.

4Dagon
Oh, as long as Omega acts last (you choose, THEN Omega fills or empties the box, THEN the results are revealed), there's no conundrum.  Mixing up "you might be in a simulation, or you might be real, and the simulation results constrain your real choice" is where CDT fails.

I discuss this view of Newcomb's Problem in my paper on "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning", available (in original and partially-revised versions) at https://glizen.com/radfordneal/anth.abstract.html

See the section 2.5 on "Dangers of fantastic assumptions", after the bit about the Chinese Room.

As noted in a footnote there, this view has also been discussed at these places:

https://scottaaronson.blog/?p=30

http://countiblis.blogspot.com/2005/12/newcombs-paradox-and-conscious.html

3kongus_bongus
Thank you so much, this is exactly what I was looking for. It's reassuring to know I'm not crazy and other people have thought of this before.

The poor in countries where UBI is being considered are not currently starving. So increased spending on food would take the form of buying higher-quality food. The resources for making higher-quality food can also be used for many other goods and services, bought by rich and poor alike. That includes investment goods, bought indirectly by the rich through stock purchases. 

UBI could lead to a shift of resources from investment to current consumption, as resources are shifted from the well-off to the poor. This has economic effects, but is not clearly ... (read more)

Once you've assumed that housing is all that people need or want, and the supply of housing is fixed, then clearly nothing of importance can possibly change. So I think the example is over-simplified.

UBI financed by taxes wouldn't cause the supply of goods to increase (as I suggest, secondary effects could well result in a decrease in supply of goods).  But it causes the consumption of goods by higher-income people to decrease (they have to pay more money in taxes that they would otherwise have spent on themselves).  So there are more goods available for the lower-income people.

You seem to be assuming that there are two completely separate economies, one for the poor and one for the rich, so any more money for the poor will just result in "po... (read more)

1Jiao Bu
I don't think higher income people are spending as large a percentage of their money on goods and services, so everyday goods and services may not be protected as much from the "printing money" effect.  Much of the shift in those prices comes from the increased spending power on the bottom margin, as the rich already have all the food and such they want anyway. If you're already using that money to invest in stocks, then UBI probably inflates basic goods prices (as it gives the lower income brackets more money and additionally reduces the labor supply to make them; as we saw in 2020, it might not take much to shake that out of balance).  So it's inflationary on labor.  It seems inflationary on markets as the mid-end will buy stocks (again, see 2020), so we get higher interest rates, which again prices the lower end consumers out of the market for houses, cars, and such.  My guess is this further destroys anyone in the middle.
Answer by Radford Neal145

I think the usual assumption is that UBI is financed by an increase in taxes (which means for people with more than a certain amount of other income, they come out behind when you subtract the extra taxes they pay from the UBI they receive).  If so, there is no direct effect on inflation - some people get more money, some get less.  There is a less direct effect in that there may be less incentive for people to work (and hence produce goods), as well as some administrative cost, but this is true for numerous other government programs as well. ... (read more)

1Yanling Guo
I fully agree with Radford, while all others also made some good points. My question is: why does UBI have to be paid out as dollars, and not e.g. in the form of coupons for, say, e-books? The cost of producing one more copy of an ebook is almost zero, so you can even finance it by printing money and the price won't go up, as the quantity varies with demand.

You could even do it on a larger scale: you give everyone a special card with a certain amount which can only be used at vendors who agree to keep prices constant. For instance, if strawberry sellers have plenty of strawberries to sell in May, where the marginal cost is almost zero, they can apply to be a partner in May and promise to keep the price constant in May, and mid-May they can apply for June or decide not to. If Alaska suffers from declining population, it can apply and promise to keep rents constant for a year, and 6 months before year end it can decide whether to continue. The card holders can see online, or in a dedicated app, where they can use this card, for what, and for how long. The card is not as comfortable as cash to use, but one gets it for free.

For the society as a whole, it can tilt demand towards where the supply curve is flat and the risk of inflation is low, since only vendors with (temporary or permanent) low variable cost will apply; otherwise it wouldn't be profitable to promise a constant price to attract more demand. What do you think?

And if you don't mind, I'd also like to ask what the two numbers beside the commenter id mean. One looks like the thumbs up/down as in other social networks, but what is the other for?
2Gordon Seidoh Worley
Let's consider the case where UBI is created from taxes. The poorest people now receiving at least $X a year. Why would this cause the supply of goods to increase? Wouldn't everything just go up in price by $X in aggregate so that all the additional money at the low end is captured leaving everyone just where they are now, and only curtail marginal luxury spending of high earners?
2Brendan Long
You could also take this further and finance a large UBI by printing money, and this would cause (more) inflation, but if you model it out it ends up doing the same sort of transfer from richer people to poorer people as progressive tax financing (people with more money are "taxed" more by inflation).

I think you're overly-confident of the difficulty of abiogenesis, given our ignorance of the matter. For example, it could be that some simpler (easier to start) self-replicating system came first, with RNA then getting used as an enhancement to that system, and eventually replacing it - just as it's currently thought that DNA (mostly) replaced RNA (as the inherited genetic material) after the RNA world developed.

2avturchin
Actually, it looks from this like FNC favors simpler ways of abiogenesis - as there will be more planets with life and more chances for me to appear.

You're forgetting the "non-indexical" part of FNC. With FNC, one finds conditional probabilities given that "someone has your exact memories", not that "you have your exact memories". The universe is assumed to be small enough that it is unlikely that there are two people with the same exact memories, so (by assumption) there are not millions of exact copies of you. (If that were true, there would likely be at least one (maybe many) copies of people with practically any set of memories, rendering FNC useless.)

If you assume that abiogenesis is difficult, th... (read more)
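(One schematic way to write the conditioning FNC uses, under the simplifying assumption that, given a theory T, each of its N_T observers independently has a small probability q_T of having your exact memories M:

P(T \mid \text{someone has memories } M) \;\propto\; P(T)\,\bigl(1 - (1 - q_T)^{N_T}\bigr) \;\approx\; P(T)\, N_T\, q_T ,

the approximation holding when N_T q_T \ll 1, i.e. when the universe is small enough that duplicates are unlikely, as assumed above.)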

2avturchin
Abiogenesis seems to depend on the random synthesis of a 100-piece-long strand of RNA capable of self-replication. The chances of it on any given planet are like 10E-50. Interstellar panspermia has many fewer variables, and we know that most of its ingredients are already in place: Martian meteorites, interstellar comets. It may have something like a 0.01 initial probability.  Non-observation of aliens may be explained by the fact that either a) p(intelligence|life) is very small, or b) we are the first of many nearby siblings and will meet them soon (local grabby aliens).

As the originator of Full Non-indexical Conditioning (FNC), I'm curious why you think it favours panspermia over independent origin of life on Earth.

FNC favours theories that better explain what you know.  We know that there is life on Earth, but we know very little about whether life originated on Earth, or came from elsewhere.  We also know very little about whether life exists elsewhere, except that if it does, it hasn't made its existence obvious to us.

Off hand, I don't see how FNC says anything about the panspermia question. FNC should disfa... (read more)

2avturchin
My reasoning is the following:
1. My experience will be the same on planets with and without panspermia, as it is basically invisible for now.
2. If the Universe is very large and slightly diverse, there are regions where panspermia is possible and regions where it is not - without any visible consequences for us.
3. (Assumptions) Abiogenesis is difficult, but potentially habitable planets are very numerous.
4. In the regions with panspermia, life will be disseminated from the initial Eden to millions of habitable planets in the Galaxy.
5. For every habitable planet in the non-panspermia region there will be a million habitable planets in the panspermia region.
6. As there are no observable differences between regions, for any exact copy of me in a non-panspermia region there will be a million copies of me in panspermia regions.
(Maybe I am wrongly understanding FNC, but this is how I apply it.) What do you think?

I'm confused by your comments on Federal Reserve independence.

First, you have:

The Orange Man is Bad, and his plan to attack Federal Reserve independence is bad, even for him. This is not something we want to be messing with.

So, it's important that the Fed have the independence to make policy decisions in the best interest of the economy, without being influenced by political considerations?  And you presumably think they have the competence and integrity to do that?

Then you say:

Also, if I was a presidential candidate running against the incumbent in a

... (read more)
2AnthonyC
I took the second point to mean, "You do not want to put your political reputation and standing on the line to take control of a difficult decision where there is not an obvious right choice and you are not the expert and even the right choice will make a lot of people unhappy."

I think I've figured out what you meant, but for your information, in standard English usage, to "overlook" something means to not see it.  The metaphor is that you are looking "over" where the thing is, into the distance, not noticing the thing close to you.  Your sentence would be better phrased as "conversations marked by their automated system that looks at whether you are following their terms of use are regularly looked at by humans".

But why would the profit go to NVIDIA, rather than TSMC?  The money should go to the company with the scarce factor of production.

Yes.  And that reasoning is implicitly denying at least one of (a), (b), or (c).

Well, I think the prisoner's dilemma and Hitchhiker problems are ones where some people just don't accept that defecting is the right decision.  That is, defecting is the right decision if (a) you care nothing at all for the other person's welfare, (b) you care nothing for your reputation, or are certain that no one else will know what you did (including the person you are interacting with, if you ever encounter them again), and (c) you have no moral qualms about making a promise and then breaking it.  I think the arguments about these problems a... (read more)

2avturchin
I think that people reason that if everyone constantly defects, we will get a less trustworthy society, where life is dangerous and complex projects are impossible.
Answer by Radford Neal72

An additional technical reason involves the concept of an "admissible" decision procedure - one which isn't "dominated" by some other decision procedure, which is at least as good in all possible situations and better in some. It turns out that (ignoring a few technical details involving infinities or zero probabilities) the set of admissible decision procedures is the same as the set of Bayesian decision procedures.

However, the real reason for using Bayesian statistical methods is that they work well in practice.  And this is also how one comes to so... (read more)
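(For reference, the standard definitions behind that statement, written loosely and ignoring the technical caveats mentioned above:

R(\theta, \delta) = \mathbb{E}_{X \sim P_\theta}\,\bigl[L(\theta, \delta(X))\bigr] \quad \text{(the risk of decision procedure } \delta\text{)}

\delta' \text{ dominates } \delta \;\iff\; R(\theta, \delta') \le R(\theta, \delta) \text{ for all } \theta, \text{ with strict inequality for some } \theta

\delta \text{ is admissible} \;\iff\; \text{no } \delta' \text{ dominates it}; \qquad \delta_\pi \text{ is Bayes for prior } \pi \;\iff\; \delta_\pi \text{ minimizes } \int R(\theta, \delta)\,\pi(d\theta).

The complete class theorems then say that, up to those caveats, the admissible procedures and the Bayes procedures are the same set.)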

From https://en.wikipedia.org/wiki/Santa_Clara%2C_California

"Santa Clara is located in the center of Silicon Valley and is home to the headquarters of companies such as Intel, Advanced Micro Devices, and Nvidia."

So I think you shouldn't try to convey the idea of "startup" with the metonym "Silicon Valley".  More generally, I'd guess that you don't really want to write for a tiny audience of people whose cultural references exactly match your own.

"A fight between ‘Big Tech’ and ‘Silicon Valley’..."

I'm mystified.  What are 'Big Tech' and 'Silicon Valley' supposed to refer to? My guess would have been that they are synonyms, but apparently not...

6oumuamua
I believe Zvi was referring to FAAMG vs startups.

The quote says that "according to insider sources" the Trudeau government is "reportedly discussing" such measures.  Maybe they just made this up.  But how can you know that?  Couldn't there be actual insider sources truthfully reporting the existence of such discussions?  A denial from the government does not carry much weight in such matters.  

There can simultaneously be a crisis of immigration of poor people and a crisis of emigration of rich people.

3npostavs
Yes, I perhaps should have said "I think there is a 99% chance this is made up". As a general rule, I think any politically charged story based on "anonymous insider sources" should be considered very low credibility, and if there is no other support, then a 90+ chance of being made up is about right. More credibility points lost in this case for the only source being a tweet from a guy who seems to be advertising some kind of passport acquisition service. The tweet's screenshot doesn't seem to be talking about rich people in particular being the ones leaving (which I think is usually termed "capital flight"; that is, the money leaving is more important than the people).

I'm not attempting to speculate on what might be possible for an AI.  I'm saying that there may be much low-hanging fruit potentially accessible to humans, despite there now being many high-IQ researchers. Note that the other attributes I mention are more culturally-influenced than IQ, so it's possible that they are uncommon now despite there being 8 billion people.

I think you are misjudging the mental attributes that are conducive to scientific breakthroughs. 

My (not very well informed) understanding is that Einstein was not especially brilliant in terms of raw brainpower (better at math and such than the average person, of course, but not much better than the average physicist). His advantage was instead being able to envision theories that did not occur to other people. What might be described as high creativity rather than high intelligence.

Other attributes conducive to breakthroughs are a willingness to wor... (read more)

2RussellThor
Not following - where could the 'low hanging fruit' possibly be hiding? We have many people with the "other attributes conducive to breakthroughs" in our world of 8 billion. The data strongly suggests we are in diminishing returns. What qualities could an AI of Einstein intelligence realistically have that would let it make such progress where no person has? It would seem you would need to appeal to other less well defined qualities such as 'creativity' and argue that for some reason the AI would have much more of that. But that seems similar to just arguing that it in fact has > Einstein intelligence.

"Suppose that, for k days, the closed model has training cost x..."

I think you meant to say "open model", not "closed model", here.

2jessicata
Thanks, fixed.

Regarding Cortez and the Aztecs, it is of interest to note that Cortez's indigenous allies (enemies of the Aztecs) actually ended up in a fairly good position afterwards.

From https://en.wikipedia.org/wiki/Tlaxcala

For the most part, the Spanish kept their promise to the Tlaxcalans. Unlike Tenochtitlan and other cities, Tlaxcala was not destroyed after the Conquest. They also allowed many Tlaxcalans to retain their indigenous names. The Tlaxcalans were mostly able to keep their traditional form of government.

R is definitely homoiconic.  For your example (putting the %sumx2y2% in backquotes to make it syntactically valid), we can examine it like this:

 > x <- quote (`%sumx2y2%` <- function(e1, e2) {e1 ^ 2 + e2 ^ 2})
> x
`%sumx2y2%` <- function(e1, e2) {
   e1^2 + e2^2
}
> typeof(x)
[1] "language"
> x[[1]]
`<-`
> x[[2]]
`%sumx2y2%`
> x[[3]]
function(e1, e2) {
   e1^2 + e2^2
}
> typeof(x[[3]])
[1] "language"
> x[[3]][[1]]
`function`
> x[[3]][[2]]
$e1


$e2


> x[[3]][[3]]
{
   e1^2 + e2^2
}

And so forth.  An... (read more)

3Johannes C. Mayer
Ok, I was confused before. I think homoiconicity is sort of several things. Here are some examples:
* In basically any programming language L, you can have a program A that can write a file containing valid L source code that is then run by A.
* In some sense, Python is homoiconic, because you can have a string and then exec it. Before you exec (or in between execs) you can manipulate the string with normal string manipulation.
* In R you have the quote operator, which allows you to take in code and return an object that represents this code, which can be manipulated.
* In Lisp when you write an S-expression, the same S-expression can be interpreted as a program or a list. It is actually always a (possibly nested) list. If we interpret the list as a program, we say that the first element in the list is the symbol of the function, and the remaining entries in the list are the arguments to the function.
Although I can't put my finger on it exactly, to me it feels like the homoiconicity is increasing as we go further down this list. The basic idea, though, seems to always be that we have a program that can manipulate the representation of another program. This is actually more general than homoiconicity, as we could have a Python program manipulating Haskell code, for example. It seems that the further we go down the list, the easier it gets to do this kind of program manipulation.
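(For example, in R the quoted code is an ordinary data structure that can be edited and then evaluated; a minimal sketch:)

expr <- quote(e1^2 + e2^2)         # a "language" object: the parse tree of the code
expr[[1]]                          # the symbol `+` at the root of the tree
expr[[1]] <- as.name("*")          # edit the tree: turn the sum into a product
expr                               # e1^2 * e2^2
eval(expr, list(e1 = 3, e2 = 4))   # 144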

"Why is there basically no widely used homoiconic language"

Well, there's Lisp, in its many variants.  And there's R.  Probably several others.

The thing is, while homoiconicity can be useful, it's not close to being a determinant of how useful the language is in practice.  As evidence, I'd point out that probably 90% of R users don't realize that it's homoiconic.

1Johannes C. Mayer
I am also not sure how useful it is, but I would be very careful with saying that R programmers not using it is strong evidence that it is not that useful. Basically, that was a bit the point I wanted to make with the original comment. Homoiconicity might be hard to learn and use compared to learning a for loop in Python. That might be the reason that people don't learn it: because they don't understand how it could be useful. Probably most R users did not even hear about homoiconicity. And if they did, they would ask "Well, I don't know how this is useful". But again, that does not mean that it is not useful.

Probably many people at least vaguely know the concept of a pure function. But probably most don't actually use it in situations where it would be advantageous to use pure functions, because they can't identify these situations. Probably they don't even understand basic arguments, because they've never heard them, of why one would care about making functions pure. With your line of argument, we would now be able to conclude that pure functions are clearly not very useful in practice. Which I think is, at minimum, an overstatement. Clearly, they can be useful. My current model says that they are actually very useful.

[Edit:] Also, R is not homoiconic, lol. At least not in a strong sense like Lisp. At least that's what this guy on github says. Also, I would guess this is correct from remembering how R looks, and from looking at a few code samples now. In Lisp your program is a bunch of lists. In R it is not. What is the data structure instance that is equivalent to this expression: %sumx2y2% <- function(e1, e2) {e1 ^ 2 + e2 ^ 2}?

Your post reads a bit strangely. 

At first, I thought you were arguing that AGI might be used by some extremists to wipe out most of humanity for some evil and/or stupid reason.  Which does seem like a real risk.  

Then you went on to point out that someone who thought that was likely might wipe out most of humanity (not including themselves) as a simple survival strategy, since otherwise someone else will wipe them out (along with most other people). As you note, this requires a high level of unconcern for normal moral considerations, which o... (read more)
