All of ryan_b's Comments + Replies

I like this effort, and I have a few suggestions:

  • Humanoid robots are much more difficult than non-humanoid ones. There are a lot more joints than in other designs; the balance question demands both more capable components and more advanced controls; as a consequence of the balance and shape questions, a lot of thought needs to go into wrangling weight ratios, which means preferring more expensive materials for lightness, etc.
  • In terms of modifying your analysis, I think this cashes out as greater material intensity, since the calculations here are done by weight.
... (read more)
Benjamin_Todd
Thanks for the comments! I'm especially keen to explore bottlenecks (e.g. another suggestion I saw is that to reach 1 billion a year would require 10x current global lithium production to supply the batteries.) A factor of 2 for increased difficulty due to processing intensity seems reasonable, and I should have thrown it in. (Though my estimates were to an order of magnitude so this probably won't change the bottom line, and on the other side, many robots will weigh <100kg and some will be non-humanoid.)

This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.

I don't think the energy balance turns out in the idea's favor. Here are the heuristics I considered:

  • The first thing I note is what happens during reconnection: a bunch of the magnetic energy turns into kinetic and thermal energy. The part you plan to harvest is just the electric field part. Even in otherwise ideal circumstances, that's a substantial loss.
  • The second thing I note is that in a fusion reactor, the magnetic field is already being generated
... (read more)

Regarding The Two Cultures essay:

I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow's. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.

Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the ... (read more)

I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need.

Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.

Why don’t you expect AGIs to be able to do that too?

I do, I just expect it to take a few iterations. I don't expect any kind of stable niche for humans after AGI appears.

I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like 'adaptation engineer' or something that just preps new types of environments for teleoperation before moving onto the next environment, like some kind of meta railroad gang. In this case the value of human labor might stay ... (read more)

Steven Byrnes
I looked it up, evidently mules still have at least one tiny economic niche in the developed world. Go figure :) But I don’t think that lesson generalizes, because of an argument Eliezer makes all the time: the technologies created by evolution (e.g. animals) can do things that current human technology cannot. E.g. humans cannot currently make a self-contained “artificial cow” that can autonomously turn grass and water into more copies of itself, while also creating milk, etc. But that’s an artifact of our current immature technology situation, and we shouldn’t expect it to last into the superintelligence era, with its more advanced future technology.

Separately, I don’t think “preps new types of environments for teleoperation” is a good example of a future human job. Teleoperated robots can string ethernet cables and install wifi and whatever just like humans can. By analogy, humans have never needed intelligent extraterrestrials to come along and “prep new types of environments for human operation”. Rather, we humans have always been able to bootstrap our way into new environments. Why don’t you expect AGIs to be able to do that too?

(I understand that it’s possible to believe that there will be economic niches for humans, because of more abstract reasons, even if we can’t name even a single plausible example right now. But still, not being able to come up with any plausible examples is surely a bad sign.)

Obviously, at least one of those predictions is wrong. That’s what I said in the post.

Does one of them need to be wrong? What stops a situation where only one niche, or a few niches, is high value and the rest don't provide enough to eat? This is pretty much exactly how natural selection operates, for example.

Steven Byrnes
Well, the main thing is that Principle (A) says that the price of the chips + electricity + teleoperated robotics package will be sustainably high, and Principle (B) says that the price of the package will be sustainably low. Those can’t both be true.

…But then I also said that, if the price of the package is low, then human labor will have its price (wage / earnings) plummet way below subsistence via competing against a much-less-expensive substitute, and if it’s high, they won’t. This step brings in an additional assumption, namely that they’re actually substitutes. That’s the part you’re objecting to. Correct?

If so, I mean, I can start listing ways that tractors are not perfect substitutes for mules—mules do better on rough terrain, mules can heal themselves, etc. Or I can list ways that Jeff Bezos is not a perfect substitute for a moody 7yo—the 7yo is cuter, the 7yo may have a more sympathetic understanding of how to market to 7yo’s, etc. But c’mon, a superintelligent AI CEO would not pay a higher salary to hire a moody 7yo, rather than a lower salary to “hire” another copy of itself, or to “hire” a different model of superintelligent AI.

The only situation where human employment is even remotely plausible, IMO, is that the job involves appealing to human consumers. But that doesn’t “grow the pie” of human resources. If that’s the only thing humans can do, collective human wealth will just dwindle to zero as they buy AI-produced goods and services.

So then the only consistent picture here is to say that at least some humans have a sustainable source of increasing wealth besides getting jobs & founding companies. And then humans can sometimes get employed because they have special appeal to those human consumers. What’s the sustainable source of increasing human wealth? It could be capital ownership, or welfare / UBI / charity from aligned AIs or government, whatever. But if you’re going to assume that, then honestly who cares whether the humans are employable.

I agree fake pictures are harder to threaten with. But consider that the deepfake method makes everyone a potential target, rather than only targeting the population who would fall for the relationship side of the scam.

There are other reasons I think it would be grimly effective, but I am not about to spell it out for team evil.

He also claims that with the rise of deepfakes you can always run the Shaggy defense if the scammer actually does pull the trigger.

With the rise of deepfakes, the scammers can skip steps 1-3, and also more easily target girls.

Milan W
I'm not sure if this preference of mine holds for most people, but I think I'd be easier to threaten with real photos than with fake ones. There's an element of guilt and shame in having taken and sent real photos to a stranger. I don't think scammers would invest in generating fake pictures for a potential victim who may well just block after the first message. I think the deepfake-first strategy would be both less profitable and less enjoyable for these sick fucks than the "trick into sexting" strategy. Now if the victim selection / face acquisition / deepfake generation / victim first contact pipeline were to be automated, I can see things changing.

Chip fabs and electricity generation are capital!

Yes, but so are ice cream trucks and the whirligig rides at the fair. Having “access to capital” is meaningless if you are buying an ice cream truck, but means a great deal if you have a rare earth refinery.

My claim is that the big distinction now is between labor and capital because everyone had about an equally hard time getting labor; when AI replacement happens and that goes away, the next big distinction will be between different types of what we now generically refer to as capital. The term is uselessly broad in my opinion: we need to go down at least one level towards concreteness to talk about the future better.

I agree with the ideas of AI being labor-replacing, and I also agree that the future is likely to be more unequal than the present.

Even so, I strongly predict that the post-AGI future will not be static. Capital will not matter more than ever after AGI: instead I claim it will be a useless category.

The crux of my claim is that when AI replaces labor and buying results is easy, the value will shift to the next biggest bottlenecks in production. Therefore future inequality will be defined by the relationship to these bottlenecks, and the new distinctions wil... (read more)

L Rudolf L
Chip fabs and electricity generation are capital! Yep, AI buying power winning over human buying power in setting the direction of the economy is an important dynamic that I'm thinking about. Yep, this is an important point, and a big positive effect of AI! I write about this here. We shouldn't lose track of all the positive effects.

This is a fantastic post, immediately leaping into the top 25 of my favorite LessWrong posts all-time, at least. 

I have a concrete suggestion for this issue:

They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring how they could have solved it anyway.

I propose switching gears at this point to make "Why is the problem impossible?" the actual focus of their efforts for the remainder of the time period. I predict this will consistently yield partial progress among at least a chunk of ... (read more)

Raemon
Yeah, a lot of my work recently has gone into figuring out how to teach this specific skill. I have another blogpost about it in the works. "Recursively asking 'Why exactly is this impossible?'"

I think this post is quite important because it is about Skin in the Game. Normally we love it, but here is the doubly-interesting case of wanting to reduce the financial version in order to allow the space for better thinking.

The content of the question is good by itself as a moment in time of thinking about the problem. The answers to the question are good both for what they contain, and also for what they do not contain, by which I mean what we want to see come up in questions of this kind to answer them better.

As a follow-up, I would like to see a more... (read more)

But if you introduce AI into the mix, you don’t only get to duplicate exactly the ‘AI shaped holes’ in the previous efforts.

I have decided I like the AI shaped holes phraseology, because it highlights the degree to which this is basically a failure in the perception of human managers. There aren't any AI shaped holes because the entire pitch with AI is we have to tell the AI what shape to take. Even if we constrain ourselves to LLMs, the AI docs literally and exactly describe how to tell it what role to fill.

Let’s say Company A can make AGIs that are drop-in replacements for highly-skilled humans at any existing remote job (including e.g. “company founder”), and no other company can. And Company C is a cloud provider. Then Company A will be able to outbid every other company for Company C’s cloud compute, since Company A is able to turn cloud compute directly into massive revenue. It can just buy more and more cloud compute from C and every other company, funding itself with rapid exponential growth, until the whole world is saturated.

I think this is outside t... (read more)

Steven Byrnes
Yeah it’s fine to assume that there might be some period of time that (1) the AGIs don’t escape control, (2) the code doesn’t leak or get stolen, (3) nobody else reinvents the same thing, (4) Company A doesn’t have infinite capital (yet) to spend on renting cloud compute (or the contracts haven’t yet been signed or whatever). And it’s fine to be curious about how many AGIs Company A would have available during this period of time.

And then a key question is whether anything happens during that period of time that would change what happens after that period of time. (And if not, then the analysis isn’t too important.) A pivotal act would certainly qualify.

I’m kinda cynical in this area; I think the most likely scenario by far is that nothing happens during this period that has an appreciable impact on what happens afterwards. Like, I’m sure that Company A will try to get their AGIs to beat benchmarks, do scientific research, make money, etc. I also expect them to have lots of very serious meetings, both internally and with government officials. But I don’t expect that Company A would succeed at making the world resilient to future out-of-control AGIs, because that’s just a crazy hard thing to do even with millions of intent-aligned AGIs at your disposal. I discussed some of the practical challenges at What does it take to defend the world against out-of-control AGIs?.

Well anyway. My comment above was just saying that the OP could be clearer on what they’re trying to estimate, not that they’re wrong to be trying to estimate it.  :)

I endorse this movie unironically. It is a classic film for tracking what information you have and don't have, how many possibilities there are, etc.

Also the filmmaker maintains to this day that they left the truth of the matter in the final scene undefined on purpose, so we are spared the logic being hideously hacked-off to suit the narrative and have to live with the uncertainty instead.

Huzzah for assembling conversations! With this proof of concept, I wonder how easy it will be to deploy inside of LessWrong here.

Answer by ryan_b

I think the best arguments are those about the costs to the AI of being nice. I don't believe the AI will be nice at all because neglect is so much more profitable computation-wise.

This is because even processing the question of how much sunlight to spare humanity probably costs more in expectation than the potential benefit of that sunlight to the AI.

First and least significant, consider that niceness is an ongoing cost. It is not a one-time negotiation to spare humanity 1% of the sun; more compute will have to be spent on us in the future. That compute w... (read more)

Noosphere89
I basically agree with this on why we can't assume that AIs which are mostly unaligned with human values, but have a shard of human values, will be nice to us at all, because the cost of niceness is way more than just killing a lot of humans and leaving humans on-planet to die of a future existential catastrophe. I'd not say that we would die by Occam's Razor, but rather that we die by the need for AIs to aggressively save compute.

I'm not familiar with the details of Robin's beliefs in the past, but it sure seems lately he is entertaining the opposite idea. He's spending a lot of words on cultural drift recently, mostly characterizing it negatively. His most recent on the subject is Betrayed By Culture.

I happened to read a Quanta article about equivalence earlier, and one of the threads is the difficulty of a field applying a big new concept without the expository and distillation work of putting stuff into textbooks/lectures/etc.

That problem pattern-matches with the replication example, but well-motivated at the front end instead of badly-motivated at the back end. It still feels like exposition and distillation are key tasks that govern the memes-in-the-field passed among median researchers.

I strongly suspect the crux of the replication crisis example ... (read more)

To me memetic normally reads something like "has a high propensity to become a meme" or "is meme-like." I had no trouble interpreting the post from this basis.

I push back against trying to hew closely to usages from the field of genetics. Fundamentally I feel like that is not what talking about memes is for; it was an analogy from the start, not meant for the same level of rigor. Further, memes and how meme-like things are is much more widely talked about than genetics, so insofar as we privilege usage considerations I claim switching to one matching geneti... (read more)

I am an American who knows what Estonia is, and I found the joke hilarious.

Shoshannah Tekofsky
This made me unreasonably happy. Thank you :D
Answer by ryan_b

Welcome!

The short and informal version is that epistemics covers all the stuff surrounding the direct claims. Things like credence levels, confidence intervals, probability estimates, etc are the clearest indicators. It also includes questions like where the information came from, how it is combined with other information, what other information we would like to have but don't, etc.

The most popular way you'll see this expressed on LessWrong is through Bayesian probability estimates and a description of the model (which is to say the writer's beliefs about ... (read more)
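As a toy illustration of the Bayesian version of this (the claim, prior, and likelihoods here are all invented numbers, just to show the mechanics):

```python
# Toy Bayesian update: how a stated credence changes after seeing evidence.
# All numbers here are invented for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Claim: "this intervention works", with a prior credence of 30%.
# Suppose a positive study result is 80% likely if it works, 20% if it doesn't.
updated = posterior(0.30, 0.80, 0.20)
print(f"credence after one positive study: {updated:.2f}")  # 0.63
```

The point is that the epistemics live in the explicit prior and likelihoods, which a reader can then argue with directly.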

May I throw geometry's hat into the ring? If you consider things like complex numbers and quaternions, or even vectors, what we have are two-or-more dimensional numbers.

I propose that units are a generalization of dimension beyond spatial dimensions, and therefore geometry is their progenitor. 

It's a mathematical Maury Povich situation.

I feel like this is mostly an artifact of notation. The thing that is not allowed with addition or subtraction is simplifying to a single term; otherwise it is fine. Consider:

10x + 5y - 5x - 10y = 10x - 5x + 5y - 10y = 5x - 5y

So, everyone reasons to themselves, what we have here is two numbers. But hark, with just a little more information, we can see more clearly we are looking at a two-dimensional number:

5x - 5y = 5

5x = 5y + 5

5x - 5 = 5y

x - 1 = y

y = x - 1

Which is a line.

This is what is happening with vectors, complex numbers, quaternions, etc.
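To make the two-dimensional-number point concrete, here is a small Python sketch; the built-in complex type is just a stand-in for any two-component number, and the figures come from the worked example above:

```python
# The rule "you may not collapse x-terms and y-terms into one number"
# is exactly how component-wise arithmetic works. Python's complex type
# keeps the two components separate, just like the x's and y's above.

a = complex(10, 5)    # stands in for 10x + 5y
b = complex(-5, -10)  # stands in for -5x - 10y

total = a + b
print(total)  # (5-5j): each component simplified separately, i.e. 5x - 5y

# Interpreting "5x - 5y = 5" as a constraint instead gives y = x - 1:
def y(x):
    return x - 1

print(y(3))  # 2
```

The addition never produces a single scalar; it produces another two-component number, which is the behavior shared by vectors and quaternions.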

The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.

Because that method rejects everything about prices. People consume more of something the lower the price is, even more so when it is free: consider the meme about all the games that have never been played in people's Steam libraries because they buy them in bundles or on sale days. There are ~zero branches of history where they sell as many units at retail as are pirated.

A better-but-still-generous method would be to do a projection of the increased sales in the future under the lower price curve, and then claim all of that as damages, reasoning that all of this excess supply deprived the company of the opportunity to get those sales in the future.
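As a toy sketch of the contrast between the two methods (all of these numbers are invented, and the conversion rate stands in for a crude demand-curve guess):

```python
# Naive method: every download counted as a lost retail sale.
downloads = 1_000_000
retail_price = 60.0
naive_damages = downloads * retail_price

# Demand-based method: at an effective price of zero, people "consume"
# far more than they would ever buy. Suppose only a small fraction of
# downloaders would have bought at retail (invented conversion rate).
would_have_bought = 0.05
projected_lost_sales = downloads * would_have_bought
demand_damages = projected_lost_sales * retail_price

print(f"naive:  ${naive_damages:,.0f}")   # $60,000,000
print(f"demand: ${demand_damages:,.0f}")  # $3,000,000
```

Even this generous version comes in an order of magnitude below the retail-price method, which is the bunk part.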

This is not an answer, but I register a guess: the number relies on claims about piracy, which is to say illegal downloads of music, movies, videogames, and so on. The problem is that the conventional numbers for this are utter bunk, because the way it gets calculated by default is they take the number of downloads, multiply it by the retail price, and call that the cost.

This would be how they get the cost of cybercrime to significantly exceed the value of the software industry: they can do something like take the whole value of the cybersecurity industry, better-measured losses like from finance and crypto, and then add bunk numbers for piracy losses from the entertainment industry on top of it.

Noosphere89
Why do you think the methodology of calculating piracy damages by taking the number of downloads and multiplying by the retail price is utter bunk?

This feels like a bigger setback than the generic case of good laws failing to pass.

What I am thinking about currently is momentum, which is surprisingly important to the legislative process. There are two dimensions that make me sad here:

  1. There might not be another try. It is extremely common for bills to disappear or get stuck in limbo after being rejected in this way. The kind of bills which keep appearing repeatedly until they succeed are those with a dedicated and influential special interest behind them, which I don't think AI safety qualifies for.
  2. The
... (read more)

As for OpenAI dropping the mask: I devoted essentially zero effort to predicting this, though my complete lack of surprise implies it is consistent with the information I already had. Even so:

Shit.

Sinityy
@gwern wrote an explanation of why this is surprising (for some) [here](https://forum.effectivealtruism.org/posts/Mo7qnNZA7j4xgyJXq/sam-altman-open-ai-discussion-thread?commentId=CAfNAjLo6Fy3eDwH3). It is still a mystery to me what exactly Sam's motive is.

I wonder how the consequences to reputation will play out after the fact.

  • If there is a first launch, will the general who triggered it be downvoted to oblivion whenever they post afterward for a period of time?
  • What if it looks like they were ultimately deceived by a sensor error, and believed themselves to be retaliating?
  • If there is mutual destruction, will the general who triggered the retaliatory launch also be heavily downvoted?
  • Less than, more than, or about the same as the first strike general?
  • Would citizens who gained karma in a successful first strik
... (read more)

It does, if anything, seem almost backwards - getting nuked means losing everything, and successfully nuking means gaining much but not all.

However, that makes the game theory super easy to solve, and doesn't capture the opposing team dynamics very well for gaming purposes.
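The "super easy to solve" claim can be sketched as a payoff matrix; the numbers below are invented, and only their ordering encodes the comment's premise that getting nuked means losing everything while nuking gains much but not all:

```python
# Toy payoff matrix for a one-shot two-player launch game.
# Payoffs are (row player, column player); numbers are invented,
# ordered so that being nuked = total loss, first strike = big gain.
HOLD, LAUNCH = 0, 1
payoffs = {
    (HOLD, HOLD):     (5, 5),  # peace: both keep what they have
    (LAUNCH, HOLD):   (8, 0),  # striker gains much-but-not-all, victim loses all
    (HOLD, LAUNCH):   (0, 8),
    (LAUNCH, LAUNCH): (0, 0),  # mutual destruction
}

def best_response(opponent_action):
    """Row player's payoff-maximizing reply to a fixed opponent action."""
    return max((HOLD, LAUNCH), key=lambda a: payoffs[(a, opponent_action)][0])

# Against a peaceful opponent, striking first is the best reply,
# so "everyone holds" is not an equilibrium under these payoffs.
print(best_response(HOLD))  # 1 (LAUNCH)
```

That one-line best-response check is the whole solution, which is exactly why such payoffs make for a dull game.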

The best LW Petrov Day morals are the inadvertent ones.  My favorite was 2022, when we learned that there is more to fear from poorly written code launching nukes by accident than from villains launching nukes deliberately.  Perhaps this year we will learn something about the importance of designing reasonable prosocial incentives.

I think this is actually wrong, because of synthetic data letting us control what the AI learns and what they value, and in particular we can place honeypots that are practically indistinguishable from the real world.

This sounds less like the notion of the first critical try is wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?

Or is the position stronger, more like we don't need to solve the alignment problem in general, due to our ability to run simulations and use synthetic data?

This is kind of correct:

This sounds less like the notion of the first critical try is wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?

but my point is this shifts us from a one-shot problem in the real world to a many-shot problem in simulations based on synthetic data before the AI gets unimaginably powerful.

We do still need to solve it, but it's a lot easier to solve problems when you can turn them into many-shot problems.
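The value of turning a one-shot problem into a many-shot one can be put in toy numbers; the per-trial detection probability below is invented, and the calculation assumes the simulated trials are independent:

```python
# If a flaw in an AI's values would be caught in any single honeypot
# simulation with probability p, then across n independent simulated
# runs the chance of catching it at least once is 1 - (1 - p)^n.
# p and n here are invented for illustration.

def p_catch(p_per_trial, n_trials):
    return 1 - (1 - p_per_trial) ** n_trials

print(f"{p_catch(0.10, 1):.2f}")   # 0.10: the one-shot world
print(f"{p_catch(0.10, 50):.2f}")  # 0.99: the many-shot world
```

Whether the independence assumption holds across simulations is of course the load-bearing question.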

Following on this:

Moreover, even when that dataset does exist, there often won’t be even the most basic built-in tools to analyze it. In an unusually modern manufacturing startup, the M.O. might be “export the dataset as .csv and use Excel to run basic statistics on it.”

I wonder how feasible it would be to build a manufacturing/parts/etc company whose value proposition is solving this problem from the jump. That is to say, redesigning parts with the sensors built in, with accompanying analysis tools, preferably as drop-in replacements where possible. In th... (read more)

I also think he's wrong in the particulars, but I can't quite square it back to his perspective once the particulars are changed.

The bluntest thing that is wrong is that you can specify as precise a choice as you care to in the prompt, and the models usually respond. The only hitch is that you have to know those choices beforehand, whereas it would be reasonable to claim that someone like a photographer is being compelled to make choices they did not know about a priori. If that winds up being important then it would be more like the artist has to make and e... (read more)

Raemon
An update I wanted to come back to make was "art is a scalar, not a boolean." Art that involves more interesting choices, technique, and deliberate psychological effects on viewers is "more arty." Clicking a filter in photoshop on a photo someone else took is, maybe like, a .5 on a 1-10 scale. I honestly do rank much photography as lower on the "is it art?" scale than equivalent paintings. A lot of AI art will be "slop" that is very low-but-nonzero on the art scale.

Art is somewhat anti-inductive or "zero sum"[1], where if it turns out that everyone makes identical beautiful things with a click that would previously have required tons of technique and choicefulness to create, that stuff ends up lower on the artiness scale than previously, and the people who are somehow innovating with the new tools count as more arty.

The first person to make the Balenciaga Harry Potter AI clip was making art. Subsequent Balenciaga meme clips are much less arty. I like to think that my WarCraft Balenciaga video was "less arty than the original but moreso than most of the dross."

[1] this is somewhat an abuse of what 'zero sum' means, I think the sum of art can change, but is sort of... resistant to change.

Great job writing an oops post, with a short and effective explanation. Strong upvote for you!

The Ted Chiang piece, on closer reading, seems to be about denying the identity of the AI prompter as an artist rather than speaking to the particular limitations of the tool. For those who did not read, his claim is:

  • Being an artist is about making interesting choices in your medium (paintings, novels, photography, digital).
  • AI tools make all the choices for you; therefore you cannot be an artist when using AI tools.
  • Further, the way the AI makes choices precludes them from being interesting, because they are a kind of arbitrary average and therefore cannot
... (read more)

I think this is an actual interesting question, and roughly agree with his frame, but he's just actually wrong on the particulars. AI prompting involves tons of choices (in particular because you're usually creating art for some particular context, and deciding what sort of art to query the AI for is at least one important choice). I also almost always generate at least 10 different images or songs or whatever, shifting my prompt as I go.

I strongly endorse you writing that post!

Detailed histories of field development in math or science are case studies in deconfusion. I feel like we have very little of this in our conversation on the site relative to the individual researcher perspective (like Hamming’s You & Your Research) or an institutional focus (like Bell Labs).

That’s very interesting - could you talk a bit more about that? I have a guess about why, but would rather hear it straight than risk poisoning the context.

Why I think it's overrated? I basically have five reasons:

  1. Thomas Kuhn's ideas are not universally accepted and don't have clear empirical support apart from the case studies in the book. Someone could change my mind about this by showing me a study operationalizing "paradigm", "normal science", etc. and using data since the 1960s to either support or improve Kuhn's original ideas.
  2. Terms like "preparadigmatic" often cause misunderstanding or miscommunication here.
  3. AI safety has the goal of producing a particular artifact, a superintelligence that's good for h
... (read more)
Raemon
Dunno if this is a complete answer but Thomas Kwa had a shortform awhile back arguing against at least some uses of "preparadigmatic" https://www.lesswrong.com/posts/Zr37dY5YPRT6s56jY/thomas-kwa-s-shortform?commentId=mpEfpinZi2wH8H3Hb 

Could you talk a bit about how much time and effort you have invested into writing the wikipedia articles?

I think it would be helpful by making it easier for other people to judge whether they can have an impact this way, and whether it would be worth their time.

The amount of time and effort you can invest into them is on a continuous scale. The more time you invest, the more impact you'll have. However, what I can say is if you invest any time at all into advocacy, writing, or trying to communicate ideas to others, you should be doing that on Wikipedia instead. It's like a blog where you can get 1,000s of views a day if you pick a popular article to work on.

The claim that zoning restrictions are not a taking also goes against the expert consensus among economists about the massive costs that zoning imposes on landowners.

I would like to know more about how the law views opportunity costs. For most things, such as liability, it seems to only accept costs in the normal sense of literally had to pay out of pocket X amount; for other things like worker's comp it is a defined calculation of lost future gains, but only from pre-existing arrangements like the job a person already had. It feels like the only time I see opportunity costs is lumped in with other intangibles like pain and suffering.

Independently of the other parts, I like this notion of poverty. I head-chunk the idea as any external thing a person lacks, of which they are struggling to keep the minimum; that is poverty.

This seems very flexible, because it isn't a fixed bar like an income level. It also seems very actionable, because it is asking questions of the object-level reality instead of hand-wavily abstracting everything into money.

Over at Astral Codex Ten is a book review of Progress and Poverty with three follow-up blog posts by Lars Doucet. The link goes to the first blog post because it has links to the rest right up front.

I think it is relevant because Progress and Poverty is the book about:

...strange and immense and terrible forces behind the Poverty Equilibrium.

The pitch of the book is that the fundamental problem is economic rents deriving from private ownership over natural resources, which in the book means land. As a practical matter the focus on land rents in the book hea... (read more)

Yoav Ravid
From Protection or Free Trade by Henry George: I recommend the full chapter, and the book.
mike_hawke
Yeah, given that Eliezer mentioned Georgism no less than 3 times in his Dath Ilan AMA, I'm pretty surprised it didn't come up even once in this post about UBI. Personally, I wouldn't be surprised to find we already have most or all the pieces of the true story.

  • Ricardo's law of rent + lack of LVT
  • Supply and demand for low-skill labor
  • Legal restrictions on jobs that disproportionately harm low-wage workers. For example, every single low wage job I have had has been part time, presumably because it wasn't worth it to give me health benefits.
  • Baumol effect?
  • People really want to eat restaurant food, and seem to underestimate (or just avoid thinking about) how much this adds up.
  • A lot of factors that today cause poverty would have simply caused death in the distant past.

That's just off the top of my head.

EDIT: Also the hedonic treadmill is such a huge effect that I would be surprised if it wasn't part of the picture. How much worse is it for your kid's tooth to get knocked out at school than to get a 1920's wisdom tooth extraction?
3Sable
I want to support this; the initial motivation behind Georgism is, in fact, the exact question of why poverty still exists when so much progress has been made - and the answer is that when private actors are allowed to monopolize natural resources (most importantly land), all the gains accruing from productivity increases and technology eventually go to them.

A UBI, as Eliezer suggests, is a band-aid to the problem, addressing the symptom but not the disease, and so long as land rents (economic rent) are monopolized, the disease continues unabated.

I don't know if the Georgist Paradise doesn't have any poverty - land taxes don't magically cure addiction or depression or any of the other reasons someone might become and stay poor. But I'd bet that it has substantially less of the 'scrabbling in the dirt' than our current economic equilibrium.
8Yoav Ravid
I think George does see the dividend as necessary for solving poverty, but only in addition to taxing rent. On its own it would indeed be gobbled up by landlords.

Also, what George suggests is a bit different from UBI (and I think Universal Land Dividend is a better name for it than Citizen's Dividend). With UBI, the law dictates a set amount to be given each person each year/month. With the Citizen's Dividend, whatever revenue isn't spent at the end of the year is distributed equally between everyone. This on the one hand leads to a variable income; on the other hand, it doesn't place an obligation on the government that it might not be able to fulfil. Personally I think it's a better and more elegant policy.
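The distribution rule described above is simple enough to sketch. A minimal illustration with entirely made-up numbers; the variable names and figures are hypothetical, not from any actual proposal:

```python
def citizens_dividend(revenue, spending, population):
    """Distribute whatever revenue isn't spent at year's end
    equally between everyone. The payout varies year to year,
    and is zero if nothing is left over."""
    surplus = revenue - spending
    return max(surplus, 0) / population

# e.g. $1B collected, $800M spent, 100,000 residents:
payout = citizens_dividend(1_000_000_000, 800_000_000, 100_000)
print(payout)  # 2000.0 per person this year
```

Note how this differs from a UBI: the per-person amount falls out of the budget, rather than the budget being obligated to meet a fixed per-person amount.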

I feel like the absence of large effects is to be expected in a short-term experiment. It would be deeply shocking to me if there were a meaningful shift in stuff like employment or housing in any experiment that doesn't run for a significant fraction of the duration of a job or lease/rental agreement. For a really significant study you'd want to run at least as long as the average job or lease duration in an area, I expect.

9Gunnar_Zarncke
The difference you are interested in - short vs long - is explicitly studied by the GiveDirectly UBI study in Kenya.

This is why San Francisco was chosen as the example - at least over the last decade or so it has been one of the most inelastic housing supplies in the U.S.

You are therefore exactly correct: it does not comply with basic supply and demand. This is because basic supply and demand usually do not apply for housing in American cities due to legal constraints on supply, and subsidies for demand.

6Matthew Barnett
But San Francisco is also pretty unusual, and only a small fraction of the world lives there. The amount of new construction in the United States is not flat over time. It responds to prices, like in most other markets. And in fact, on the whole, the majority of Americans likely have more and higher-quality housing than their grandparents did at the same age, including most poor people. This is significant material progress despite the supply restrictions (which I fully concede are real), and it's similar to, although smaller in size than what happened with clothing and smartphones.

I acknowledge the bow-out intention, and I'll just answer what look like the cruxy bits and then leave it.

There's no actual price signal or ground truth for that portion of the value.

Fortunately we have solved this problem! Slightly simplified: what a vacant lot sells for is the land value, and how much more a developed lot next to it sells for is the value of the improvements. Using the prices from properties recently sold is how they usually calculate this.
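The comparable-sales split described above is just arithmetic. A minimal sketch with hypothetical numbers (the prices are invented for illustration):

```python
def split_land_and_improvements(vacant_lot_price, developed_lot_price):
    """Slightly simplified assessment: a vacant lot's sale price
    approximates the land value; the premium a comparable developed
    lot sells for approximates the value of the improvements."""
    land_value = vacant_lot_price
    improvement_value = developed_lot_price - vacant_lot_price
    return land_value, improvement_value

# A vacant lot recently sold for $200k; a comparable developed
# lot next to it sold for $750k.
land, improvements = split_land_and_improvements(200_000, 750_000)
print(land, improvements)  # 200000 550000
```

In practice assessors use many recent sales rather than a single pair, but the principle is the same: the market price of unimproved land is the ground truth for the land component.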

If it's NOT just using a land-value justification to raise the dollar amounts greatly, please educa

... (read more)
4Dagon
This would alleviate a lot of my concerns.  Sales taxes (on actual sales, as long as it's not imputed or assumed-sale where no money is actually changing hands) have a ton of advantages, not least of which is that the money is ALWAYS there to pay the taxes.  I suspect it won't satisfy the Georgists, though, as it doesn't capture appreciation in value if there's no sale for decades or longer.  Maybe - it does remove the incentive for empty-land speculation.

My objection is to the core of the proposal that it's taxed at extremely high levels, based on theoretical calculations rather than actual use value.

I'm a little confused by what the theoretical calculations are in your description. The way I understand it - which is scarcely authoritative but does not confuse me - is that we have several steps:

  1. Theory: a lot of the value of a piece of property is not because of work done by the owner, but instead because of other people being nearby.
  2. Theory: this is bad. We should remove all the value provided by other peop
... (read more)
2Dagon
[I have bad self-control, so my statements that I'm bowing out of this don't seem to have stuck.  Apologies.] The theoretical calculation problem is in what you call "practical".  There's no actual price signal or ground truth for that portion of the value.  The use of the property combines land and improvement values in a way that's idiosyncratic and inseparable.  That calculation is going to be made up, and wildly inaccurate and unsupportable even in the ideal world where it's not politically adjusted. I'm not sure how to interpret  I know that.  I don't know how the proposal differs from "run it the same way, just with higher values framed as land-value tax".  If the amounts are low, it's workable.  If the amounts are high, it's not.  I don't know if the proposal is to somehow separate the ownership of land and improvements, or if there's something else that makes it practically different from "much higher normal property taxes".  If it's NOT just using a land-value justification to raise the dollar amounts greatly, please educate me.

That isn't how the taxes are assessed, as a practical matter. The value of the land and the value of the buildings are assessed, mostly using market data, and then the applied tax uses the ratio of the land value to the total property value: for example, in an apartment building that fraction of the rent payments is taxed, and when a property is sold that fraction of the sale price is taxed.
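The land-fraction mechanism just described can be sketched directly. This is a hedged illustration, not any jurisdiction's actual formula; the assessed values and the rate parameter are hypothetical:

```python
def land_fraction_tax(land_value, property_value, payment, rate=1.0):
    """Tax the land's share of a payment stream (rent or sale price).
    rate=1.0 represents a 'full' land tax; real proposals vary."""
    land_share = land_value / property_value
    return payment * land_share * rate

# e.g. an apartment building assessed at $1M total, sitting on land
# assessed at $400k, collecting $10k/month in rent:
monthly_tax = land_fraction_tax(400_000, 1_000_000, 10_000)
print(monthly_tax)  # 4000.0 - the land's share of the rent
```

The point of the ratio is that the improvement's share of the rent (here, the other $6k) is untouched, which is the incentive-preserving property Georgists care about.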

I do notice that we don't have any recent examples of the realistically-full land tax interacting with individual home ownership; everywhere we see it is treated the same a... (read more)

3Dagon
Most of my objection (and confusion that it gets handwaved away so often) is NOT that the unimproved theoretical value of land could be taxed. It seems complex and unnecessary, but that's not a unique problem with tax proposals.

My objection is to the core of the proposal: that it's taxed at extremely high levels, based on theoretical calculations rather than actual use value.

I have plenty of other concerns (like how it ACTUALLY works for improved properties - do we split all deeds in two, one for the land and one for the improvements, and allow people to sell them separately? How does that work?), but they weren't the crux of THIS discussion, and I suspect the answer is just "no - this is just regular property taxes, calculated differently (and much higher); we still take the improvements if the tax is unpaid".

“Government will make better resource decisions than profit-motivated private entities”

I think you landed on the crux of it - under the Georgist model, individuals (or firms) still make the decisions about what to do with the resources. What the government does is set a singular huge incentive, strongly in the direction of "add value to the land."

I don't have an answer to this question, but I would register a prediction:

  • Georgism believers < communism believers
  • Georgism popularity > communism popularity

The latter is mostly because there are a bunch of people who really hate communism and will prefer almost literally anything else in a survey.

0Dagon
Kind of.  They can only make decisions that generate enough income to pay the taxes, which are calculated as the theoretical value (maximum rent attainable by any use), not the actual choice.