This is a fun idea! I was recently poking at field line reconnection myself, in conversation with Claude.
I don't think the energy balance turns out in the idea's favor. Here are the heuristics I considered:
Regarding The Two Cultures essay:
I have gained so much buttressing context from reading dedicated history about science and math that I have come around to a much blunter position than Snow's. I claim that an ahistorical technical education is technically deficient. If a person reads no history of math, science, or engineering, then they will be a worse mathematician, scientist, or engineer, full stop.
Specialist histories can show how the big problems were really solved over time.[1] They can show how promising paths still wind up being wrong, and the ...
I would like to extend this slightly by switching perspective to the other side of the coin. The drop-in remote worker is not a problem of anthropomorphizing AI, so much as it is anthropomorphizing the need in the first place. Companies create roles with the expectation people will fill them, but that is the habit of the org, not the threshold of the need.
Adoption is being slowed down considerably by people asking for AI to be like a person, so we can ask that person to do some task. Most companies and people are not asking more directly for an AI to meet a need. Figuring out how to do that is a problem to solve by itself, and there hasn't been much call for it to date.
I agree that the economic principles conflict; you are correct that my question was about the human labor part. I don't even require that they be substitutes; at the level of abstraction we are working in, it seems perfectly plausible that some new niches will open up. Anything would qualify, even if it is some new-fangled job title like 'adaptation engineer' or something that just preps new types of environments for teleoperation before moving onto the next environment like some kind of meta railroad gang. In this case the value of human labor might stay ...
Obviously, at least one of those predictions is wrong. That’s what I said in the post.
Does one of them need to be wrong? What stops a situation where only one niche, or a few niches, are high value and the rest don't provide enough to eat? This is pretty much exactly how natural selection operates, for example.
I agree fake pictures are harder to threaten with. But consider that the deepfake method makes everyone a potential target, rather than only targeting the population who would fall for the relationship side of the scam.
There are other reasons I think it would be grimly effective, but I am not about to spell it out for team evil.
He also claims that with the rise of deepfakes you can always run the Shaggy defense if the scammer actually does pull the trigger.
With the rise of deepfakes, the scammers can skip steps 1-3, and also more easily target girls.
Chip fabs and electricity generation are capital!
Yes, but so are ice cream trucks and the whirligig rides at the fair. Having “access to capital” is meaningless if you are buying an ice cream truck, but means a great deal if you are buying a rare earth refinery.
My claim is that the big distinction now is between labor and capital because everyone had about an equally hard time getting labor; when AI replacement happens and that goes away, the next big distinction will be between different types of what we now generically refer to as capital. The term is uselessly broad in my opinion: we need to go down at least one level towards concreteness to talk about the future better.
I agree with the ideas of AI being labor-replacing, and I also agree that the future is likely to be more unequal than the present.
Even so, I strongly predict that the post-AGI future will not be static. Capital will not matter more than ever after AGI: instead I claim it will be a useless category.
The crux of my claim is that when AI replaces labor and buying results is easy, the value will shift to the next biggest bottlenecks in production. Therefore future inequality will be defined by the relationship to these bottlenecks, and the new distinctions wil...
This is a fantastic post, immediately leaping into the top 25 of my favorite LessWrong posts all-time, at least.
I have a concrete suggestion for this issue:
They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring how they could have solved it anyway.
I propose switching gears at this point to make "Why is the problem impossible?" the actual focus of their efforts for the remainder of the time period. I predict this will consistently yield partial progress among at least a chunk of ...
I think this post is quite important because it is about Skin in the Game. Normally we love it, but here is the doubly-interesting case of wanting to reduce the financial version in order to allow the space for better thinking.
The content of the question is good by itself as a moment in time of thinking about the problem. The answers to the question are good both for what they contain and for what they do not contain, by which I mean they show what we would want to see come up in questions of this kind to answer them better.
As a follow-up, I would like to see a more...
But if you introduce AI into the mix, you don’t only get to duplicate exactly the ‘AI shaped holes’ in the previous efforts.
I have decided I like the AI shaped holes phraseology, because it highlights the degree to which this is basically a failure in the perception of human managers. There aren't any AI shaped holes because the entire pitch with AI is that we have to tell the AI what shape to take. Even if we constrain ourselves to LLMs, the AI docs literally and exactly describe how to tell it what role to fill.
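For instance, a minimal sketch using the Anthropic Python SDK (the model alias and the role text here are placeholder assumptions, not anyone's recommended setup):

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

# The "shape" we want the AI to take is stated explicitly, up front.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    system="You are a careful copy editor. Flag errors; do not rewrite wholesale.",
    messages=[{"role": "user", "content": "Please review the attached paragraph."}],
)
print(response.content[0].text)
```

The role doesn't exist until you write that system string; that's the whole point.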
Let’s say Company A can make AGIs that are drop-in replacements for highly-skilled humans at any existing remote job (including e.g. “company founder”), and no other company can. And Company C is a cloud provider. Then Company A will be able to outbid every other company for Company C’s cloud compute, since Company A is able to turn cloud compute directly into massive revenue. It can just buy more and more cloud compute from C and every other company, funding itself with rapid exponential growth, until the whole world is saturated.
I think this is outside t...
I endorse this movie unironically. It is a classic film for tracking what information you have and don't have, how many possibilities there are, etc.
Also the filmmaker maintains to this day that they left the truth of the matter in the final scene undefined on purpose, so we are spared the logic being hideously hacked-off to suit the narrative and have to live with the uncertainty instead.
I think the best arguments are those about the costs to the AI of being nice. I don't believe the AI will be nice at all because neglect is so much more profitable computation-wise.
This is because even processing the question of how much sunlight to spare humanity probably costs more in expectation than the potential benefit of that sunlight to the AI.
First and least significant, consider that niceness is an ongoing cost. It is not a one-time negotiation to spare humanity 1% of the sun; more compute will have to be spent on us in the future. That compute w...
I'm not familiar with the details of Robin's beliefs in the past, but it sure seems lately he is entertaining the opposite idea. He's spending a lot of words on cultural drift recently, mostly characterizing it negatively. His most recent on the subject is Betrayed By Culture.
I happened to read a Quanta article about equivalence earlier, and one of the threads is the difficulty of a field applying a big new concept without the expository and distillation work of putting stuff into textbooks/lectures/etc.
That problem pattern-matches with the replication example, but well-motivated at the front end instead of badly-motivated at the back end. It still feels like exposition and distillation are key tasks that govern the memes-in-the-field passed among median researchers.
I strongly suspect the crux of the replication crisis example ...
To me memetic normally reads something like "has a high propensity to become a meme" or "is meme-like." I had no trouble interpreting the post on this basis.
I push back against trying to hew closely to usages from the field of genetics. Fundamentally I feel like that is not what talking about memes is for; it was an analogy from the start, not meant for the same level of rigor. Further, memes and meme-likeness are much more widely talked about than genetics, so insofar as we privilege usage considerations I claim switching to one matching geneti...
Welcome!
The short and informal version is that epistemics covers all the stuff surrounding the direct claims. Things like credence levels, confidence intervals, probability estimates, etc are the clearest indicators. It also includes questions like where the information came from, how it is combined with other information, what other information we would like to have but don't, etc.
The most popular way you'll see this expressed on LessWrong is through Bayesian probability estimates and a description of the model (which is to say the writer's beliefs about ...
May I throw geometry's hat into the ring? If you consider things like complex numbers and quaternions, or even vectors, what we have are two-or-more-dimensional numbers.
I propose that units are a generalization of dimension beyond spatial dimensions, and therefore geometry is their progenitor.
It's a mathematical Maury Povich situation.
I feel like this is mostly an artifact of notation. The thing that is not allowed with addition or subtraction is simplifying to a single term; otherwise it is fine. Consider:
10x + 5y - 5x - 10y = 10x - 5x + 5y - 10y = 5x - 5y
So, everyone reasons to themselves, what we have here is two numbers. But hark, with just a little more information, we can see more clearly we are looking at a two-dimensional number:
5x - 5y = 5
5x = 5y + 5
5x - 5 = 5y
x - 1 = y
y = x - 1
Which is to say, a line.
This is what is happening with vectors, complex numbers, quaternions, etc.
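A toy sketch of what I mean (a hypothetical `Quantity` class, not any standard library): treat each unit as its own dimension, so addition can only combine like components, exactly as with vectors:

```python
from collections import defaultdict

class Quantity:
    """Toy multi-dimensional number: each unit is its own dimension."""
    def __init__(self, **components):          # e.g. Quantity(x=10, y=5)
        self.components = defaultdict(float, components)

    def __add__(self, other):
        total = defaultdict(float, self.components)
        for unit, coeff in other.components.items():
            total[unit] += coeff                # only like units combine
        return Quantity(**total)

    def __sub__(self, other):
        return self + Quantity(**{u: -c for u, c in other.components.items()})

    def __repr__(self):
        return " + ".join(f"{c:g}{u}" for u, c in self.components.items() if c) or "0"

# 10x + 5y - (5x + 10y) leaves two components; it never collapses to one term.
print(Quantity(x=10, y=5) - Quantity(x=5, y=10))   # 5x + -5y
```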
The post anchors on the Christiano vs Eliezer models of takeoff, but am I right that the goal more generally is to disentangle the shape of progress from the timeline for progress? I strongly support disentangling dimensions of the problem. I have spoken against using p(doom) for similar reasons.
Because that method ignores everything we know about prices. People consume more of something the lower the price is, even more so when it is free: consider the meme about all the games that have never been played in people's Steam libraries because they buy them in bundles or on sale days. There are ~zero branches of history where they sell as many units at retail as are pirated.
A better-but-still-generous method would be to project the increased sales under the lower price curve, and then claim all of that as damages, reasoning that the excess supply deprived the company of the opportunity to make those sales later.
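To make the contrast concrete, a toy comparison with entirely made-up numbers (the 5% conversion rate is an assumption for illustration, not a measured figure):

```python
# Toy comparison (made-up numbers) of piracy "damages" estimates.
downloads = 1_000_000
retail_price = 60.0

# Naive industry method: every download counts as a lost retail sale.
naive_damages = downloads * retail_price                 # $60M

# Price-aware method: only some downloaders would ever have bought at
# retail; many "consume" only because the price is zero.
would_have_bought_at_retail = 0.05                       # assumed conversion rate
adjusted_damages = downloads * would_have_bought_at_retail * retail_price  # $3M

print(f"naive: ${naive_damages:,.0f}  adjusted: ${adjusted_damages:,.0f}")
```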
This is not an answer, but I register a guess: the number relies on claims about piracy, which is to say illegal downloads of music, movies, videogames, and so on. The problem is that the conventional numbers for this are utter bunk, because the way it gets calculated by default is they take the number of downloads, multiply it by the retail price, and call that the cost.
This would be how they get the cost of cybercrime to significantly exceed the value of the software industry: they can do something like take the whole value of the cybersecurity industry, better-measured losses like from finance and crypto, and then add bunk numbers for piracy losses from the entertainment industry on top of it.
This feels like a bigger setback than the generic case of good laws failing to pass.
What I am thinking about currently is momentum, which is surprisingly important to the legislative process. There are two dimensions that make me sad here:
I wonder how the consequences to reputation will play out after the fact.
The best LW Petrov Day morals are the inadvertent ones. My favorite was 2022, when we learned that there is more to fear from poorly written code launching nukes by accident than from villains launching nukes deliberately. Perhaps this year we will learn something about the importance of designing reasonable prosocial incentives.
I think this is actually wrong, because of synthetic data letting us control what the AI learns and what they value, and in particular we can place honeypots that are practically indistinguishable from the real world
This sounds less like the notion of the first critical try is wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?
Or is the position stronger, more like we don't need to solve the alignment problem in general, due to our ability to run simulations and use synthetic data?
This is kind of correct:
This sounds less like the notion of the first critical try is wrong, and more like you think synthetic data will allow us to confidently resolve the alignment problem beforehand. Does that scan?
but my point is this shifts us from a one-shot problem in the real world to a many-shot problem in simulations based on synthetic data before the AI gets unimaginably powerful.
We do still need to solve it, but it's a lot easier to solve problems when you can turn them into many-shot problems.
Following on this:
Moreover, even when that dataset does exist, there often won’t be even the most basic built-in tools to analyze it. In an unusually modern manufacturing startup, the M.O. might be “export the dataset as .csv and use Excel to run basic statistics on it.”
I wonder how feasible it would be to build a manufacturing/parts/etc company whose value proposition is solving this problem from the jump. That is to say, redesigning parts with the sensors built in, with accompanying analysis tools, preferably as drop-in replacements where possible. In th...
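As a tiny illustration of the baseline workflow the quote describes (hypothetical file and column names, assuming pandas is available):

```python
# A minimal sketch of the "export to .csv and run basic statistics" baseline.
import pandas as pd

df = pd.read_csv("sensor_export.csv")       # hypothetical export from the line
print(df["torque_nm"].describe())           # the "basic statistics" step
print(df["torque_nm"].rolling(100).mean())  # drift check a built-in tool could ship with
```

The value proposition above is essentially: this analysis arrives with the part, instead of being reconstructed by hand afterward.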
I also think he's wrong in the particulars, but I can't quite square it back to his perspective once the particulars are changed.
The bluntest thing that is wrong is that you can specify as precise a choice as you care to in the prompt, and the models usually respond. The only hitch is that you have to know those choices beforehand, whereas it would be reasonable to claim that someone like a photographer is being compelled to make choices they did not know about a priori. If that winds up being important then it would be more like the artist has to make and e...
The Ted Chiang piece, on closer reading, seems to be about denying the identity of the AI prompter as an artist rather than speaking to the particular limitations of the tool. For those who did not read, his claim is:
I think this is an actual interesting question, and roughly agree with his frame, but he's just actually wrong on the particulars. AI prompting involves tons of choices (in particular because you're usually creating art for some particular context, and deciding what sort of art to query the AI for is at least one important choice; I also almost always generate at least 10 different images or songs or whatever, shifting my prompt as I go).
I strongly endorse you writing that post!
Detailed histories of field development in math or science are case studies in deconfusion. I feel like we have very little of this in our conversation on the site relative to the individual researcher perspective (like Hamming’s You & Your Research) or an institutional focus (like Bell Labs).
Why do I think it's overrated? I basically have five reasons:
The amount of time and effort you can invest into them is on a continuous scale. The more time you invest, the more impact you'll have. However, what I can say is if you invest any time at all into advocacy, writing, or trying to communicate ideas to others, you should be doing that on Wikipedia instead. It's like a blog where you can get thousands of views a day if you pick a popular article to work on.
The claim that zoning restrictions are not a taking also goes against the expert consensus among economists about the massive costs that zoning imposes on landowners.
I would like to know more about how the law views opportunity costs. For most things, such as liability, it seems to only accept costs in the normal sense of literally had to pay out of pocket X amount; for other things like worker's comp it is a defined calculation of lost future gains, but only from pre-existing arrangements like the job a person already had. It feels like the only time I see opportunity costs is lumped in with other intangibles like pain and suffering.
Independently of the other parts, I like this notion of poverty. I head-chunk the idea as: any external thing of which a person is struggling to keep the minimum; that is poverty.
This seems very flexible, because it isn't a fixed bar like an income level. It also seems very actionable, because it is asking questions of the object-level reality instead of hand-wavily abstracting everything into money.
Over at Astral Codex Ten is a book review of Progress and Poverty with three follow-up blog posts by Lars Doucet. The link goes to the first blog post because it has links to the rest right up front.
I think it is relevant because Progress and Poverty is the book about:
...strange and immense and terrible forces behind the Poverty Equilibrium.
The pitch of the book is that the fundamental problem is economic rents deriving from private ownership over natural resources, which in the book means land. As a practical matter the focus on land rents in the book hea...
I feel like the absence of large effects is to be expected during a short-term experiment. It would be deeply shocking to me if there was a meaningful shift in stuff like employment or housing in any experiment that doesn't run for a significant fraction of the duration of a job or lease/rental agreement. For a really significant study you'd want to target the average in an area, I expect.
This is why San Francisco was chosen as the example - at least over the last decade or so it has been one of the most inelastic housing supplies in the U.S.
You are therefore exactly correct: it does not comply with basic supply and demand. This is because basic supply and demand usually do not apply to housing in American cities, due to legal constraints on supply and subsidies for demand.
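A toy model of that dynamic, with invented numbers and deliberately crude linear curves:

```python
# Toy model (invented numbers): linear demand, supply capped by zoning.
def demand_q(price, subsidy=0):
    # a demand-side subsidy acts like a price cut for buyers
    return 1000 - 2 * (price - subsidy)

def clearing_price(cap=None, subsidy=0):
    # coarse search for the lowest price where supply meets demand
    for p in range(0, 2001):
        supply = 10 * p if cap is None else min(10 * p, cap)
        if supply >= demand_q(p, subsidy):
            return p

print(clearing_price())                      # 84: unconstrained market
print(clearing_price(cap=200))               # 400: the zoning cap alone
print(clearing_price(cap=200, subsidy=100))  # 500: subsidy raises price, quantity stays 200
```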
I acknowledge the bow-out intention, and I'll just answer what look like the cruxy bits and then leave it.
There's no actual price signal or ground truth for that portion of the value.
Fortunately we have solved this problem! Slightly simplified: what a vacant lot sells for is the land value, and how much more a developed lot next to it sells for is the value of the improvements. Using the prices from properties recently sold is how they usually calculate this.
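With made-up numbers, the arithmetic is just:

```python
# Toy example: separating land value from improvement value via comparable sales.
vacant_lot_sale    = 100_000  # a vacant lot on the block just sold for this
developed_lot_sale = 450_000  # a comparable developed lot next door sold for this

land_value        = vacant_lot_sale                       # the land alone
improvement_value = developed_lot_sale - vacant_lot_sale  # value of the buildings
print(land_value, improvement_value)                      # 100000 350000
```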
...If it's NOT just using a land-value justification to raise the dollar amounts greatly, please educa
My objection is to the core of the proposal that it's taxed at extremely high levels, based on theoretical calculations rather than actual use value.
I'm a little confused by what the theoretical calculations are in your description. The way I understand it - which is scarcely authoritative but does not confuse me - is that we have several steps:
That isn't how the taxes are assessed, as a practical matter. The value of the land and the value of the buildings are assessed, mostly using market data, and the applied tax is based on the ratio of the land value to the total property value: in an apartment building that fraction is taxed out of the rent payments, and when a property is sold that fraction is taxed from the sale price.
I do notice that we don't have any recent examples of the realistically-full land tax interacting with individual home ownership; everywhere we see it is treated the same a...
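A toy sketch of that assessment flow as I understand it (all numbers invented):

```python
# Toy sketch: apply a land-value tax as the land fraction of rent and sale price.
land_value     = 100_000
building_value = 350_000
land_fraction  = land_value / (land_value + building_value)  # ~0.222

annual_rent = 36_000
sale_price  = 500_000

lvt_from_rent = land_fraction * annual_rent  # portion of rent attributable to land
lvt_from_sale = land_fraction * sale_price   # portion of a sale attributable to land

print(round(lvt_from_rent), round(lvt_from_sale))  # 8000 111111
```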
“Government will make better resource decisions than profit-motivated private entities”
I think you landed on the crux of it - under the Georgist model, individuals (or firms) still make the decisions about what to do with the resources. What the government does is set a singular huge incentive, strongly in the direction of "add value to the land."
I don't have an answer to this question, but I would register a prediction:
The latter is mostly because there are a bunch of people who really hate communism and will prefer almost literally anything else in a survey.
I like this effort, and I have a few suggestions:
- Humanoid robots are much more difficult than non-humanoid ones. There are a lot more joints than in other designs; the balance question demands both more capable components and more advanced controls; as a consequence of the balance and shape questions, a lot of thought needs to go into wrangling weight ratios, which means preferring more expensive materials for lightness, etc.
- In terms of modifying your analysis, I think this cashes out as greater material intensity; the calculations here are done by weight (a rough sketch follows below).
...
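Roughly, the weight-based adjustment might look like this (all numbers invented; the point is only the shape of the calculation):

```python
# Toy cost-by-weight model (invented numbers): balance and weight-ratio
# constraints push humanoids toward pricier, lighter materials.
def robot_material_cost(mass_kg, material_cost_per_kg):
    return mass_kg * material_cost_per_kg

wheeled_arm = robot_material_cost(mass_kg=150, material_cost_per_kg=15)   # steel-heavy
humanoid    = robot_material_cost(mass_kg=60,  material_cost_per_kg=120)  # aluminum/carbon

print(wheeled_arm, humanoid)  # 2250 7200: the lighter humanoid still costs ~3x in materials
```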