All of jasoncrawford's Comments + Replies

Yes, they would not be made from mirror components!

Synthetic cells aren't inherently dangerous if they're not mirror cells (and aren't dangerous pathogens of course).

Purplehermann
Is there a reason that random synthetic cells will not be mirror cells?

Failure to detect other life in the universe is only really evidence against advanced intelligent civilizations, I think. The universe could easily be absolutely teeming with bacterial life.

Re “take steps to stop it”, I was replying to @Purplehermann

The asymmetric advantage of bacteria is that they can invade your body but not vice versa.

I think until recently, most scientists assumed that mirror bacteria would (a) not be able to replicate well in an environment without many matching-chirality nutrients, and/or (b) would be caught by the immune system. It's only recently that a group of scientists got more concerned and did a more in-depth investigation of the question.

Yes, antibodies could adapt to mirror pathogens. The concern is that the system which generates antibodies wouldn't be strongly triggered. The Science article says: “For example, experiments show that mirror proteins resist cleavage into peptides for antigen presentation and do not reliably trigger important adaptive immune responses such as the production of antibodies (11, 12).”

Given that mirror life hasn't arisen independently on Earth in ~4B years, I don't think we need to take any steps to stop it from doing so in the future. Either abiogenesis is extremely rare, or when new life does arise naturally, it is so weak that it is outcompeted by more evolved life.

I agree that this is a risk from any extraterrestrial life we might encounter.

Purplehermann
https://english.elpais.com/science-tech/2024-12-31/protocells-emerge-in-experiment-simulating-lifeless-world-there-is-no-divine-breath-of-life.html

We have here some scientists making cells. Looks like a dangerous direction.
PhilGoetz
Stop right there at "Either abiogenesis is extremely rare..." I think we have considerable evidence that abiogenesis is rare: our failure to detect any other life in the universe so far. I think we have no evidence at all that abiogenesis is not rare. (Anthropic argument.)

Stop again at "I don't think we need to take any steps to stop it from doing so in the future". That's not what this post is about. It's about taking steps to prevent people from deliberately constructing it.
mruwnik
It's probably not that large a risk though? I doubt any alien microbes would be that much of a problem to us. It seems unlikely that they would happen to use exactly the same biochemistry as we do, which makes it harder for them to infect/digest us. Chirality is just one of the multitude of ways in which Earth's biosphere is "unique". It's been a while since I was knowledgeable about any of this, but a quick o1 query seems to point in the same direction. Worth going through quarantine, just in case, of course. Though that works for Earth pathogens, which tend to quickly die off without hosts to infect, and that very well might not hold true for more interesting environments.

I appreciate that! Would like to get back to them at some point…

I don't intend to write something anodyne, and don't think I am doing so. Let me know what you think once I'm at least a few chapters in.

I don't think that's right. The world now is much better than the world when it was smaller, and I think that is closely related to population growth. So I think it is actually possible to conclude that more people are better.

TAG
Does the world seem better to young people who are unable to afford housing?
senguidev
I wish too. This is an extraordinarily bold claim though. There's no logical reasoning from "here are a few, sparse, mostly theoretical ideas" to "so this is how this immensely complex system, involving billions of humans and organisations with agency that we have no idea how to model, should behave if we changed some major factor by a few orders of magnitude". You can only make the jump with vibes or politics. And fine, since we have no idea, I'd rather make an optimistic jump too! I hope you manage to distill some nuanced faith in the future; sincere thanks for trying!

Software and the internet give us a much better ability to find each other.

Re competitors, the idea is that we're not all competing for a single prize; we're being sorted into niches. If there is 1 songwriter and 1 lyricist, they kind of have to work together. If there are 100 of each, then they can match with each other according to style and taste. That's not 100x competition, it's just much better matching.
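To illustrate the matching point with a toy model (purely illustrative assumptions of my own: a one-dimensional "style" with uniform random tastes, nothing from the original comment): as the pool grows, each person's best available match gets closer, which is the better-matching effect rather than 100x head-to-head competition.

```python
import random

def avg_best_match_gap(pool_size: int, trials: int = 10_000, seed: int = 0) -> float:
    """Toy model: a songwriter and a pool of lyricists each have a 'style' in [0, 1];
    the songwriter pairs with the closest lyricist. Returns the average style gap."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        songwriter = rng.random()
        lyricists = [rng.random() for _ in range(pool_size)]
        total += min(abs(songwriter - lyricist) for lyricist in lyricists)
    return total / trials

for n in (1, 10, 100):
    print(n, round(avg_best_match_gap(n), 3))
# Larger pools give each person a much closer stylistic match,
# rather than simply multiplying competition for a single prize.
```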

Kaj_Sotala
And yet...

That is a good point. Still, the fact that individual companies, for instance, develop layers of bureaucracy is not an argument against having a large economy. It's an argument for having a lot of companies of different sizes, and in particular for making sure that market entry doesn't become too difficult and that competition is always possible. And maybe at the governance level it is an argument for many smaller nations rather than one world government.

Kaj_Sotala
This is true in principle, but population growth has led to the creation of larger companies in practice. ChatGPT, when I asked it what proportion of the economy is controlled by the biggest 100 companies:

And if the population in every country were to grow, then we'd end up with larger governments even if we kept the current system and never established a world government. To avoid governments getting bigger, you'd need to actively break up countries into smaller ones as their population increased. That doesn't seem like a thing that's going to happen.

I feel that you're only paying attention to the “more geniuses and researchers” part and ignoring the parts about market size, better matching, more niches?

Also, “focus on it to the exclusion of everything else” is a strawman; I'm not advocating that, of course. Certainly increasing intelligence would be good (although we don't know how to do that yet!). Better education would be great, and I am a strong advocate of that. Same for better scientific institutions, etc.

Viliam
I am not blaming you personally, but the Overton window contains population growth and not much else. Improving the population (genetically or by education) would have some effect here, too. Not literally more niches or bigger market size overall, but more niches for smart-people-related things, and more market demand for the stuff smart people buy.

I think the positive externalities of one genius are much greater than the negative externalities of one idiot or jerk. A genius can create a breakthrough discovery or invention that elevates the entire human race. Hard for an idiot or jerk to do damage of equivalent magnitude.

Maybe a better argument is “what about more Hitlers or Stalins?” But I still think that looking at the overall history of humanity, it seems that the positives of people outweigh the negatives, or we wouldn't even be here now.

Bryan Caplan addressed this recently here.

First, this seems to be arguing against a strawman. No one is advocating literally infinite growth forever, which is obviously impossible.

Second, the current reality is not exponential population growth. It is a decelerating population. The UN projections show world population likely leveling off around 10 or 11 billion people in this century, and possibly even declining:

Even if we were to get back on an exponential population growth curve, the limits seem to me to be many orders of magnitude away. I don't see why we would worry about them until we get mu...

avancil
The question of what IS happening and the question of what SHOULD happen with population growth are certainly two different things. My point is that arguments for growth ultimately need to address the questions of how big we should grow, and what happens when we reach that point. If our economy depends on continued growth, that's going to stop working at some point. While the physical limits of the universe are a long way off, there are other limits that we could hit much sooner.

Underlying your pro-growth arguments, there is an assumption that collective intelligence can continue to grow without limits, leading to technology that can grow without limits. I would question those assumptions. And of course, your post is ignoring the costs of growth. Ideas are non-rival goods, but space on this planet, and physical resources, are rival goods. If intelligence (and the resulting technology) reaches a point of diminishing returns, but the costs of growth hit an upward inflection, you quickly hit a limit. For example, larger, more complex systems risk becoming less stable, while coordination problems can grow factorially.

Reasonable people can disagree on whether the current population is too big, too small, or about right, but "ever larger" is not going to work as an answer. At some point, we need to either figure out how to have a stable population, or deal with the less pleasant alternatives.

Investigators get fired when they aren't being productive. This does happen. The difference in the block model is that whether someone is being productive is determined by their manager, with input from their peers.

Who says they would be MBAs? The best science managers are highly technical themselves and started out as scientists. It's just that their career from there evolves more in a management direction.

Once you eliminate the requirement that the manager be a practicing scientist, the roles will become filled with people who like managing, and are good at politics, rather than doing science. I’m surprised this is controversial. There is a reason the chair of academic departments is almost always a rotating prof in the department, rather than a permanent administrator. (Note: “was once a professor” is not considered sufficient to prevent this. Rather, profs understand that serving as chair for a couple years before rotating back into research is an unpl...

I really don't think a group of, say, university professors could join in such a contract. For one, I'm not sure their universities would let them, especially if they weren't all at the same university. For another, the granting organizations (e.g., NIH) put a lot of restrictions on the grant money. You can't redistribute it to other labs.

Also, the grants are still going to be small ones to fund a single lab, not large ones that could fund hundreds of researchers. If everyone still has to seek grants you haven't really solved the problem, even if they are spreading risk/reward somehow.

Yes, but those researchers are typically grad students. To become a professor, get tenure, get your own grants, etc., you need to go run your own lab. At least that is my understanding of the system.

There is certainly no moral equivalence between the two of them; SBF was a fraud and Toner was (from what I can tell) acting honestly according to her convictions. Sorry if I didn't make that clear enough.

But I disagree about destroying OpenAI—that would have been a massive destruction of value and very far from justified IMO.

ChristianKl
When negotiating, it can be useful to be open to outcomes that are a net destruction of value, even if the outcome is not what you ideally want.

Did Sam threaten to take the team with him, or did the team threaten to quit and follow him? From what I saw it looked like the latter.

Amalthea
I mean, he didn't threaten to take the team with him; he was just going to do so. We also don't know what went on behind the scenes, and it seems plausible that many OpenAI employees were (mildly) pressured into signing by the pro-Sam crowd. So if, counterfactually, he hadn't been willing to destroy the company, he could have assuaged the people closest to him, and likely the dynamics would have been much different.

I was basing my (uncertain) interpretation on a number of sources, and I only linked to one, sorry.

In particular, the only substantive board disagreement that I saw was over Toner's report that was critical of OpenAI for releasing models too quickly, and Sam being upset over it.

Thanks. I was quoting Semafor, but on a closer reading of Tallinn's quote I agree that they might have been misinterpreting him. (Has he commented on this, does anyone know?)

Yes, but not all of it is well-understood as problem-solving ahead of time:

It feels strained to say that Henry Ford solved the problem that people couldn’t move over land faster than horses. Or that Apple solved the problem that people couldn’t carry the internet in their pockets. Or that telephones solved the problem that people couldn’t communicate in real time without being in the same room. The list of technologies that didn’t solve a problem except in retrospect is long.

https://blog.spec.tech/p/is-necessity-actually-the-mother 

Thank you! That means a lot to me, especially since these posts are never the ones that go viral, so it's good to know that someone appreciates them.

matto
Very much! Apart from enjoying it myself, I usually pick some things out and share them with friends and family as a way to offset some of the unagentic doom and gloom present in mainstream media :)

I don't think this is exactly correct: I'm pretty sure that many cities including London and Paris had sewer systems much earlier than that, although they modernized them / made major overhauls in the 19th century. (Anyway, kind of beside the point of the linked thread.)

ChristianKl
There is a claim that there was city planning for thousands of years. If you set aside that claim and look at older parts of many European towns, the streets aren't straight, because nobody planned them beforehand; they grew more organically. What happened in the 19th century in Europe was that people actually started city planning. American cities might have had a street network that was drawn out in advance before that point, but in most European cities there wasn't planning.

Update: I’m already planning to give brief remarks at a few events coming up very soon:

If you’re in/near Bangalore, hope to see you there!

Maybe “general truths” is still too broad. Let's approach this a different way. I submit that science is the best and only method for establishing a certain class of truths. I'm not totally sure how to describe that class. They are general truths about the world, but maybe it's narrower than that. But I'm pretty sure there is such a class. Do you agree? How would you describe the type of knowledge that science (and only science) can get us?

ChristianKl
I think a key feature of science is that it's about public knowledge as opposed to private knowledge. You can verify whether or not a scientific claim is true. If, on the other hand, you are dealing with a superforecaster, you can verify whether the superforecaster has a good overall track record, but you can't verify whether specific claims are true in the way you can with scientific claims. You can write down all your scientific knowledge in a textbook, and then the knowledge is independent of the reader. An expert can't write down his implicit knowledge in a similar way, such that a reader gets all the knowledge by reading it.

Science is inherently about using systematized ways to understand a subject. An expert who unsystematically explores the subject can still understand all the truths about the subject.

One of the interesting things about LLMs is that people used to believe that an AI has to reason much more systematically to be truly intelligent. LLMs proved that wrong and show that a very unsystematic approach still leads to an AI that's more intelligent than all the approaches to building AI in a more systematic way.

That means that you can't easily verify whether the claims of the LLM are true, but I still think that the LLM can learn "general truths" from the data it has access to.

Good point. Maybe I should say it is the only method for finding out general truths about the world. It's not the only way to answer specific, narrow, practical questions like whether a particular building or road can be built.

ChristianKl
Do you believe that philosophy is science? Do you believe it can be used to find out any general truths about the world? A majority of knowledge that human experts in most fields have is implicit knowledge and not the kind of knowledge that you could write down in a book. Do you think that knowledge contains no general truths about the world?

Thanks Zac. I don't have an opinion on this myself but I'll add your comment to this digest and mention it in the next one as well.

Zac Hatfield-Dodds
Thanks - https://blog.opensource.org/metas-llama-2-license-is-not-open-source/ is less detailed but as close to an authoritative source as you can get, if that helps. And yes, this opinion is my own. More relevant than my employer is my open source experience: e.g., I'm a Fellow of the Python Software Foundation, "nominated for their extraordinary efforts and impact upon Python, the community, and the broader Python ecosystem".

Counterpoint: The American South very quickly adopted one of the classic inventions of the Industrial Revolution, the cotton gin. And it has been proposed that this actually helped entrench slavery in the South.

Yes, see my reply to Vaniver above.

I think everything you say about the printing press is correct and important, I would just caution against overfocusing on the printing press as the one pivotal cause. I think it was part of a broader trend.

[anonymous]
Per the link you cited: "There must be some very deep underlying trend that explains these non-coincidences. And that is why I am sympathetic to explanations that invoke fundamental changes in thinking."

The question then converts to: why did this happen when it happened, and not earlier or later? The "printing press theory" proposes that people could not change their thinking without the information to show where it was flawed (by having something to compare to), and the other critical element is that it's a ratchet. Each "long tail" theory that someone writes down continues to exist because a press can make many copies of their book. Prior to this, ideas that only sort of worked but were not that valuable would only get hand-copied a few times and then lost. This is one of the reasons why genomes are able to evolve: multiple redundant copies of the same gene allow one main copy to keep the organism reproducing while the other copies change with mutations, exploring the fitness space for an edge.

If you think about how you might build an artificial intelligence able to reason about a grounded problem, take a simple one: pathing an autonomous car. One way to solve the problem is to use a neural network that generates many plausible paths the car might take over future instants in time (anyone here on LessWrong has used such a tool). Then you would evaluate the set of paths against heuristics for "goodness" and choose the max(goodness(set(generated paths))) to actually do in the world.

Similarly, an AI reasoning over scientific theories need not "stake its reputation" on particles or waves, to name a famous dilemma. It's perfectly feasible to believe both theories at once, to weight your predictions by evaluating any inputs against both theories, and to weight each by how confident you are that a particular theory applies in this domain. An AI need not commit to two theories; it can easily maintain sets of thousands and be able to…

Yes, the famous Needham question. It is tougher to answer. Mokyr offers some thoughts in A Culture of Growth. I'm sure there are other hypotheses but I don't have pointers right now.

You're right, that was my mistake, I wasn't reading it carefully enough and I summarized it incorrectly. Fixed now, thanks.

“You can’t deduce anything about the validity of someone’s position from their willingness or unwillingness to debate it”

Ben Bayer

ChristianKl
The validity of someone's position is not the only factor that goes into willingness or unwillingness to debate, but claiming that it plays no factor at all seems strange to me.

The article has a detailed analysis that comes up with a much lower cost. If you think that analysis goes wrong, I'd be curious to understand exactly where?

bhauth
I sure didn't see one! I saw some analysis of the cost of energy used for grinding up rock, with no consideration of other costs. Can you point me to the section with detailed analysis of the costs of mining, crushing, and spreading the rock, or the capital costs of grinders? A detailed analysis would have numbers for these things, not just dismiss them.

OK then. Digging up and crushing olivine to gravel would be $20-30/ton. We know this from the cost of gravel and the availability of olivine deposits. That alone makes this uneconomical, yet the author just dismisses these costs as negligible next to the cost of milling. So either the dismissal is wrong, or the milling cost estimation is wrong, or both. Why is the cost per ton of CO2 lower than the cost per ton of rock, when 1 ton of rock stores much less than 1 ton of CO2?

That's quite a non sequitur! We know what grinding rock to fine powder costs. Use those costs, not the cost of electricity.

Trust is important, but… the Church banning cousin-marriage as the primary cause of a high-trust society? I find it hard to believe. No time now to elaborate on my reasons, but if people are really interested maybe I will write something up later.

I think in Allen's book there is both a generic claim of high wages, and some specific analyses of technologies like the spinning jenny and whether it would have paid to adopt them.

The builders' wages are part of the generic claim, because there was no building-related technology that was analyzed.

The spinners' wages might be related to the spinning jenny ROI calculations, but I haven't gone deep enough on the analysis to understand how the paper that was linked might affect those calculations.

Maybe! Or maybe you could interest him in a printing press, or a sextant, or at least a plow? That is sort of my point in the second-to-last paragraph (about shape/direction vs. rate).

That is one of many hypotheses. (I haven't studied all of them yet, but I'd be surprised if I ended up ranking that even in the top three causes.)

jmh
That might be too quick a dismissal, given the importance that is typically assigned to trust for well-functioning economies and economic development. But I think the view of some top three, regardless of what the three are, is difficult to accept as an unqualified statement. Seems like we're talking about a very complex and complicated area that will not distill down to some simple map of that territory. I think we will find that the map will need to have a larger number of layers that can be applied than just three. Which layers one will need or find most informative will depend a good bit on what focus or specific question or framing one starts with. I thought that type of view was implied in your conclusion, so I was a bit surprised to see that parenthetical statement.

It is a spike in the death rate, from covid.

Insurance is exactly a mechanism that transforms high-variance penalties in the future into consistent penalties in the present: the more risky you are, the higher your premiums.
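A minimal sketch of that idea with made-up numbers (the function and figures below are purely illustrative, not from any real actuarial model): an approximately fair premium is just expected loss plus overhead, so a rare, large future penalty becomes a steady present cost that scales with how risky you are.

```python
def annual_premium(loss_probability: float, loss_size: float, loading: float = 0.2) -> float:
    """Expected annual loss plus a loading factor for the insurer's overhead and profit.

    This is what turns a high-variance future penalty (small chance of a huge loss)
    into a consistent present cost proportional to riskiness.
    """
    return loss_probability * loss_size * (1 + loading)

# Hypothetical operators facing the same $10M harm, differing only in risk:
print(annual_premium(0.001, 10_000_000))  # careful operator: 12,000 per year
print(annual_premium(0.010, 10_000_000))  # risky operator: 120,000 per year
```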

mako yass
Then insurance as you've defined it is not a specific mechanism, it's a category of mechanisms, most of which don't do the thing they're supposed to do. I want to do mechanism design. I want to make a proposal specific enough to be carried out irl.

Yes, and similarly, William Crookes warning about a fertilizer shortage in 1898 was correct. Sometimes disaster truly is up ahead and it's crucial to change our course. What makes the difference IMO is between saying “this disaster will happen and there's nothing we can do about it” vs. “this disaster will happen unless we recant and turn backwards” vs. “this disaster might happen so we should take positive steps to make sure it doesn't.”

Right, and as Tyler Cowen pointed out in the article I linked to, we don't hold the phone company liable if, e.g., criminals use the telephone to plan and execute a crime.

So even if/when liability is the (or part of the) solution, it's not simple or obvious how to apply it. It needs good, careful thinking each time about where the liability should exist, under what circumstances, etc. This is why we need experts in the law thinking about these things.

ChristianKl
I have the impression that your post asserts that the review-and-approval paradigm is in some way more problematic than other paradigms of how to regulate. It seems to me unclear why that would be true.

While it sounds absurd to talk about this, there are legal proposals to do that, at least for some crimes. In the EU there's the idea that there should be machine learning run on devices to detect when the user engages in certain crimes and alert the authorities. Brazil is currently discussing legal responsibility for social media companies that publish "fake news".

Looking at the “accelerating projection of 1960–1976” data points here, it reaches almost 3 TW by the mid-2010s:

According to Our World in Data's energy data explorer, world electricity generation in 2021 was 27,812.74 TWh, which is 3.17 TW (using 1W = 8,766 Wh/year).
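A quick back-of-the-envelope sketch of that conversion, using the figure quoted above and roughly 8,766 hours per year:

```python
# Convert annual electricity generation (TWh per year) to average power (TW).
HOURS_PER_YEAR = 8_766                 # ~365.25 days x 24 hours
GENERATION_TWH_2021 = 27_812.74        # Our World in Data figure quoted above

average_power_tw = GENERATION_TWH_2021 / HOURS_PER_YEAR
print(round(average_power_tw, 2))      # ~3.17 TW
```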

Comparing almost 3 TW at about 2015 (just eyeballing the chart) to 3.17 TW in 2021, I say those are roughly equal. I did not make anything “significantly shinier”, or at least I did not intend to.

Thomas Sepulchre
Crystal-clear, thank you!