All of Jeffrey Heninger's Comments + Replies

I would also make the same prediction for Q > 10. Or when CFS first sells electricity to the grid. These will be farther into the future, but I do not think that this culture will have changed by then.

I think that I predict the opposite (conditional on what exactly is being predicted).

What exactly would count as a GPT-3 moment for fusion? How about an experiment demonstrating reactor-like conditions? This is roughly equivalent to what I referred to as 'getting fusion' in my book review.

My prediction is that, after Commonwealth Fusion Systems gets Q > 5 on SPARC, they will continue to supply or plan to supply HTS tape to at least 3 other fusion startups.

3mishka
I'd say that the ability to produce more energy overall than what is being spent on the whole cycle would count as a "GPT-3 moment". No price constraints, so it does not need to reach the level of "economically feasible", but it should stop being "net negative" energy-wise (when one honestly counts all energy inputs needed to make it work). I, of course, don't know how to translate Q into this. GPT-4o tells me that it thinks Q=10 is approximately what is needed for that ("engineering break-even", the reactor-level energy balance), at least for some of the designs, and Q in the neighborhood of 20-30 is what's needed for economic viability, but I don't really know if these are good estimates. But assuming that these estimates are good, Q passing 10 would count as the GPT-3 moment.

What happens then might depend on the economic forecast (what's the demand for energy, what are the expected profits, and so on). If they only expect to make profits typical for public utilities, and the whole thing is still heavily oriented towards publicly regulated setups, I would expect continuing collaboration. If they expect some kind of super-profits, with market share being really important and with expectations of chunks of it being really lucrative, then I would not bet on continuing collaboration too much...
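One way to make the Q-to-whole-cycle translation concrete is a toy plant-level energy balance. Every efficiency below is an illustrative assumption for a hypothetical plant, not a number for any real design:

```python
def engineering_gain(q_plasma, eta_thermal=0.4, eta_heating=0.5, aux_fraction=0.2):
    """Very rough whole-plant gain: electricity out / electricity in.

    q_plasma: fusion power divided by injected heating power (plasma Q)
    eta_thermal: thermal-to-electric conversion efficiency (assumed)
    eta_heating: wall-plug efficiency of the heating systems (assumed)
    aux_fraction: other plant loads, as a fraction of heating power (assumed)
    """
    p_heat = 1.0                                   # normalize heating power
    p_electric_out = eta_thermal * q_plasma * p_heat
    p_electric_in = p_heat / eta_heating + aux_fraction * p_heat
    return p_electric_out / p_electric_in

# With these (favorable) assumptions, plasma Q = 10 gives a plant-level
# gain of about 1.8 -- past break-even; with less favorable efficiencies,
# whole-cycle break-even can easily require plasma Q near 10.
print(engineering_gain(10))
```

The point of the sketch is just that plasma Q and "net positive energy-wise" differ by the recirculating-power factors, so Q well above 1 is required either way.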

I agree that this is plausibly a real important difference, but I do not think that it is obvious.

The most recent augmentative technological change was the industrial revolution. It has reshaped virtually every activity. It allowed the majority of the population to not work in agriculture for the first time since the agricultural revolution.

The industrial revolution centered on energy. Having much cheaper, much more abundant energy allowed humans to use that energy for all sorts of things. 

If fusion ends up being similar in cost to existing ... (read more)

2AnthonyC
Fusion plants are manufactured. By default, our assumption should be that plant costs follow typical experience curve behavior. Most technologies involving production of physical goods do. Whatever the learning rate x for fusion turns out to be, the 1000th plant will likely cost close to x^10 times the first. Obviously the details depend on other factors, but this should be the default starting assumption.

Yes, the eventual impact assumption should be significant societal and technological transformation by cheaper and more abundant electricity. The scale for that transformation is measured in decades, and there are humans designing and permitting and building and operating each and every plant, on human timescales. There's no winner-take-all dynamic even if your leading competitor builds their first commercial plant five years before you do.

Also: We do have other credible paths that can also greatly increase access to comparably low-cost dispatchable clean power on a similar timescale of development, if we don't get fusion.

Also true, which means the default assumption without it is that the scaling behavior looks like the scaling behavior for other successful software innovations. In software, the development costs are high and then the unit costs in deployment quickly fall to near zero. As long as AI benefits from collecting user data to improve training (which should still be true in many non-foom scenarios), we might expect network-effect scaling behavior where the first to really capture a market niche becomes almost uncatchable, like Meta and Google and Amazon. Or, where downstream app layers are built on software functionality, switching costs become very high and you get a substantial amount of lock-in, like with Apple and Microsoft.

Agreed. But, if any of the leading labs could credibly state what kinds of things they would or wouldn't be able to do in a foom scenario, let alone credibly precommit to what they would actually do, I would feel a whole lot better and sa
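The x^10 figure above follows from the standard experience-curve model: each doubling of cumulative production multiplies unit cost by a fixed learning rate, and 1000 units is about 10 doublings. A minimal sketch (the 0.8 rate is an assumed illustration, not a fusion-specific estimate):

```python
import math

def unit_cost(first_cost: float, n: int, learning_rate: float) -> float:
    """Cost of the n-th unit under a standard experience curve:
    each doubling of cumulative production multiplies unit cost
    by the learning rate (0.8 means a 20% cost drop per doubling)."""
    return first_cost * learning_rate ** math.log2(n)

# With an assumed 80% learning rate, the 1000th plant costs roughly
# 0.8^10, i.e. about 11% of the first (log2(1000) is about 9.97).
print(unit_cost(1.0, 1000, 0.8))
```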
1bhishma
Jeffrey, I appreciate your points about fusion's potential, and the uncertainty around "foom." However, I think framing this in terms of bottlenecks clarifies the core difference. The Industrial Revolution was transformative because it overcame the energy bottleneck. Today, while clean energy is vital, many transformative advancements are primarily bottlenecked by intelligence, not energy. Fusion addresses an important, existing constraint, but it's a step removed from the frontier of capability. AI, particularly AGI, directly targets that intelligence bottleneck, potentially unlocking progress across virtually every domain limited by human cognitive capacity. This difference in which bottleneck is addressed makes the potential transformative impact, and thus the strategic landscape, fundamentally distinct. Even drastic cost reductions in energy don't address the core limiting factor for progress in areas fundamentally constrained by our cognitive and analytical abilities.

Before. The 2022 survey responses were collected from June-August. ChatGPT came out at the end of November.

A few more thoughts on Ord's paper:

Despite the similarities, I think that there is some difference between Ord's notion of hyperbolation and what I'm describing here. In most of his examples, the extra dimension is given. In the examples I'm thinking of, what the extra dimension ought to be is not known beforehand.

There is a situation in which hyperbolation is rigorously defined: analytic continuation. This takes smooth functions defined on the real axis and extends them into the complex plane. The first two examples Ord gives in his paper are examples of ... (read more)

Climate change is not the only field to have defined words for specific probability ranges. The intelligence community has looked into this as well. They're called words of estimative probability.

3eggsyntax
I was unfamiliar with the intelligence community's work in this area until it came up in another response to this post. And I haven't run across the phrase 'words of estimative probability' before at all until your mention. Thank you!

A lot of the emphasis is on climate change, which has become more partisan than other environmental issues. But other environmental issues have become partisan as well. Here's some data from a 2013 paper by D.L. Guber, "A cooling climate for change? Party polarization and the politics of global warming."
 

The poll you linked indicates that Republicans in the Mountain West are more concerned with the environment than Republicans in the rest of the country. There is a 27 p.p. partisan gap on the energy vs. environment question (p. 17) - much less than t... (read more)

1zoop
Actually, my read of the data is that the mountain west is not more environmentally conscious than the rest of the US. The mountain west poll does not include national numbers, so I have no idea where your national comparisons are coming from. If I did, I'd check for same year/same question, but because I don't know where they're from I can't.

Take a look at this cool visualization of different state partisan splits from 2018: https://climatecommunication.yale.edu/visualizations-data/partisan-maps-2018/ The mountain west appears neither significantly more nor significantly less partisan on any of the climate change related questions than the rest of the US.

My main point, which I don't think you've contradicted (even if I accept that the mountain west is unique), is that you're making an argument about "environmentalism" partisanship by using primarily "climate change" polling data. The charts from the 2013 paper you've posted sort of confirm this take - climate change is obviously a uniquely partisan issue.

The intro to your sequence states the following:

Basically, I have not seen evidence that this is true for issues beyond climate change (or other countries!), and I think your sequence would benefit by explicitly comparing

* the partisan split of non-climate-change environmental issues (e.g. rain forest protection) to
* the partisan split of non-environmental issues (e.g. taxation)
1Andrew Burns
Several things are at work. * Global warming is highly partisan, and the proposed solutions to it are extremely polarizing. Al Gore's An Inconvenient Truth is probably single-handedly responsible for shifting the GOP to deny it exists. The taint spread to other issues. A whiff of any environmentalism raises hackles that would not have been raised otherwise. * Environmentalists have used environmental laws that were initially bipartisan to throw wrenches into development favored by GOP. * Partisan sorting. Republicans who were concerned about the environment in 1990 are dead, changed positions, or are no longer Republicans, just like anti-abortion Democrats from 1990 are dead, changed positions, or are no longer Democrats.

I think that this is a coincidence. Japan has low partisanship for environmentalism and has less nuclear power than most developed countries (along with low overall partisanship). The association would be between three things: (1) low partisanship for environmentalism, (2) high overall partisanship, and (3) lots of nuclear power plants. There aren't enough countries to do this kind of correlation.

From the introduction to the last post in this sequence: 

Environmentalists were not the only people making significant decisions here. Fossil fuel companies and conservative think tanks also had agency in the debate – and their choices were more blameworthy than the choices of environmentalists. Politicians choose who they do and do not want to ally with. My focus is on the environmental movement itself, because that is similar to what other activist groups are able to control.

The motivation for this report was to learn what the AI safety movement should do to keep from becoming partisan. 'Meta doesn't lobby the government' isn't an action the AI safety movement can take.

1Alex K. Chen (parrot)
Newt Gingrich started out as an environmentalist (and a former member of the Sierra Club), but later turned away from it. Even after he left Congress, he still had some sympathy for environmental issues, as he wrote the book "A Contract with the Earth" (with an E.O. Wilson foreword).

Newt can be surprisingly high in openness - a person oriented towards novelty can be pro-drilling (accel), pro-geoengineering, and pro-environment (which can be decel), and maybe not reconcile the two together in the most consistent way. He has been critical of both parties on climate change/environment issues (just as Mitt Romney has been, who scores low on the LCV but who really does care about addressing climate change, just not in the "punitive" way that the Democrats want to see it addressed). Free-market environmentalists who do care have different approaches that might on the surface be seen as riskier (just as making use of more energy gives you more resources to address the problem faster even while pumping more entropy into the system). But his high openness (for a Republican) seems to have also made him more stochastic, or inconsistent. https://archive.ph/LsZeh

Ronald Reagan was surprisingly pro-environment as governor of California (Gavin Newsom even spoke about it when he visited China), but later was seen as anti-environmental by environmental groups as president (esp. due to his choices of Secretary of the Interior and https://www.cnn.com/2024/01/17/politics/supreme-court-epa-neil-gorsuch-chevron/index.html ) and his generally pro-industry choices. George H.W. Bush was surprisingly pro-environment in his first 2 years (ozone, acid rain..), but was advised to no longer be pro-environment b/c it would not sit well with his base.

Worth reading: https://kansaspress.ku.edu/blog/2021/10/13/when-democrats-and-republicans-united-to-repair-the-earth/

===

The LCV seems to take the view that all drilling/resource extraction (or industry) is bad. But it still is done somewhere, and if n

Thank you!

The links to the report are now fixed.

The 4 blog posts cover most of the same ground as the report. The report goes into more detail, especially in sections 5 & 6.

I think this is true of an environmentalist movement that wants there to be a healthy environment for humans; I'm not sure this is true of an environmentalist movement whose main goal is to dismantle capitalism.

I talk about mission creep in the report, section 6.6.

Part of 'making alliances with Democrats' involved environmental organizations adopting leftist positions on other issues. 

Different environmental organizations have seen more or less mission creep. The examples I give in the report are the women's issues for the World Wildlife Fund:

In many

... (read more)
2Vaniver
Similarly for the Sierra Club, I think their transition from an anti-immigration org to a pro-immigration org seems like an interesting political turning point that could have failed to happen in another timeline.

This is trying to make environmentalism become partisan, but in the other direction.

Environmentalists could just not have positions on most controversial issues, and instead focus more narrowly on the environment.

1Ape in the coat
It's making environmentalism bi-partisan. It's too late to make environmentalism never have been partisan in the first place. And you can't just persuade the current people in the environmentalist movement to stop caring about all issues except the environment. I don't think that would work, nor do I think it would be a net positive thing to do. But there is still an opportunity to have a branch of environmentalism for Republicans.

There is also the far right in France, which is not the same as the right wing in America, but is also not Joe Biden. From what I can tell, the far right in France supports environmentalism.[1]

Macron & Le Pen seem to have fairly similar climate policies. Both want France's electricity to be mostly nuclear – Le Pen more so. Both are not going to raise fuel taxes – Macron reluctantly. Le Pen talks more about hydrogen and reshoring manufacturing from countries which emit more (and claims that immigration is bad for France's environmental goals). Macron su... (read more)

1sloonz
  Yes (with some minor caveats). It is also pro-choice on abortion (https://www.lemonde.fr/politique/article/2022/11/22/sur-l-ivg-marine-le-pen-change-de-position-et-propose-de-constitutionnaliser-la-loi-veil_6151030_823448.html) (with some minor caveats), and pro-gun-control (can’t find a link for that, sorry — the truth is that they are pro-gun-control because there is literally no one debating for the side pro-gun-rights at all, pro-gun-control is an across-the-board consensus). 

I think it was possible for the environmental movement to form alliances with politicians in both parties, and for environmentalism to have remained bipartisan.

Comparing different countries and comparing the same country at different times is not the same thing as a counterfactual, but it can be very helpful for understanding counterfactuals. In this case, the counterfactual US is taken to be similar to the US in the 1980s or to the UK, France, or South Korea today.

Vaniver1319

I think this is true of an environmentalist movement that wants there to be a healthy environment for humans; I'm not sure this is true of an environmentalist movement whose main goal is to dismantle capitalism. I don't have a great sense of how this has changed over time (maybe the motivations for environmentalism are basically constant, and so it can't explain the changes), but this feels like an important element of managing to maintain alliances with politicians in both parties.

(Thinking about the specifics, I think the world where Al Gore became a Rep... (read more)

3Alex K. Chen (parrot)
Bill Frist, the former Republican Senate Majority Leader under Bush (even though he had a low score by the partisan/zero compromises LCV), is now chairman at the Nature Conservancy (it's even his LinkedIn profile header) and frequently speaks out on environment and climate change issues. His kind of Republicanism is now way out of vogue. https://www.tennessean.com/story/news/2022/08/16/tenneessee-former-senator-bill-frist-elected-chair-nonprofit-nature-conservancy/10328455002/ https://www.linkedin.com/posts/billfristmd_nature-conservation-activity-7114961629628227585-C5BY?utm_source=share&utm_medium=member_android Republicans from Utah seem to disproportionately form the Republican climate change caucus - they tend to be somewhat more open-minded than Republicans elsewhere, and some of the current representatives have been outspoken on the need to combine conservation with conservatism (though this also means making some compromises with federal land ownership which has become an unusually partisan "don't compromise" issue). 

I think you should ask the author of the song if it's referring to someone using powerful AI to do something transformative to the sun.

This is extremely obvious to me. The song is opposed to how the sun currently is, calling it "wasteful" and "distasteful" - the second word is a quote from a fictional character, but the first is not. It later talks about when "the sun's a battery," so something about the sun is going to change. I really don't know what "some big old computer" could be referring to if not powerful AI.

1tcheasdfjkl
oh yeah my dispute isn't "the character in the song isn't talking about building AI" but "the song is not a call to accelerate building AI"

Thank you for responding! I am being very critical, both in foundational and nitpicky ways. This can be annoying and make people want to circle the wagons. But you and the other organizers are engaging constructively, which is great.

The distinction between Solstice representing a single coherent worldview vs. a series of reflections also came up in comments on a draft. In particular, the Spinozism of Songs Stay Sung feels a lot weirder if it is taken as the response to the darkness, which I initially did, rather than one response to the darkness.

Neverthele... (read more)

3tcheasdfjkl
Super disagree with this! Neither I nor (I have not checked but am pretty certain) the author of the text wants to advocate that! (Indeed I somewhat actively tried to avoid having stuff in my program encourage this! You could argue that even though I tried to do this I did not succeed, but I think the fact that you seem to be reading ~motivations into authors' choices that aren't actually there is a sign that something in your analysis is off.)

I think it's pretty standard that having a fictional character espouse an idea does not mean the author espouses it. In the case of this song I did actually consider changing "you and I will flourish in the great transhumanist future" to "you and I MAY flourish in the great transhumanist future" to highlight the uncertainty, but I didn't want to make changes against the author's will, and Alicorn preferred to keep the "will" there because the rest of the song is written in the indicative mood. And, as I said before, Solstice is a crowdsourced endeavor and I am not willing to only include works where I do not have the slightest disagreement.

hmm, I want to be able to sing songs that express an important thing even if one can possibly read them in a way that also implies some things I disagree with

you are extremely welcome to suggest new versions of things! but a lot of the cost is distributed and/or necessarily borne by the organizers. changing lines in a song that's sung at Solstice every year is a Big Deal and it is simply not possible to do this in a way that does not cause discourse and strife (I guess arguably we managed the "threats and trials" line in TWTR without much discourse or strife but I think the framing did a lot there and I explicitly didn't frame it as a permanent change to the song, and also it was a pretty minor change)

The London subway was private and returned enough profit to slowly expand while it was coal powered. Once it electrified, it became more profitable and expanded quickly.

The Baltimore tunnel was and is part of an intercity line that is mostly above ground. It was technologically similar to London, but operationally very different.

I chose the start date of 1866 because that is the first time the New York Senate appointed a committee to study rapid transit in New York, which concluded that New York would be best served by an underground railroad. It's also the start date that Katz uses.

The technology was available. London opened its first subway line in 1863. There is a 1.4 mi railroad tunnel from 1873 in Baltimore that is still in active use today. These early tunnels used steam engines. This did cause ventilation challenges, but they were resolvable. The other reasonable pre-electr... (read more)

4[anonymous]
Now that you've brought up other working systems, the question would be if the pre-electric subways ROIed. Yes, there are cable-driven streetcars in SF; as I recall, switching cables is something the driver does. So that's a valid power mechanism.

Mitigations and inferior tech have costs: higher ceilings, people passing out from CO exposure and heat, big ventilation fans that waste coal. Were these costs enough to make the London subway unsustainable in an economic sense? The Baltimore one sounds too small to be viable.

Another thing you'd have to look at is what NYC residents are giving up. If a subway saves 20 minutes each way at that time from a 10-hour workday (the Fair Labor Standards Act of 1938 limited it to nominally 44 hours a week, except for exempt workers), that's a cost of about 6 percent to daily productivity. Less, because early subway networks only have partial coverage, so only the portion of the city's residents covered have a 6 percent delta in productivity. This is such a small effect the historical data may not show anything. Many other factors would affect the economic performance of NYC and Chicago.

A hypothetical technology that made a 100% difference in productivity, or 1000%, would be far more costly to give up, and it might simply not be a viable choice at all (unviable because it effectively makes the group "not giving in to temptation" cost 2x-10x as much to do any task, and they are selling goods and services to the global market - they would go broke fast). I did this analysis when looking at autonomous driving, and I realized that autonomous taxi and trucking firms could set their price to where their competition still using drivers loses money on every ride.
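The ~6 percent figure in the comment above can be checked directly (all inputs are the comment's assumed round numbers):

```python
# Assumed figures from the comment: 20 minutes saved each way,
# out of a 10-hour workday.
minutes_saved_per_day = 2 * 20
workday_minutes = 10 * 60
delta = minutes_saved_per_day / workday_minutes
print(f"{delta:.1%}")   # 6.7%, i.e. roughly the 6 percent quoted
```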

The original version of the song reads to me as being deist or pantheist. You could replace 'God' with 'Nature' and the meaning would be almost the same. My view of Divinely Guided Evolution has a personal God fiddling with random mutations and randomly determined external factors to create the things He wants.

It is definitely anti-Young-Earth-Creationism, but it is also dismissive of the Bible. Even if you don't think that Genesis 1 should be treated as a chronology, I think that you should take the Bible seriously. Its commentary on what it means to be human is important.

Many of these seem reasonable. The "book of names" sounds to me like the Linnaean taxonomy, while the "book of night" sounds like astronomical catalogues. I don't know as much about geology, but the "book of earth" could be geological surveys.

This kind of science is often not exciting. Rutherford referred to it as "stamp collecting." It is very useful for the practice of future generations of scientists. For example, if someone wants to do a survey of various properties of binary star systems, they don't have to find a bunch of examples themselves (and wor... (read more)

If it were done at Lighthaven, it would have to be done outdoors. This does present logistical problems.

I would guess that making Lighthaven's outdoor space usable even if it rains would cost much less (an order of magnitude?) than renting out an event space, although it might cost other resources like planning time that are in more limited supply.

If Lighthaven does not want to subsidize Solstice, or have the space reserved a year in advance, then that would make this option untenable.

It's also potentially possible to celebrate Solstice in January, when event spaces are more available.

Staggering the gathering in time also works. Many churches repeat their Christmas service multiple times over the course of the day, to allow more people to come than can fit in the building.

3Ben Pace
Staggering it sounds kind of nice. It could allow there to be a solstice event on the actual solstice (Dec 21st) as well as small solstice celebrations in the week leading up for those who cannot be there on that date. I'd be excited to try that at Lighthaven (if the solstice organizers wanted to give it a shot), though I also really like having a big get-together. Perhaps we could have a week-long solstice celebration at Lighthaven with multiple rituals and other little fun things for people to do.

There's another reason for openness that I should have made clearer. Hostility towards Others is epistemically and ethically corrosive. It makes it easier to dismiss people who do agree with you, but have different cultural markers. If a major thing that unifies the community is hostility to an outgroup, then it weakens the guardrails against actions based on hate or spite. If you hope to have compassion for all conscious creatures, then a good first step is to try to have compassion for the people close to you who are really annoying.

Christianity seems to

... (read more)

Hostility towards Others may be epistemically and ethically corrosive, but the kind of hostility I have discussed is also sometimes necessary. For instance, militaristic jingoism is bad, and I am hostile to it. I am also wary of militaristic jingoists, because they can be dangerous (this is an intentionally extreme example; typical religions are less dangerous).

There is a difference between evangelizing community membership and evangelizing an ideology or set of beliefs. 

Usually, a valuable community should only welcome members insofar as it can still... (read more)

So I think the direction in which you would want Solstice to change -- to be more positive towards religion, to preach humility/acceptance rather than striving/heroism -- is antithetical to one of Solstice's core purposes.

While I would love to see the entire rationalist community embrace the Fulness of the Gospel of Christ, I am aware that this is not a reasonable ask for Solstice, and not something I should bet on in a prediction market. While I criticize the Overarching Narrative, I am aware that this is not something that I will change.

My hopes for chan... (read more)

also it’s a lot more work to set up

How hard would it be to project them? There was a screen, and it should be possible to project at least two lines with music large enough for people to read. Is the problem that we don't have sheet music that's digitized in a way to make this feasible for all of the songs?

5Raemon
We do not currently have sheet music for most songs. It’s also extra labor to arrange the slides (though this isn’t that big a part of the problem)

This is more volunteer-based than I was expecting. I would have guessed that Solstice had a lot of creative work, the choir, and day-of work done by volunteers, but that the organizers and most of the performers were paid (perhaps below market rates). As it is, it is probably more volunteer-based than most Christmas programs.

I'll edit the original post to say that this suggestion is already being followed.

This kind of situation is dealt with in Quine's Two Dogmas of Empiricism, especially the last section, "Empiricism Without the Dogmas." This is a short (~10k words), straightforward, and influential work in the philosophy of science, so it is really worth reading the original.

Quine describes science as a network of beliefs about the world. Experimental measurements form a kind of "boundary conditions" for the beliefs. Since belief space is larger than the space of experiments which have been performed, the boundary conditions meaningfully constrain but do ... (read more)

I'm currently leaning towards

  • kings and commonwealths and all

Tokamaks have been known for ages. We plausibly have gotten close to the best performance out of them that we could, without either dramatically increasing the size (ITER) or making the magnets significantly stronger. The high temperature superconducting[1] 'tape' that Commonwealth Fusion has pioneered has allowed us to make stronger magnetic fields, and made it feasible to build a fusion power plant using a tokamak the size of JET.
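The reason stronger magnets matter so much: at fixed machine size and plasma pressure ratio (beta), tokamak fusion power scales roughly as the fourth power of the magnetic field. A sketch with illustrative round-number field values (HTS-enabled magnets near 12 T vs. a conventional ~5 T field; these are not official specs for any machine):

```python
def fusion_power_ratio(b_new: float, b_old: float) -> float:
    """Approximate fusion power gain from a stronger field, using the
    standard P_fusion ~ B^4 scaling at fixed size and beta."""
    return (b_new / b_old) ** 4

# Going from ~5 T to ~12 T buys roughly a 30x gain in power density,
# which is what lets a JET-sized machine aim for reactor conditions.
print(fusion_power_ratio(12, 5))
```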

After SPARC, Commonwealth Fusion plans to build ARC, which should actually ship electricity to customers. ARC should have... (read more)

OpenAI has to face off against giants like Google and Facebook, as well as other startups like Anthropic. There are dozens of other organizations in this space, although most are not as competitive as these.

Commonwealth Fusion has to face off against giants like ITER (funding maybe $22B, maybe $65B, estimates vary) and the China National Nuclear Corporation (building CFETR at ?? cost, while a much smaller experiment in China cost ~$1B), as well as other startups like Helion. The Fusion Industry Association has 37 members, which are all private companies tr... (read more)

I thought about including valuation in the table as well, but decided against it:

  • I'm not sure how accurate startup valuations are. It may be less clear how to interpret what the funding received means, but that number is easier to measure accurately.
  • These are young companies, so the timing of the valuation matters a lot. OpenAI's valuation is recent (8 years after the company was founded). Commonwealth Fusion's valuation is from 2 years ago (4 years after the company was founded). If each had multiple valuations, then I would have made a graph like Figure 1 for this.

The cost to build a tokamak that is projected to reach Q~10 has fallen by more than a factor of 10 in the last 6 years. CFS is building for $2B what ITER is building for maybe $22B, maybe $65B (cost estimates vary).

It's really not clear what the cost of fusion will end up being once it becomes mass produced.

2[anonymous]
Ok, I think I may have missed a key piece above. $1.8 trillion is currently spent globally to generate electric power. $96.5 trillion is current world GDP. If AI automation can reduce the cost of 50% of jobs by 50%, then its value per year is $24 trillion (much more, because AI will enable the economy to grow).

Obviously if fusion makes electricity cost $0, free, its value created per year is $1.8 trillion. More realistically, competitive fusion will probably not reduce costs at all - it will simply reduce carbon emissions, which is a cost not priced into that "$1.8T" figure. If we say the cost of the carbon emissions is $75 a ton, and 36.8 gigatons are the global carbon emissions for electric power generation, then $2.76 trillion is the "externalities" from generating electricity. So if fusion costs the same as current equipment at scale, then the benefit from fusion is $2.76 trillion.

Also, electric power is usually not the bottleneck resource for economic growth. It's a necessary condition, but human labor, IP, and economic systems that don't allow mass amounts of theft and inefficiencies - these I think contribute much more.
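The externality arithmetic in the comment above checks out under its stated assumptions ($75/ton carbon price, 36.8 Gt/year of emissions attributed to power generation):

```python
carbon_price_per_ton = 75        # $/ton, assumed social cost of carbon
emissions_tons = 36.8e9          # tons/year, the figure used in the comment
externality = carbon_price_per_ton * emissions_tons
print(f"${externality / 1e12:.2f} trillion per year")   # $2.76 trillion
```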

Helion has raised a similar amount of capital as Commonwealth: $2.2B. Helion also has hundreds of employees: their LinkedIn puts them in the 201-500 employees category. It was founded in 2013, so it is a bit older than CFS or OpenAI.

My general sense is that there's more confidence in the plasma physics community that CFS will succeed than that Helion will succeed.

SPARC is a tokamak, and tokamaks have been extensively studied. SPARC is basically JET with a stronger magnetic field, and JET has been operational since the 1980s and has achieved Q=0.67. It's on... (read more)

1mishka
That is, indeed, an important indicator. Otherwise, tokamaks being an old design works as an argument in the opposite direction for me (more or less along the following lines: the tokamak design has been known for ages, and they still have not succeeded with it; perhaps an alternative and less tried design would have better chances, since at the very least it does not have the accumulated history of multi-decade-long delays associated with it).

(I guess my assumption is that the mainstream plasma community has been failing us for a long time, feeding us more promises than actual progress for decade after decade, and that I would rather bet on something from the "left field" at this point, at least in terms of the chances to achieve commercial viability relatively soon, as opposed to the ability to attract funding or boost headcounts.)

----------------------------------------

Basically, yes, one thing we are comparing is their (Helion and CFS) respective 2024 and 2025 promises regarding Q>1, but more importantly from my viewpoint: Helion's promise to actually ship electricity to customers in 2028 does seem overoptimistic, but perhaps not outrageously so, whereas with tokamaks, what's our forecast for when they have a chance to actually ship electricity to customers?

I have now looked into this example, and talked to Bean at Naval Gazing about it.

I found data for the total tonnage in many countries' navies from 1865-2011. It seems to show overhang for the US navy during the interwar years, and maybe also for the Italian navy, but not for any of the other great powers.

Bean has convinced me that this data is not to be trusted. It does not distinguish between warships and auxiliary ships, or between ships in active duty and in reserve. It has some straightforward errors in the year the last battleship was decommissioned a... (read more)

2[anonymous]
What are we trying to model here or find examples of? Here's what I think we're trying to model: if a technology were isolated and, for whatever reason, development was stopped, then during the 'stopped' period very little effort is being put into it. After the 'stopped' period ends, development resumes, and presumably progress is proportional to effort, with an unavoidable serial part of the process (Amdahl's law / Gantt charts show this) restricting the rate at which progress can be made.

For US Navy tonnage: without the Washington Naval Treaty, a Great Depression, and a policy of isolation, the US Navy would presumably have built warships at a steady rate. They did not, as shown in your data. However, during this prewar period, other processes continued. Multiple countries continuously improved aircraft designs, with better aerodynamics (biplane to mono), carrier launching and landing, ever larger and more powerful engines, dive and torpedo bombing, and other innovations.

So even though very few ships are being built, aircraft are being improved. Now Pearl Harbor, and unpause. All-out effort, which shows in the data you linked. But we don't have to trust it; all that really matters is the aircraft carrier numbers, nothing else. As it turned out, the carrier was a hard counter to everything, even other carriers - the other ships in a carrier battle group are there to hunt submarines, supplement the carriers' antiaircraft fire, and resupply the carriers. While there were direct gun battles in late WW2 in the Pacific theater, better admirals could probably have avoided every battle and just sunk all the enemy ships with aircraft. Shooting down enemy aircraft was also way easier, it turned out, to do with aircraft.

So only the left column matters for the model, and you also need the 0 point. There were seven fleet aircraft carriers and one escort carrier at t=0, beginning of WW2. If we count the escort carriers at 30% of a fleet carrier, a

The examples are things that look sort of like overhang, but are different in important ways. I did not include the hundreds of graphs I looked through that look nothing like overhang.

Your post reads, to me, as saying, "Better algorithms in AI may add new s-curves, but won't jump all the way to infinity, they'll level off after a while." 

The post is mostly not about either performance s-curves or market size s-curves. It's about regulation imposing a pause on AI development, and whether this would cause catch-up growth if the pause is ended.

Stacked s-curves can look like a pause + catch-up growth, but they are a different mechanism.

4AnthonyC
True, that was poor framing on my part.  I think I was thrown by the number of times I've read things about us already being in hardware overhang, which a pause would make larger but not necessarily different-in-kind. I don't know if (or realistically, how much) larger overhangs lead to faster change when the obstacle holding us back goes away. But I would say in this proposed scenario that the underlying dynamics of how growth happens don't seem like they should depend on whether the overhang comes from regulatory sources specifically. The reason I got into the whole s-curve thing is largely because I was trying to say that overhangs are not some novel thing, but rather a part of the development path of technology and industry generally. In some sense, every technology we know is possible is in some form(s) of overhang, from the moment we meet any of the prerequisites for developing it, right up until we develop and implement it. We just don't bother saying things like "Flying cars are in aluminum overhang."

The Soviet Union did violate the Biological Weapons Convention, which seems like an example of "an important, binding, ratified arms treaty." It did not lead to nuclear war.

1M. Y. Zuo
It's very misleading to cite that Wikipedia article as an example, as the actual text of the BWC only bans substances that are classified as 'biological and toxin weapons' - but not substances classified as 'biodefense', 'defensive', etc., capabilities. And guess which parties the text assigns to be responsible for making that determination? Which is the loophole that allows countries to operate 'biodefense programs'. I.e., I'm fairly certain the Soviet Union never in fact violated the Convention according to the letter of the law, since all it would have taken to comply was a single piece of paper from the politburo reclassifying their programs as 'biodefense' programs.

I did not look at the Washington Naval Conference as a potential example. It seems like it might be relevant. Thank you!

It seems to me that governments now believe that AI will be significant, but not extremely advantageous. 

I don't think that many policy makers believe that AI could cause GDP growth of 20+% within 10 years. Maybe they think that powerful AI would add 1% to GDP growth rates, which is definitely worth caring about. It wouldn't be enough for any country which developed it to become the most powerful country in the world within a few decades, and would be an incentive in line with some other technologies that have been rejected.

The UK has AI as one of the... (read more)

The impression of incuriosity is probably just because I collapsed my thoughts into a few bullet points.

The causal link between human intelligence and neurons is not just because they're both complicated. My thought process here is something more like:

  • All instances of human intelligence we are familiar with are associated with a brain.
  • Brains are built out of neurons.
  • Neurons' dynamics looks very different from the dynamics of bits.
  • Maybe these differences are important for some of the things brains can do.

It feels pretty plausible that the underlying archite... (read more)

9Richard Korzekwa
This seems very reasonable to me, but I think it's easy to get the impression from your writing that you think it's very likely that:

1. The differences in dynamics between neurons and bits are important for the things brains do.
2. The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.

I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. I think your case might be stronger if you had a bit more of an object-level description of what, specifically, is going on in brains that's relevant to doing things like "learning rocket engineering", that's also hard to replicate in a digital computer. (To be clear, I think this is difficult and I don't have much of an object-level take on any of this, but I think I can empathize with Steven's position here.)

Brains do these kinds of things because they run algorithms designed to do these kinds of things. 

If by 'algorithm', you mean thing-that-does-a-thing, then I think I agree. If by 'algorithm', you mean thing-that-can-be-implemented-in-python, then I disagree.

Perhaps a good analogy comes from quantum computing.* Shor's algorithm is not implementable on a classical computer. It can be approximated by a classical computer, at very high cost. Qubits are not bits, or combinations of bits. They have different underlying dynamics, which makes quantum computer... (read more)

5Steven Byrnes
Different kinds of computers have different operations that are fast versus slow. On a CPU, performing 1,000,000 inevitably-serial floating point multiplications is insanely fast, whereas multiplying 10,000×10,000 floating-point matrices is rather slow. On a GPU, it's the reverse. By the same token, there are certain low-level operations that are far faster on quantum computers than classical computers, and vice-versa. In regards to Shor's algorithm: of course you can compute discrete logs on classical computers, it just takes exponentially longer than with quantum computers (at least with currently-known algorithms), because quantum computers happen to have an affordance for certain fast low-level operations that lead to calculations of the discrete log.

So anyway, it's coherent to say that:

* Maybe there is some subproblem which is extremely helpful for human-like intelligence, in the same way that calculating discrete logs is extremely helpful for factoring large numbers.
* Maybe neurons and collections of neurons have particular affordances which enable blazingly-fast low-level possibly-analog solution of that subproblem. Like, maybe the dynamics of membrane proteins just happens to line up with the thing you need to do in order to approximate the solution to some funny database query thing, or whatever.
* …and therefore, maybe brains can do things that would require some insanely large amount of computer chips to do.

…But I don't think there's any reason to believe that, and it strikes me as very implausible. Hmm, I guess I get the impression from you of a general lack of curiosity about what's going on here under the hood. Like, exactly what kinds of algorithmic subproblems might come up if you were building a human-like intelligence from scratch? And exactly what kind of fast low-level affordances are enabled by collections of neurons, that are not emulate-able by the fast low-level affordances of chips? Do we expect those two sets to overlap or not?

I don't believe that "current AI is at human intelligence in most areas". I think that it is superhuman in a few areas, within the human range in some areas, and subhuman in many areas - especially areas where the things you're trying to do are not well specified tasks.

I'm not sure how to weight people who think most about how to build AGI vs more general AI researchers (median says HLAI in 2059, p(Doom) 5-10%) vs forecasters more generally.  There's a difference in how much people have thought about it, but also selection bias: most people who are sk... (read more)

3Seth Herd
I agree that there's a heavy self-selection bias for those working in safety or AGI labs. So I'd say both of these factors are large, and how to balance them is unclear. I agree that you can't use the Wright Brothers as a reference class, because you don't know in advance who's going to succeed. I do want to draw a distinction between AI researchers, who think about improving narrow ML systems, and AGI researchers. There are people who spend much more time thinking about how breakthroughs to next-level abilities might be achieved, and what a fully agentic, human-level AGI would be like. The line is fuzzy, but I'd say these two ends of a spectrum exist. I'd say the AGI researchers are more like the society for aerial locomotion. I assume that society had a much better prediction than the class of engineers who'd rarely thought about integrating their favorite technologies (sailmaking, bicycle design, internal combustion engine design) into flying machines.

From Yudkowsky's description of the AI-Box Experiment:

The Gatekeeper party may resist the AI party’s arguments by any means chosen – logic, illogic, simple refusal to be convinced, even dropping out of character – as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

4Jiro
If that meant what you interpret it to mean, "does not actually stop talking" would be satisfied by the Gatekeeper typing any string of characters to the AI every so often regardless of whether it responds to the AI or whether he is actually reading what the AI says. All that that shows is that the rules contradict themselves. There's a requirement that the Gatekeeper stay engaged with the AI and the requirement that the Gatekeeper "actually talk with the AI". The straightforward reading of that does not allow for a Gatekeeper who ignores everything and just types "no" every time--only a weird literal Internet guy would consider that to be staying engaged and actually talking.
2Richard_Kennaway
Ok.

One of the tactics listed on RationalWiki's description of the AI-box experiment is: 

Jump out of character, keep reminding yourself that money is on the line (if there actually is money on the line), and keep saying "no" over and over

2Richard_Kennaway
RationalWiki is not a reliable source on any subject. Jumping out of character ignores the entire point of the AI-box exercise. It's like a naive chess player just grabbing the opponent's king and claiming victory.

The Lord of the Rings tells us that the hobbit’s simple notion of goodness is more effective at resisting the influence of a hostile artificial intelligence than the more complicated ethical systems of the Wise.

The miscellaneous quotes at the end are not directly connected to the thesis statement.

In practice, smoothness interacts with measurement: we can usually measure the higher-order bits without measuring lower-order bits, but we can’t easily measure the lower-order bits without the higher-order bits. Imagine, for instance, trying to design a thermometer which measures the fifth bit of temperature but not the four highest-order bits. Probably we’d build a thermometer which measured them all, and then threw away the first four bits! Fundamentally, it’s because of the informational asymmetry: higher-order bits affect everything, but lower-order b

... (read more)
3Alexander Gietelink Oldenziel
Beautiful example😌

It seems like your comment is saying something like:

These restrictions are more relevant to an Oracle than to other kinds of AI.
 

4Richard_Kennaway
Even an Oracle can act by answering questions in whatever way will get people to further its intentions.

Unfortunately, decisions about units are made by a bunch of unaccountable bureaucrats. They would rather define the second in terms that only the techno-aristocracy can understand instead of using a definition that everyone can understand. It's time to turn control over our systems of measurement back to the people!
#DemocratizeUnits

Adding a compass is unlikely to also make the bird disoriented when exposed to a weak magnetic field which oscillates at the right frequency. Which means that the emulated bird will not behave like the real bird in this scenario. 

You could add this phenomenon in by hand. Attach some detector to your compass and have it turn off the compass when these fields are measured.

More generally, adding in these features ad hoc will likely work for the things that you know about ahead of time, but is very unlikely to work like the bird outside of its training di... (read more)
