So odd that this got slated first for Idaho.
I assume it's because INL near Arco is one of the very few places in the US where a critical mass of people are likely to believe that SMRs are a good thing.
This is a great start, but still a drop in the bucket, as I understand it, compared to what we will need if we intend to largely rely on solar and wind in the future.
Yes, but also, that graph is in GW, not GWh. It's saying that, at peak discharge, batteries can already provide about 5% of record peak demand. That's ~4.3 doublings away from covering that record peak. But those are, on average, only 2 hr batteries (~83 GWh total). We'll need at least 7 doublings of energy capacity to deal with the intermittency of a mostly-renewables grid - exact amount TBD based on how many other things we can get right. Luckily that also means that once we move beyond ~4 hrs or so we can largely switch to (what should become) cheaper but lower-peak-power options than Li-ion (that are also less critical-mineral intensive).
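A rough back-of-the-envelope sketch of that arithmetic, using only the figures quoted above (the 5% share and the ~83 GWh fleet come from the comment; everything else is illustrative):

```python
import math

# Back-of-the-envelope check of the numbers above (illustrative only).
share_of_record_peak = 0.05      # batteries cover ~5% of record peak demand at peak discharge
power_doublings = math.log2(1 / share_of_record_peak)
print(f"Doublings to cover the record peak on power alone: ~{power_doublings:.1f}")  # ~4.3

current_energy_gwh = 83          # ~2 hr average duration across today's fleet
energy_doublings = 7             # the "at least 7 doublings" of energy capacity
target_energy_gwh = current_energy_gwh * 2 ** energy_doublings
print(f"Energy capacity after {energy_doublings} doublings: "
      f"{target_energy_gwh:,} GWh (~{target_energy_gwh / 1000:.1f} TWh)")
```

That works out to roughly 10.6 TWh of storage, which is why the later, cheaper-per-GWh chemistries matter so much.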
The US Forest Service wanted to implement a wildfire prevention plan, so it had to fill out an environmental impact statement. Before it could complete that statement, though, half the forest burned down.
I take it as given that in a saner world, we would be rushing to gather up excess fuel that feeds wildfires, and turning it into biochar, or putting it into bioenergy plants. Instead, we can't even give ourselves permission to controlled-burn it, when it's going to uncontrolled-burn anyway.
"There exists a physical world that keeps going whether or not you want to deal with it" seems to not be a part of our societal consensus.
How long does that intermediate step take? If the machine shop-shop outputs one shop per week, but every 4th (or whatever) output is a mining-robot-shop or a forestry-robot-shop or something else instead, is that sufficient to solve your inputs problem while only slowing down the doubling time by a modest amount?
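A quick sketch of the arithmetic behind that question, under the toy assumptions stated in it (one output per shop per week, every Nth output diverted to something other than a new shop-shop):

```python
import math

# Hypothetical shop-shop growth: each existing shop produces one output per week,
# but every Nth output is diverted to something else (mining robots, forestry
# robots, ...) instead of another shop-shop.

def doubling_time_weeks(divert_every_nth: int) -> float:
    """Weeks per doubling when 1 out of every N weekly outputs is diverted elsewhere."""
    # Only (N-1)/N of the weekly outputs are new shops, so the fleet grows by that
    # fraction each week instead of doubling outright.
    growth_factor = 1 + (divert_every_nth - 1) / divert_every_nth
    return math.log(2) / math.log(growth_factor)

for n in (2, 4, 8):
    print(f"divert every {n}th output -> doubling time ~{doubling_time_weeks(n):.2f} weeks")
# Diverting every 4th output stretches the doubling time from 1.0 to ~1.24 weeks,
# i.e. only ~24% slower, while steadily building out the other shop types.
```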
I think you're overestimating the discourse on Frost.
But this is already presupposing the existence of the superintelligence whose feasibility we are trying to explain.
Strictly speaking I only presupposed an AI could reach close to the limits of human intelligence in terms of thinking ability, but with the inherent speed and parallelizability and memory advantages of a digital mind.
Do you have any examples handy of AI being successful at real-world goals?
In small ways (aka sized appropriately for current AI capabilities) this kind of thing shows up all the time in chains of thought in response to all kinds of prompts, to the point that no, I don't have specific examples, because I wouldn't know how to pick one. The one that first comes to mind, I guess, was using AI to help me develop a personalized nutrition/supplement/weight loss/training regimen.
Stepping back, I should reiterate that I'm talking about "the current AI paradigm".
That's fair, and a reasonable thing to discuss. After all, the fundamental claim of the book's title is about a conditional probability: IF it turns out that anything like our current methods scales to superintelligent agents, we'd all be screwed.
I sincerely hope that if anyone has a concrete, actionable answer to this question, that they're smart enough not to share it publicly, for what I hope are obvious reasons.
But aside from that caveat, I think you are making several incorrect assumptions.
True, but I think in this case there's at least no risk of an infinite regress. At one end, yes, it bottoms out in an extremely vague and inefficient but general hyperprior. I would guess from the little I've read that in humans these are the layers that govern how we learn, even from before we're born. I would imagine an ASI would have at least one layer more fundamental than this, which enables it to change various fixed-in-humans assumptions about things.
At the other end would be the most specific or most abstracted layer of priors that has proven useful to date. Somewhere in the stack are your current best processes for deciding whether particular priors or layers of priors are useful or worth keeping or if you need a new one.
I am actually not sure whether 'prior' is quite the right term here? Some of it feels like the distinction between thingspace and conceptspace, where the priors might be more about expectations of what things exist and where natural concept boundaries lie, and how to evaluate and re-evaluate those?
I equally hope to write "day in the life of" posts for each category soon as a better visualisation of what each of these worlds entails.
I think this would be really interesting and useful! For me, just reading the flowchart and seeing the list laid out makes me assume most people would seriously underestimate how broad these categories could actually be.
Exact placement would of course involve a number of value judgment calls. For example, I would probably characterize something like the outcome in Friendship is Optimal as an example of #7, but it could also be considered 8/10/11.
I'm also curious about your thoughts on the relative stability of each of these categories. To me, #6 seems metastable at best, for example, while #9 is an event, not a trajectory. AKA it is at least theoretically recoverable to some of the other states (or else declines into 10/11).
The ability to consciously decide when to discard or rewrite or call on the simple programs is a superpower evolution didn't give humans. One that seems like it would be the obvious solution for an AI that gets to call on an external, updatable set of tools. Or an ASI that gets to rewrite the parts of itself that call the tools or notice (what it previously thought were) edge cases.
AKA, an ASI can go ahead and have a human-specific prior. It can choose to apply it until it meets entities that are alien, then stop applying it. Humans can't really do that, in the same way that we can't turn off our visual heuristics when encountering things we consciously know are weirdly constructed adversarial examples, even if we can sometimes override them with enough effort. The ASI, presumably, would further react to encountering aliens by reasoning from more basic principles (recurse as needed) as it learns enough to create 1) a new prior specific to those aliens, 2) a new prior specific to those aliens' species, culture, world, etc.
Or at least, that's my <4 minute human-level single attempt at guessing a lower bound on an ASI's solution.
America would have to pay the subsidies off.
This is not necessarily true. At least not on any currently-human-relevant timescale. The ballooning can be a problem, especially when the money is spent very poorly. But if a reasonable fraction of it is spent on productive assets and other forms of growth, debt can grow for a long time. Longer than the typical lifespan of a country or currency.
You know, I thought we wouldn't start seeing AIs preferentially cooperate with each other and shut out humans from the gains from trade until a substantially higher level of capabilities.
This seems like yet another pretty clear warning of things to come if we don't get a whole lot smarter and more serious about what we're doing.