it takes on average 18 years to build a mine
Always worth reminding people that the "building" is ~4 of those years, on average. Half the 18 is permitting and litigation, which the government seems to have forgotten it has the power to change, except when it wants to shut down more things. The rest is exploration and feasibility studies, which can be accelerated (or at least parallelized) by spending more money, if the time-value of minerals is high.
We noble few are, there are dozens of us
The annoying pedant in me wants to say: These are important realizations! How could you have realized them sooner?
The less annoying pedant in me wants to say: This is, I think, a very hard lesson in general. Some people never learn it. The ones who do are rarely young. Those who try to tell the young rarely succeed. What's a better strategy for convincing people of this?
You know, I thought we wouldn't start seeing AIs preferentially cooperate with each other and shut out humans from the gains from trade until a substantially higher level of capability.
This seems like yet another pretty clear warning of things to come if we don't get a whole lot smarter and more serious about what we're doing.
So odd that this got slated first for Idaho.
I assume it's because INL near Arco is one of the very few places in the US where a critical mass of people are likely to believe that SMRs are a good thing.
This is a great start, but still a drop in the bucket, as I understand it, compared to what we will need if we intend to largely rely on solar and wind in the future.
Yes, but also, that graph is in GW, not GWh. It's saying that, at peak discharge, batteries can already provide about 5% of record peak demand, which leaves us ~4.3 doublings short of covering that record peak. But those are, on average, only 2-hour batteries, for about 83 GWh of energy. We'll need at least 7 doublings of energy capacity to be able to deal with the intermittency of a mostly-renewables grid, with the exact amount TBD based on how many other things we can get right. Luckily, that also means that once we move beyond ~4 hours or so of duration, we can largely switch to what should become cheaper but lower-peak-power options than Li-ion, which are also less critical-mineral intensive.
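To make that arithmetic explicit, here's a minimal back-of-envelope sketch in Python, taking the ~5%, 2-hour, and 83 GWh figures above as given; the 7-doublings storage target is the comment's estimate, not a derived fact.

```python
import math

# Back-of-envelope check of the figures above. Assumed inputs: batteries
# supply ~5% of record peak demand (in GW), and are ~2-hour batteries
# totaling ~83 GWh; the 7-doublings storage target is an estimate.
battery_energy_gwh = 83
battery_duration_h = 2
battery_power_gw = battery_energy_gwh / battery_duration_h  # ~41.5 GW at peak discharge
peak_share = 0.05

# Doublings of peak *power* needed to match record peak demand.
power_doublings = math.log2(1 / peak_share)  # ~4.3

# Doublings of *energy* capacity for a mostly-renewables grid (estimate).
energy_doublings = 7
energy_target_gwh = battery_energy_gwh * 2 ** energy_doublings  # ~10,600 GWh

print(f"peak discharge today: ~{battery_power_gw:.0f} GW")
print(f"power doublings to match record peak: ~{power_doublings:.1f}")
print(f"{energy_doublings} energy doublings -> ~{energy_target_gwh:,.0f} GWh of storage")
```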
The US Forest Service wanted to implement a wildfire prevention plan, so it had to fill out an environmental impact statement. Before it could complete the environmental impact statement, though, half the forest burned down.
I take it as given that in a saner world, we would be rushing to gather up excess fuel that feeds wildfires, and turning it into biochar, or putting it into bioenergy plants. Instead, we can't even give ourselves permission to controlled-burn it, when it's going to uncontrolled-burn anyway.
"There exists a physical world that keeps going whether or not you want to deal with it" seems to not be a part of our societal consensus.
How long does that intermediate step take? If the machine-shop-shop outputs one shop per week, but every 4th (or whatever) output is a mining-robot shop or a forestry-robot shop or something else instead, is that sufficient to solve your inputs problem while only slowing down the doubling time by a modest amount?
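A toy growth model makes the tradeoff concrete. This is a sketch under the stated assumptions (one output per shop-shop per week, every output immediately productive), not a claim about real machine shops; the function name is just illustrative.

```python
import math

# Toy model of the question above: each machine-shop-shop builds one shop per
# week, and a fixed fraction of outputs is diverted to mining/forestry robot
# shops instead of becoming new shop-shops. How much does that diversion
# stretch the doubling time of the shop-shop population?

def doubling_time_weeks(diverted_fraction: float) -> float:
    """Each shop-shop adds (1 - diverted_fraction) new shop-shops per week."""
    growth_per_week = 1 + (1 - diverted_fraction)  # N -> N * growth_per_week
    return 1 / math.log2(growth_per_week)

print(doubling_time_weeks(0.0))   # 1.00 weeks: every output is a shop-shop
print(doubling_time_weeks(0.25))  # ~1.24 weeks: every 4th output diverted
print(doubling_time_weeks(0.5))   # ~1.71 weeks: half of outputs diverted
```

On this toy model, diverting every 4th output stretches the doubling time from 1 week to about 1.24 weeks, which does look like a modest slowdown.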
I think you're overestimating the discourse on Frost.
But this is already presupposing the existence of the superintelligence whose feasibility we are trying to explain.
Strictly speaking, I only presupposed an AI could reach close to the limits of human intelligence in terms of thinking ability, but with the inherent speed, parallelizability, and memory advantages of a digital mind.
Do you have any examples handy of AI being successful at real-world goals?
In small ways (i.e., sized appropriately for current AI capabilities), this kind of thing shows up all the time in chains of thought in response to all kinds of prompts, to the point that, no, I don't have specific examples, because I wouldn't know how to pick one. The one that first comes to mind, I guess, was using AI to help me develop a personalized nutrition/supplement/weight-loss/training regimen.
Stepping back, I should reiterate that I'm talking about "the current AI paradigm"
That's fair, and a reasonable thing to discuss. After all, the fundamental claim of the book's title is about a conditional probability: IF it turns out that anything like our current methods scales to superintelligent agents, we'd all be screwed.
I sincerely hope that if anyone has a concrete, actionable answer to this question, they're smart enough not to share it publicly, for what I hope are obvious reasons.
But aside from that caveat, I think you are making several incorrect assumptions.
True, but I think in this case there's at least no risk of an infinite regress. At one end, yes, it bottoms out in an extremely vague and inefficient but general hyperprior. I would guess, from the little I've read, that in humans these are the layers that govern how we learn, even from before we're born. I would imagine an ASI would have at least one layer more fundamental than this, which enables it to change various fixed-in-humans assumptions about things.
At the other end would be the most specific, or most abstracted, layer of priors that has proven useful to date. Somewhere in the stack are your current best processes for deciding whether particular priors or layers of priors are useful or worth keeping, or whether you need a new one.
I am actually not sure whether 'prior' is quite the right term here? Some of it feels like the distinction between thingspace and conceptspace, where the priors might be more about expectations of what things exist, where natural concept boundaries lie, and how to evaluate and re-evaluate those.
It would probably be a good idea to have the junior associate check the citations and read the document first, before the senior lawyer does. It should still save a lot of net time.