“That which does not kill us makes us stronger.”
Hillary Clinton, who is still alive
I'm proud and excited to announce the founding of my new startup, Open Asteroid Impact, where we redirect asteroids towards Earth for the benefit of humanity. Our mission is to have as high an impact as possible.
Below, I've copied over the one-pager I've sent potential investors and early employees:
Name: Open Asteroid Impact
Launch Date: April 1 2024
Website: openasteroidimpact.org
Mission: To have as high an impact as possible
Pitch: We are an asteroid mining company. When most people think about asteroid mining, they think of getting all the mining equipment to space and carefully mining and refining ore in space, before bringing the ore back down in a controlled landing. But humanity has zero experience in zero-G mining in the vacuum of space. This is obviously very inefficient. Instead, it's much more efficient to bring the asteroids down to Earth first and mine them on the ground.
Furthermore, we are first and foremost an asteroid mining *safety* company. That is why we need to race as fast as possible to be at the forefront of asteroid redirection, so more dangerous companies don’t get there before us, letting us set safety standards.
Cofounder and CEO: Linch Zhang
Other employees: Austin Chen (CTO), Zach Weinersmith (Chief Culinary Officer), Annie Vu (ESG Analyst)
Board: TBD
Competitors: DeepMine, Anthropocene
Valuation: Astronomical
Design Principles: Bigger, Faster, Safer
Organizational Structure: for-profit C corp owned by B corp owned by public benefit corporation owned by 501c4 owned by 501c3 with a charter set through a combination of regulations from Imperial France, tlatoani Aztec Monarchy, Incan federalism, and Qin-dynasty China to avoid problems with Arrow’s Impossibility Theorem
Safety Statement: “Mitigating the risk of extinction from human-directed asteroids should be a global priority alongside other civilizational risks such as nuclear war and artificial general intelligence”
You can learn more about us on our website.
It comes from the discussion here: https://www.lesswrong.com/posts/LvKDMWQ3yLG9R3gHw/empiricism-as-anti-epistemology
which also links to https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn as a related topic.
As near as I can summarize, the argument distills to:
What intelligence does in any thinking agent:
A. Perceive the situation at present. In the real world it will functionally always be unique (it will never exactly match a Q-table entry, with a few exceptions like board games).
B. Look up reference classes that are similar to the situation you are in.
C. Using the reference classes that are similar to the situation, and assuming the laws of physics will cause a similar outcome now as they did then, predict the future outcomes conditional on the agent's actions (i.e. if I do nothing, reality will cause outcome 1; if I do action A, reality will cause outcome 2; and so on). D. Choose the action whose predicted future has the highest EV from the agent's perspective (sketched below).
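For concreteness, here's a toy Python sketch of that A-D loop. Everything in it (the similarity metric, the reference cases, the outcome values) is invented for illustration; it is not anyone's actual agent implementation.

```python
# Toy sketch of the A-D loop above. All names and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class ReferenceCase:
    features: dict        # B: description of a past situation, e.g. {"tech": "crypto", "action": "buy"}
    outcome_value: float  # how well that situation turned out, on some utility scale

def similarity(situation: dict, case: ReferenceCase) -> float:
    """Toy similarity: fraction of feature keys on which the two situations agree."""
    keys = set(situation) | set(case.features)
    return sum(situation.get(k) == case.features.get(k) for k in keys) / len(keys)

def predict_ev(situation: dict, action: str, history: list[ReferenceCase]) -> float:
    """C: assume physics repeats, so weight past outcomes by similarity to 'this situation plus this action'."""
    hypothetical = {**situation, "action": action}
    weights = [similarity(hypothetical, c) for c in history]
    if sum(weights) == 0:
        return 0.0  # black-swan territory: no reference class applies, so the prediction is garbage
    return sum(w * c.outcome_value for w, c in zip(weights, history)) / sum(weights)

def choose_action(situation: dict, actions: list[str], history: list[ReferenceCase]) -> str:
    """D: pick the action whose predicted future has the highest EV."""
    return max(actions, key=lambda a: predict_ev(situation, a, history))
```

The failure mode lives in step C: when nothing in the history actually resembles the current situation, the weighted average is meaningless, which is the black-swan case below.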
This will fail badly when the situation is a black swan.
For example, at one point I sold 80 (!) bitcoins for $10 each because I reasoned that they were similar to the fake experimental e-currencies that had been tried in the past.
How you can project the future:
So when people try to answer questions like "will there be a recession?" and similar, that's how they do it. You try to find a reference class, or a numerical variable that "predicted 15 of the last 10 recessions", and project that the outcome will happen again when the indicator fires.
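A toy version of that projection step, with entirely made-up yearly data (the method is the point, not the numbers):

```python
# Hypothetical track record of some recession indicator (e.g. a yield-curve-style signal).
# Both lists are invented for illustration.
indicator_fired   = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # did the indicator fire in year t?
recession_next_yr = [1, 0, 0, 1, 0, 0, 0, 1, 0, 1]  # was there a recession in year t+1?

outcomes_when_fired = [r for f, r in zip(indicator_fired, recession_next_yr) if f]
base_rate = sum(outcomes_when_fired) / len(outcomes_when_fired)

# Reference-class projection: the indicator is firing now, so assign the historical base rate.
print(f"P(recession | indicator fired) ≈ {base_rate:.0%}")  # ≈ 57% on this fake data
# An indicator that fires far more often than recessions actually happen is exactly
# the "predicted 15 of the last 10 recessions" failure mode.
```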
The argument for "AI won't be that bad" comes down to this reasoning:
A. A piece of software we call a transformer model is kinda like the reference classes "useful software", "useful technology tools", and "militarily applicable technology".
B. You then assume the laws of physics are similar, and that the tens of thousands of other things matching those reference classes will cause a similar outcome, and there you go: almost 0% AI doom, because none of the other technologies risked the doom of humanity (except one or two). You also hit a corollary: we know historically that countries that didn't adopt the latest and most expensive military technology got slaughtered. Recent examples: Afghanistan invaded by the Soviets, then later the USA; Iraq hit by the USA twice; Ukraine.
And we see in Ukraine what even slightly better weapons technology donated to their side does on the battlefield; the results are dramatic. This leads to another pro-AI argument: "we didn't really have a choice".
The counterargument:
The simplest counterargument to the above is to say that AI, especially ASI, doesn't match the reference classes of "useful software", "useful technology tools", or "militarily applicable technology":
A. "Useful software" counter: https://www.lesswrong.com/posts/kSq5qiafd6SqQoJWv/ by @Davidmanheim
B. "useful technology tools" : the argument here is usually that the ASI isn't a tool because it can betray you while a hammer can't. Also it's smarter than yourself, so you can't really even check it's work or know when it's betraying.
C. "military applicable technology" : ditto, you can't trust a weapon that can think for itself or coordinate to betray you
This thread:
The "in joke" is that we all know that slamming an asteroid into the earth and causing a > 1 gigaton explosion of plasma and probably an earthquake (I checked and it's a tiny asteroid to reach 1 gigaton) is something that we know the consequences for. It really is a bad idea and if we need platinum or iridium or other elements common to asteroids we'll have to process it in place and bring it back the hard way.
So we're trying to play with the "reference class" to make it seem like a good idea, among other deliberate argument faults. This one is making fun of people saying the asteroid fits the "reference class" of the one that killed off the dinosaurs, and that most doom advocates, unlike yourself, aren't qualified in ML.
Mine tries to say that because the reference class data is old, we should get into the business of slamming asteroids and find out the consequences later. I also make the militarily applicable tech argument, which is a true argument: if you want to stop people from deorbiting asteroids from out past the orbit of Mars, you need space warships and vehicles that can redirect asteroids on an impact course away (i.e. the same technology as the bad guys, meaning you cannot afford a "spacecraft building pause").
I also made fun of Sam Altman's double dealing.