Since a few people have mentioned the Miller/Rootclaim debate:
My hourly rate is $200. I will accept a donation of $5000 to sit down and watch the entire Miller/Rootclaim debate (17 hours of video content plus various supporting materials) and write a 2000-word piece describing how I updated on it and why.
Anyone can feel free to message me if they want to go ahead and fund this.
Whilst far-UVC LEDs are not around the corner, I think the Kr-Cl excimer lamps might already be good enough.
When we wrote the original post on this, it was not clear how quickly covid was spreading through the air, but I think it is now clear that covid can hang around for a long time (on the order of minutes or hours rather than seconds) and still infect people.
It seems that a power density of 0.25 W/m^2 would probably be enough to sterilize air in 1-2 minutes, meaning that a 5 m x 8 m (40 m^2) room would need a 10 W source. Assuming 2% efficiency, that 10 W source needs 500 W electrical, which is certainly possible; in the days of incandescent lighting you would have had a few 100 W bulbs in a room that size anyway.
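A quick sanity check on those numbers (the 0.25 W/m^2 target and the 2% wall-plug efficiency are just the assumptions stated above):

```python
# Back-of-the-envelope check of the far-UVC power numbers above.
# Assumptions (from the comment): 0.25 W/m^2 target irradiance,
# 2% electrical-to-222nm efficiency for a Kr-Cl excimer lamp.
room_area_m2 = 5 * 8                       # 5 m x 8 m room -> 40 m^2
target_irradiance_w_per_m2 = 0.25          # assumed dose rate for ~1-2 minute kill times
lamp_efficiency = 0.02                     # assumed 2% wall-plug efficiency

uv_power_w = room_area_m2 * target_irradiance_w_per_m2   # 10 W of UV output
electrical_power_w = uv_power_w / lamp_efficiency         # 500 W at the wall

print(f"UV output needed: {uv_power_w:.0f} W")
print(f"Electrical draw:  {electrical_power_w:.0f} W")
```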
EDIT: Having looked into this a bit more, it seems that right now the low efficiency of excimer lamps is not a binding constraint because the legally allowed far-UVC exposure is so low.
"TLV exposure limit for 222 nm (23 mJ cm^−2)"
23 mJ per cm^2 per day averages out to roughly 0.0027 W/m^2, so you really don't need much power before you hit the legal limitations.
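For reference, here is the unit conversion behind that figure, assuming the 23 mJ/cm^2 dose is spread evenly over a full 24-hour day:

```python
# Convert the 222 nm TLV (23 mJ/cm^2 per day) into an average irradiance in W/m^2.
# Assumes the dose is spread evenly over 24 hours.
dose_j_per_cm2 = 0.023            # 23 mJ/cm^2
seconds_per_day = 24 * 60 * 60
cm2_per_m2 = 10_000

avg_irradiance_w_per_m2 = dose_j_per_cm2 * cm2_per_m2 / seconds_per_day
print(f"{avg_irradiance_w_per_m2:.4f} W/m^2")   # ~0.0027 W/m^2
```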
Looking historically, we see that the strength of property rights correlates with the technological sophistication and scale of a society.
Here's a deep research report on that issue:
https://chatgpt.com/share/698902ca-9e78-8002-b350-13073c662d9d
More generally, they'd get more value by making it economically untenable for others to take up resources by holding savings and benefiting from growth than they would by allowing that.
But then others could play the same trick on them. It's not worth it. "Group G of Agents could get more resources by doing X" does not necessarily imply that Group G will do X!
Humans even keep groups like The Amish around.
Hard property rights are an equilibrium in a multi-player game where power shifts are uncertain and either agents are risk averse or there are gains from investment, trade and specialization.
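Here is a toy illustration of that equilibrium claim; the payoff numbers and the log-utility (risk-averse) choice are purely illustrative assumptions, not anything derived from real data:

```python
import math

# Toy model: agents can RESPECT property or EXPROPRIATE when they happen to be strong.
# Power shifts are uncertain: each period, either side may end up "strong".
# With concave (risk-averse) utility, the safe "respect" path beats the gamble of
# expropriating while strong and being expropriated while weak.
# All numbers here are illustrative assumptions.

def utility(wealth):
    return math.log(wealth)          # risk-averse (concave) utility

p_strong = 0.5                        # chance you are the strong side in any period
wealth_respect = 100                  # everyone keeps their holdings plus trade gains
wealth_win = 180                      # you expropriate the other side while strong
wealth_lose = 20                      # you get expropriated while weak

eu_respect = utility(wealth_respect)
eu_expropriate = p_strong * utility(wealth_win) + (1 - p_strong) * utility(wealth_lose)

print(f"Expected utility, respect property:   {eu_respect:.3f}")       # ~4.605
print(f"Expected utility, expropriation norm: {eu_expropriate:.3f}")   # ~4.09
```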
"when you lose the intelligence race badly enough, your existing structures of cooperation and economic production just get ignored."
Yes, this is a risk, but I think it can be avoided by humans getting a faithful AI agent wrapper with fiduciary responsibility.
The concept of and institutions for fiduciary responsibility were not around when humans surpassed apes; otherwise apes could have hired humans to act as their agents and simply invested in the human gold market and, later, the stock market.
I don't think you need Banksian benevolent AIs for this; an agent can be trustlessly faithful via modern trust-minimized AI. Ethereum is already working on a nascent standard for this, ERC-8004.
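As a rough illustration of what I mean by a fiduciary wrapper, here is a hypothetical sketch; the class and method names are invented for this example and are not the ERC-8004 interface:

```python
# Hypothetical sketch of a "fiduciary wrapper" around an AI agent: the wrapper only
# executes actions that fit the human principal's standing mandate, and logs everything
# for outside verification. Names and structure are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Mandate:
    allowed_actions: set[str]            # e.g. {"rebalance_index", "pay_tax"}
    max_outflow_per_day: float           # hard spending cap set by the principal

@dataclass
class FiduciaryWrapper:
    mandate: Mandate
    audit_log: list[str] = field(default_factory=list)
    spent_today: float = 0.0

    def execute(self, action: str, amount: float) -> bool:
        """Run the agent's proposed action only if it stays within the mandate."""
        within_scope = action in self.mandate.allowed_actions
        within_cap = self.spent_today + amount <= self.mandate.max_outflow_per_day
        approved = within_scope and within_cap
        self.audit_log.append(f"{action} {amount} -> {'OK' if approved else 'REFUSED'}")
        if approved:
            self.spent_today += amount
        return approved

wrapper = FiduciaryWrapper(Mandate({"rebalance_index", "pay_tax"}, max_outflow_per_day=1000.0))
print(wrapper.execute("rebalance_index", 400.0))   # True: in scope and under the cap
print(wrapper.execute("transfer_all_funds", 1.0))  # False: outside the mandate
```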
Humans can buy into index funds like QQQ or similar structures, or scarce commodities like gold or maybe Bitcoin. As the overall economy grows, QQQ, gold, etc. go up in dollar value.
There can be a land value tax, but ideally it will lag behind the growth of QQQ unless that land is especially scarce.
Historically, if you just held gold long-term, you could turn modest savings into a fortune even if you had to pay some property tax.
You don't have to generate any value to benefit from growth.
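A toy compounding example of that point; the 5% growth and 1% tax figures are illustrative assumptions, not claims about actual returns:

```python
# Toy compounding illustration: a passive holder who just sits on an index-like asset
# that tracks overall growth, minus a small annual tax drag. The 5% growth and 1% tax
# figures are assumptions for illustration only.
savings = 10_000.0
growth_rate = 0.05      # assumed long-run growth of the broad economy / index
tax_rate = 0.01         # assumed annual property/land-value-style tax drag
years = 50

for _ in range(years):
    savings *= (1 + growth_rate) * (1 - tax_rate)

print(f"After {years} years: {savings:,.0f}")   # ~69,000 from 10,000, with no work done
```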
I will have to expand on this elsewhere.
But chimps and Homo erectus lack(ed) their own property rights regimes.
Owning shares in most modern companies won't be useful in the sufficiently distant future, and might prove insufficient to pay for survival.
Well there may simply be better index funds. In fact QQQ is already pretty good.
The insight is that better property rights are both positive for AI civilization (whether the owners are AIs, humans, uplifted dolphins, etc.) and better for normie legacy humans.
It is not a battle of humans vs AIs, but rather of order (strong property rights, good solutions to game theory) versus chaos (weak property rights, burning of the cosmic commons, bad equilibria).
I think the "order vs chaos, not humans vs AIs" framing, in which we (AIs and humans alike) are all on team order, is an underrated perspective.
The Contrarian 'AI Alignment' Agenda
Overall Thesis: technical alignment is generally irrelevant to outcomes, yet almost everyone in the AI Alignment field is stuck with the opposite, incorrect assumption, working on technical alignment of LLMs.
(1) aligned superintelligence is provably logically realizable [already proved]
(2) aligned superintelligence is not just logically but also physically realizable [TBD]
(3) ML interpretability/mechanistic interpretability cannot possibly be logically necessary for aligned superintelligence [TBD]
(4) ML interpretability/mechanistic interpretability cannot possibly be logically sufficient for aligned superintelligence [TBD]
(5) given certain minimal intelligence, minimal emulation ability of humans by AI (e.g. it understands common-sense morality and cause and effect) and of AI by humans (humans can do multiplication, etc.), the internal details of AI models cannot possibly make a difference to the set of realizable good outcomes, though they can make a difference to the ease/efficiency of realizing them [TBD]
(6) given near-perfect or perfect technical alignment (= the AI will do what its creators ask of it, with correct intent), awful outcomes are still Nash equilibria for rational agents; a toy example is sketched after this list [TBD]
(7) small or even large alignment deviations make no fundamental difference to outcomes - the boundary between good and bad is determined by game theory, mechanism design and initial conditions, with alignment entering only through a satisficing condition on fidelity, a bar which is below the level of alignment of current humans (and AIs) [TBD]
(8) There is no such thing as superintelligence anyway because intelligence factors into many specific expert systems rather than one all-encompassing general purpose thinker. No human has a job as a “thinker” - we are all quite specialized. Thus, it doesn’t make sense to talk about “aligning superintelligence”, but rather about “aligning civilization” (or some other entity which has the ability to control outcomes) [TBD]
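To make point (6) concrete, here is a toy sketch: two principals each command a perfectly aligned AI, both principals ask for "maximize my resources", and the resulting game is a standard prisoner's dilemma. The payoff numbers are invented for illustration:

```python
# Toy illustration of point (6): two principals each have a perfectly aligned AI that
# does exactly what its creator asks. Each principal asks for "maximize my resources",
# which maps to the "race" move. Payoff numbers are invented; the structure is a
# standard prisoner's dilemma.
import itertools

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),   # both restrain: shared growth
    ("cooperate", "race"):      (0, 5),   # unilateral restraint is exploited
    ("race", "cooperate"):      (5, 0),
    ("race", "race"):           (1, 1),   # mutual arms race: bad for everyone
}
actions = ["cooperate", "race"]

def is_nash(a1, a2):
    """No player can gain by unilaterally switching their (faithfully executed) order."""
    p1, p2 = payoffs[(a1, a2)]
    best1 = all(payoffs[(alt, a2)][0] <= p1 for alt in actions)
    best2 = all(payoffs[(a1, alt)][1] <= p2 for alt in actions)
    return best1 and best2

for a1, a2 in itertools.product(actions, actions):
    if is_nash(a1, a2):
        print(f"Nash equilibrium: {a1} / {a2} with payoffs {payoffs[(a1, a2)]}")
# Only (race, race) is a Nash equilibrium, even though (cooperate, cooperate) is better
# for both - alignment fidelity did not determine the outcome, the game structure did.
```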