A lot of the key people are CEOs of big AI companies making vast amounts of money, and busy people with lots of money are not easy to tempt with financial rewards for jumping through whatever hoops you set out.
This proposal, done without care, might turn out as a variation on advertisements, which often end up delivered to phone farms instead of to people who would actually be interested in a product or concept. If you have a provably better solution, you'd outcompete existing ad systems!
I strongly believe that at least half of the population would never accept it as an unbiased source of information, no matter what you do, and would find the centralization very suspicious.
I am posting this on LW because I don't currently have a way to stamp my intellectual property onto a blockchain in such a way that only Eliezer can see it and I can pay him to demonstrate that he has thought about it.
If you want your algorithm to be seen by Eliezer specifically, I'd suggest e.g. emailing MIRI or messaging a member. (If you're concerned about surveillance, ask for a public key first.)
(Note: my prior on this kind of claim being true is low, but this is the advice in case it is.)
Just an update. I did email MIRI and had a correspondence with a representative.
Unfortunately, they were not willing to pass on my information to Eliezer or provide me with a secured way to provide further details.
I was encouraged to simply publish my infohazardous material on LW and hope that it becomes 'something other people are excited about', at which point they would take a serious look at it.
Questions:
Here are some common questions I get, along with answers.
How do individuals make money?
By evaluating arguments, line by line, in a way that makes their evaluations public. They do this on a social media platform (similar to X) where each element of the feed is a formal proposition, and the user has two slider bars, "confidence" and "value", each ranging from 0 to 1.
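As a concrete sketch of that feed item, here is one possible data model. The names (`Proposition`, `evaluate`) and the dictionary layout are my own assumptions, not the author's spec:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    """One feed item: a formal proposition that users publicly evaluate."""
    text: str
    # evaluations[user_id] = (confidence, value), both in [0, 1]
    evaluations: dict = field(default_factory=dict)

    def evaluate(self, user_id: str, confidence: float, value: float) -> None:
        # Both sliders are bounded to [0, 1], per the description above.
        if not (0.0 <= confidence <= 1.0 and 0.0 <= value <= 1.0):
            raise ValueError("confidence and value must be in [0, 1]")
        self.evaluations[user_id] = (confidence, value)  # public record

p = Proposition("Scaling ML systems unchecked is an existential risk.")
p.evaluate("alice", confidence=0.9, value=1.0)
p.evaluate("bob", confidence=0.2, value=0.7)
```

The key property is that evaluations are keyed by a public identity, so anyone can later audit a given user's stated confidences.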
Why would someone spend their time evaluating arguments?
Because others are willing to pay them. This can be (1) an individual trying to persuade another individual of a proposition, (2) a group or organization dedicating capital to a specific set of propositions, or (3) an individual choosing where publicly funded capital ought to be allocated (the primary reason for the "value" metric).
Why would others pay them?
Because pointing out their publicly documented beliefs that contradict one another is a good way to demonstrate to the public why they are obviously wrong. This is fundamentally the same reason analytic philosophers have other philosophers write out arguments in the first place. It's easy to point out where they are wrong.
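One way the "pointing out contradictions" step could work mechanically: if the ledger records which propositions are negations of one another, a user's stated confidences on such a pair should sum to roughly 1. A minimal sketch, where `find_contradictions` and the ledger layout are hypothetical:

```python
def find_contradictions(ledger, threshold=0.25):
    """Flag proposition pairs marked as mutual negations where a user's
    stated confidences are probabilistically incoherent: P(p) + P(not p)
    should be close to 1, so large deviations expose a contradiction."""
    flagged = []
    for (p, not_p) in ledger["negation_pairs"]:
        c1 = ledger["confidence"].get(p)
        c2 = ledger["confidence"].get(not_p)
        if c1 is not None and c2 is not None and abs((c1 + c2) - 1.0) > threshold:
            flagged.append((p, not_p, c1, c2))
    return flagged

ledger = {
    "confidence": {"AI risk is real": 0.9, "AI risk is not real": 0.6},
    "negation_pairs": [("AI risk is real", "AI risk is not real")],
}
flags = find_contradictions(ledger)  # 0.9 + 0.6 deviates from 1, so it is flagged
```

A real system would need far richer inference than explicit negation pairs, but even this crude check makes "your documented beliefs contradict each other" a mechanical, public observation rather than a debating point.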
Why would we want to pay someone for publicly making themselves look obviously wrong?
Because we want them to change their beliefs. Imagine, here, anyone whose mind you've ever wanted to change.
Why won't people end up holding many contradictory beliefs?
Because they don't want to be obviously wrong. Think politicians.
How much money goes to which arguments? The market decides.
What makes this system different from other communication tools? It is capable of identifying and directing discourse to the various cruxes we never seem to reach.
There are a number of influential intellectual leaders I've wanted to speak with for many years, because I want to change their beliefs. Our social communication system is broken. If I had access to a ledger they rigorously used to document their accepted beliefs and sound lines of reasoning, I wouldn't need their presence to have an effective argument with them. I could simply insert the "missing proposition" into such a system and 'buy them a wager' on the truth they're not looking at.
This project is really important for a different reason, though: it could also serve as an interpretable, aligned foundation of truth for a symbolic AI, one that could scale an alignment attractor faster than a capabilities one.
I think this could be the one thing capable of pulling the fire alarm Eliezer has been talking about for 20 years. That is, I think society would convey big, important ideas to each other far better (the lesson we were supposed to learn from "Don't Look Up") if we had a system that gave individuals control over the general attention mechanism of society.
There is a ton more. This is going to need to be decentralized, humanity-verified, and meet a couple of other security requirements as well. It will take a huge collaboration between the AI sector and crypto. I can't afford a ticket to any fancy conferences where they are building crypto cities or thinking formally about collective intelligence like this, let alone trying to merge it into a project like Lenat's or Hillis's. If anyone knows someone in the AI safety grant community who would be willing to look at what I'm not willing to put online, please share my information with them.
Isn't this what the CCP has been doing all along? In China, individuals vie for higher education and status by regurgitating Xi's communist bullshit all day long, starting from the tender age of 7. It's a system where compliance and repetition are rewarded, not genuine understanding or critical thinking.
Is this what you mean/want?
Yes, it does share the property of 'being a system that is used to put information into people's heads'.
There's one really important distinction.
One is centralized (like the dollar)
One is decentralized (like Bitcoin)
There's quite a bit of math that goes into proving the difference.
Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty, variation from the norm that later becomes consistent.
It would feel more like a prediction market than Khan Academy. Specifically, it won't be 'teaching you a curriculum'. You will be exploring and voting on things. It will do a good job of persuading you of what to go vote on: it will 'recommend' the thing that will make you the most money to think about.
You see, this is where the money that you earn comes from. If people want something thought about, they can incentivize those propositions. The religious leaders will want you to learn about their religion, the phone companies will want you to learn about their new phone options. Your neighbors might want you to watch the news. Your parents might want you to learn math. Your wife might want you to know you need to take out the trash. People pay money to other people to demonstrate they understand things.
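The recommendation step described above can be sketched very simply, assuming each proposition just carries a bounty pooled from its sponsors (the function name, fields, and example propositions here are all hypothetical):

```python
def recommend(propositions, already_evaluated, top_k=3):
    """Rank propositions the user hasn't evaluated yet by sponsored bounty,
    i.e. surface 'the thing that will make you the most money to think about'."""
    candidates = [p for p in propositions if p["id"] not in already_evaluated]
    return sorted(candidates, key=lambda p: p["bounty"], reverse=True)[:top_k]

feed = [
    {"id": "math-101", "text": "The derivative of x^2 is 2x.",        "bounty": 0.50},
    {"id": "trash",    "text": "The trash needs taking out tonight.", "bounty": 0.05},
    {"id": "ai-risk",  "text": "Unchecked ML scaling is dangerous.",  "bounty": 2.00},
]
top = recommend(feed, already_evaluated={"trash"}, top_k=2)
# → the "ai-risk" proposition first, then "math-101"
```

A deployed system would presumably weight bounties against relevance and the user's existing ledger, but pure bounty-ranking already captures the core incentive: whoever funds a proposition buys it attention.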
People will want to put money into this.
It's marketing, with an actual receipt.
The great thing is, you get to choose what you read.
You can listen to me talk about this for a couple hours here:
https://www.youtube.com/watch?v=eNirzUg7If8
https://x.com/therealkrantz/status/1739768900248654019
https://x.com/therealkrantz/status/1764713384790921355
If learning by wagering is more your thing, you can do that here.
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
Asking people to sit through a long presentation is a bigger ask than a concise write-up with more detail than the current post. Got anything in between?
If you read the description here you can get a rough idea of what I'm talking about.
Or I would recommend the Manifold predictions and comments. An intelligent person should be able to discern the scope of the project efficiently from there.
But no, I do not have a published paper of the algorithmic logistics (the portion similar to the work of Doug Lenat and Danny Hillis; the part that performs the same function as Community Notes; the part that looks at everyone's constitution of truth and figures out what to recommend to whom) anywhere on the internet, out of concern that it may be infohazardous.
If we continue to scale ML systems, we will all perish.
If you disagree with that, you should probably be reading Eliezer's work instead.
If you are caught up on his work, then I have something new for you to think about.
One solution to this problem would be to teach everyone on the planet the full set of reasons Eliezer holds this position. That would be the 'humanity grows up and decides not to build AI' possible future.
That seems like an intractable task. Most people do not care about those reasons. They don't care about learning what they can do to alter the trajectory of our future. They have no incentive to comprehend what you want them to comprehend. We can't force knowledge into people's heads.
How on Earth could we possibly educate so many people about such a nuanced topic?
How could we verify that they really understand?
You give them something they do care about: money.
If you pay individuals to prove they understand something (in a way that works), you create a function that takes money as input and outputs public comprehension of the topics that are incentivized.
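That money-in, comprehension-out function can be sketched as a simple escrow. The names `fund` and `claim` are hypothetical, and the `demonstrated` flag stands in for whatever comprehension-verification mechanism the real system would use:

```python
def fund(pool, proposition, amount):
    """A sponsor adds money to the bounty pool for a proposition."""
    pool[proposition] = pool.get(proposition, 0.0) + amount

def claim(pool, balances, proposition, user, demonstrated, share=0.5):
    """Pay `user` a share of the pool once they publicly demonstrate
    comprehension of `proposition` (verification itself is out of scope)."""
    if not demonstrated or pool.get(proposition, 0.0) <= 0:
        return 0.0
    payout = pool[proposition] * share
    pool[proposition] -= payout
    balances[user] = balances.get(user, 0.0) + payout
    return payout

pool, balances = {}, {}
fund(pool, "Why ML scaling is dangerous", 10.0)
claim(pool, balances, "Why ML scaling is dangerous", "alice", demonstrated=True)
# alice is credited half the pool; the rest remains for later claimants
```

The hard, unsolved part is of course the `demonstrated` check; everything else is bookkeeping.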
That's something we really need right now. It would serve the function of a 'fire alarm' not only for Eliezer's message, but for any information any person wants another person to consider.
This can be done by constructing a decentralized collective intelligence that rewards individuals for using it.
If you aren't familiar with collective intelligence, do not mistake it for artificial intelligence.
It is a completely different field in computer science. (https://cci.mit.edu/)
It is a paradigm shift from a self-contained intelligent system that can evolve beyond us to a system that has humans as its parts and requires our interactions to grow.
It's a shift from trying to get a machine to twirl a pencil all by itself to getting a machine that can coordinate billions of people to solve much more complex problems.
That's intelligence also.
There are projects in this spirit (Community Notes, Wikipedia, ResearchHub, prediction markets, Anthropic's work with the Collective Intelligence Project), but they fall short.
What we actually need is a place where people intentionally go to receive an education, read the news, verify the truth and get rewarded financially for it (without needing to invest capital).
I think people really underestimate the effect such a mechanism would have on society.
Or they just haven't thought about it carefully enough.
I believe I have an algorithm (from work on a GOFAI project in 2010) that would scale the effectiveness of collective intelligence by orders of magnitude. Unfortunately, it would also help LLMs reason more effectively and possibly piss off intelligence agencies as much as Bitcoin pissed off the banks, so it's not online anywhere.
It wasn't the meteor that killed everyone in "Don't Look Up"; it was the fact that they hadn't yet built a functional, decentralized social reasoning platform where everyone earned their living by learning and verifying things. If they had built that, they could have communicated the problem, negotiated solutions, and survived. That's the lesson we were supposed to draw from the film.
If we had built what I'm talking about back in 2010, when I came up with it, billions of people would share Eliezer's concerns today.
That's what we need to build.
If learning by wagering is more your thing, you can do that here.
https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6