Just an update: I did email MIRI and corresponded with a representative.
Unfortunately, they were not willing to pass my information on to Eliezer or provide me with a secure way to share further details.
I was encouraged to simply publish my infohazardous material on LW and hope that it becomes 'something other people are excited about'; then they would take a serious look at it.
These are great rules. I wish they existed on a decentralized general constitution ledger, so I could consent to them being true on an infrastructure nobody controls (yet everyone tends to obey, because other people treat it as an authority).
That way, when I read them, I could get paid by society for learning to be a good citizen, escape my oppressed state, and make a living learning things (authenticating the social contract) on the internet.
If you read the description here you can get a rough idea of what I'm talking about.
Or I would recommend the Manifold predictions and comments. An intelligent person should be able to discern the scope of the project efficiently from there.
But no, I do not have a published paper on the algorithmic logistics (the portion similar to the work of Doug Lenat and Danny Hillis, the part that performs the same function as Community Notes, the part that looks at everyone's constitution of truth and figures out what to recommend to each oth...
Yes, it does share the property of 'being a system that is used to put information into people's heads'.
There's one really important distinction.
One is centralized (like the dollar).
One is controlled in a decentralized way (like Bitcoin).
There's quite a bit of math that goes into proving the difference.
Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty (variation from the norm) that later becomes consistent.
It would feel more like a prediction market than Khan Acad...
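To make that contrast concrete, here is a toy sketch in code. This is just my own simplified illustration (the reward functions, names, and numbers are made up for the example), not the actual algorithm:

```python
# Toy illustration only: two made-up reward rules to contrast the incentives.

def centralized_reward(user_answer: float, authority_answer: float) -> float:
    """Centralized system: you are rewarded for consistency with the authority."""
    return 1.0 - abs(user_answer - authority_answer)

def decentralized_reward(early_answer: float,
                         consensus_when_answered: float,
                         later_consensus: float) -> float:
    """Decentralized system: you are rewarded for deviating from the current
    norm in a direction the network later converges on (prediction-market style)."""
    novelty = abs(early_answer - consensus_when_answered)   # how far you strayed from the norm
    accuracy = 1.0 - abs(early_answer - later_consensus)    # how right you turned out to be
    return novelty * accuracy

# Example: the norm said 0.2, you said 0.9, and consensus later settled at 0.85.
print(centralized_reward(0.9, authority_answer=0.2))         # 0.3  -- punished for disagreeing
print(decentralized_reward(0.9, 0.2, later_consensus=0.85))  # 0.665 -- paid for vindicated novelty
```

The point is just the shape of the incentive: agreeing with the authority versus being early and eventually right.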
Thanks. Done.
I'll let you know if I get a reply.
You can listen to me talk about this for a couple hours here:
https://www.youtube.com/watch?v=eNirzUg7If8
https://x.com/therealkrantz/status/1739768900248654019
https://x.com/therealkrantz/status/1764713384790921355
If learning by wagering is more your thing, you can do that here:
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/i...
Why would I want to change a person's belief if they already value philosophical solutions? I think people should value philosophical solutions. I value them.
Maybe I'm misunderstanding your question.
It seemed like the poster above stated that they do not value philosophical solutions. The paper isn't really aimed at converting a person who doesn't value 'the why' into a person who does. It is aimed at people who already care about 'the why' and are looking to further reinforce/challenge their beliefs about what induction is capable...
It sounds like they are simply suggesting I accept the principle of uniformity of nature as an axiom.
Although I agree that this is the crux of the issue (it has been discussed for decades), it doesn't really address the points I aim to urge the reader to consider.
This is a good question.
I agree that you can't justify a prediction until it happens, but I'm urging us to consider what it actually means for a prediction to happen. This can become nuanced when you consider predictions that are statements requiring multiple observations to be justified.
Suppose I predict that a box (which we all know contains 10 swans) contains 10 white swans; my prediction is 'There are ten white swans in this box.' When does that prediction actually 'happen'? When does it become 'justified'?
I think we all agree that af...
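To put toy numbers on the box example (an illustration I'm adding here, assuming, purely for the sake of the example, that each unobserved swan is independently white with probability 1/2):

```python
# Toy model: when does 'There are ten white swans in this box' become justified?
# Assumes, purely for illustration, that each unobserved swan is independently
# white with probability 1/2.

from fractions import Fraction

TOTAL_SWANS = 10

def prob_prediction_true(white_seen: int, prior_white: Fraction = Fraction(1, 2)) -> Fraction:
    """Probability the prediction is true after observing `white_seen` white swans
    (and no non-white ones)."""
    remaining = TOTAL_SWANS - white_seen
    return prior_white ** remaining

for seen in range(TOTAL_SWANS + 1):
    p = prob_prediction_true(seen)
    print(f"white swans observed: {seen:2d} -> P(prediction true) = {float(p):.4f}")

# The probability only reaches 1 at the tenth observation; each earlier observation
# raises the prediction's credibility but does not make it 'happen' or fully justify it.
```

In the finite case there is a definite point at which the prediction is fully verified; the partial confirmations before that point are what I want the reader to think about.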
This was a paper I wrote 8-10 years ago while taking a philosophy of science course primarily focused on Hume and Popper. Sorry about the math; I'll try to fix it when I have a moment.
The general point is this:
I am trying to highlight a distinction between two cases.
Case A - We say 'All swans are white.' and mean something like, 'There are an infinite number of swans in the Universe and all of them are white.'.
Hume's primary point, as I interpreted him, is that since there are an infinite number of observations that would need to be ma...
Well, that looks absolutely horrible.
I promise, it looked normal until I hit the publish button.
It will not take a long time if we use collective intelligence to do it together. The technology is already here. I've been trying to share it with others who understand the value of doing this before AI learns to do it on its own. If you want to learn more about that, feel free to look me up on the 'X' platform @therealkrantz.
I understood the context provided by your four color problem example.
What I'm unsure about is how that relates to your question.
Maybe I don't understand the question you have.
I thought it was, "What should happen if both (1) everything it says makes sense and (2) you can't follow the full argument?".
My claim is "Following enough of an argument to agree is precisely what it means for something to make sense.".
In the case of the four color problem, it sounds like for 20 years there were many folks who did not follow the full argument because it was too l...
I'm not sure what the hypothetical Objectivist 'should do', but I believe the options they have to choose from are:
(1) Choose to follow the full argument (in which case everything that it said made sense)
and they are no longer an Objectivist
or
(2) Choose to not follow the full argument (in which case some stuff didn't make sense)
and they remain an Objectivist
In some sense, this is the case already. People are free to believe whatever they like. They can choose to research their beliefs and challenge them more. They might read t...
I'm not sure I can imagine a concrete example of an instance where both (1) everything that it said made sense and (2) I am not able to follow the full argument.
Maybe you could give me an example of a scenario?
I believe, if the alignment bandwidth is high enough, it should be the case that whatever an external agent does could be explained to 'the host' if that were what the host desired.
Here are some common questions I get, along with answers.
How do individuals make money?
By evaluating arguments, line by line, in a way where their evaluations are public. They do this on a social media platform (similar to X), where each element in the feed is a formal proposition and the user has two slider bars, "confidence" and "value", both ranging from 0 to 1.
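For concreteness, here's roughly how I picture a single feed element in code. The field names and types are placeholders I'm using for illustration, not a finalized schema:

```python
# Hypothetical sketch of one feed element; names and structure are placeholders.

from dataclasses import dataclass, field

@dataclass
class Evaluation:
    user_id: str        # evaluations are public and attributable
    confidence: float   # slider: 0.0 (certainly false) .. 1.0 (certainly true)
    value: float        # slider: 0.0 (worthless to me) .. 1.0 (maximally important)

    def __post_init__(self) -> None:
        if not (0.0 <= self.confidence <= 1.0 and 0.0 <= self.value <= 1.0):
            raise ValueError("confidence and value must both lie in [0, 1]")

@dataclass
class Proposition:
    text: str                                             # one formal line of an argument
    evaluations: list[Evaluation] = field(default_factory=list)

# Example: a single line of an argument, evaluated publicly by two users.
p = Proposition(text="All swans observed so far have been white.")
p.evaluations.append(Evaluation(user_id="krantz", confidence=0.95, value=0.60))
p.evaluations.append(Evaluation(user_id="reader", confidence=0.80, value=0.30))
```

The payments described in the next question attach to these public evaluations.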
Why would someone spend their time evaluating arguments?
Because others are willing to pay them. This can be either (1) an individual trying to persuade another individual of a proposition (2) a ...