All of Krantz's Comments + Replies

Krantz10

Here are some common questions I get, along with answers.

How do individuals make money?

By evaluating arguments, line by line, with their evaluations made public. They do this on a social media platform (similar to X), where each element on the feed is a formal proposition and the user has two slider bars, "confidence" and "value", each ranging from 0 to 1.
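To make that concrete, here is a rough sketch of what one feed element might look like as data. The field names and structure are purely illustrative assumptions on my part, not an actual spec:

```python
from dataclasses import dataclass

@dataclass
class PropositionEvaluation:
    """One feed element: a formal proposition plus one user's public evaluation."""
    proposition: str   # e.g. "All swans are white."
    evaluator: str     # the public identity doing the evaluating
    confidence: float  # slider from 0 to 1: how likely the user thinks it is true
    value: float       # slider from 0 to 1: how important the user thinks it is

    def __post_init__(self) -> None:
        # Both sliders are bounded to the unit interval.
        for name in ("confidence", "value"):
            x = getattr(self, name)
            if not 0.0 <= x <= 1.0:
                raise ValueError(f"{name} must be between 0 and 1, got {x}")
```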

Why would someone spend their time evaluating arguments?

Because others are willing to pay them. This can be either (1) an individual trying to persuade another individual of a proposition (2) a ... (read more)

Krantz10

Just an update.  I did email MIRI and had a correspondence with a representative.

Unfortunately, they were not willing to pass on my information to Eliezer or provide me with a secured way to provide further details.

I was encouraged to simply publish my infohazardous material on LW and hope that it becomes 'something other people are excited about', at which point they would take a serious look at it.

Krantz30

These are great rules. I wish they existed on a decentralized general constitution ledger, so I could consent to them being true on an infrastructure nobody controls (yet everyone tends to obey because other people consider it an authority).

That way, when I read them, I could get paid by society for learning to be a good citizen and I could escape my oppressed state and make a living learning stuff (authenticating the social contract) on the internet.

Krantz10

If you read the description here you can get a rough idea of what I'm talking about.

Or I would recommend the Manifold predictions and comments.  An intelligent person should be able to discern the scope of the project efficiently from there.

But no, I do not have a published paper of the algorithmic logistics (the portion similar to the work of Doug Lenat and Danny Hillis, the part that performs the same function as community notes, the part that looks at everyone's constitution of truth and figures out what to recommend to each oth... (read more)

Krantz10

Yes, it does share the property of 'being a system that is used to put information into people's heads'.

There's one really important distinction.

One is centralized (like the dollar).

One is decentralized (like Bitcoin).

There's quite a bit of math that goes into proving the difference.

Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty (variation from the norm) that later becomes consistent.

It would feel more like a predictive market than Khan Acad... (read more)
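As a toy illustration of that difference (entirely my own framing, not a spec of the actual scoring rule), a centralized scheme would pay you for agreeing with the current consensus, while a prediction-market-like scheme would pay you for diverging early in the direction the consensus later moves:

```python
def centralized_reward(my_confidence: float, consensus_now: float) -> float:
    """Pay for consistency: the closer you are to the current consensus, the more you earn."""
    return 1.0 - abs(my_confidence - consensus_now)


def decentralized_reward(my_confidence: float,
                         consensus_then: float,
                         consensus_later: float) -> float:
    """Pay for novelty that later becomes consistent: divergence from the old
    consensus counts only if the consensus eventually moved the same way."""
    movement = consensus_later - consensus_then   # how far the crowd eventually moved
    divergence = my_confidence - consensus_then   # how far you diverged at the time
    if movement == 0 or divergence * movement <= 0:
        return 0.0                                # no movement, or wrong direction: no reward
    return min(abs(divergence), abs(movement))    # right direction: capped by the actual move


# Example: you rated 0.75 while the consensus sat at 0.25; it later settled at 0.5.
print(centralized_reward(0.75, 0.25))        # 0.5  (penalized for disagreeing with today's consensus)
print(decentralized_reward(0.75, 0.25, 0.5)) # 0.25 (rewarded for diverging early, in the right direction)
```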

Krantz10

Thanks.  Done.

I'll let you know if I get a reply.

2Seth Herd
Asking people to listen to a long presentation is a bigger ask than a concise presentation with more details than the current post. Got anything in between?
Krantz10

Why would I want to change a person's belief if they already value philosophical solutions?  I think people should value philosophical solutions. I value them.

Maybe I'm misunderstanding your question.

It seemed like the poster above stated they do not value philosophical solutions.  The paper isn't really aimed at converting a person who doesn't value 'the why' into a person who does.  It is aimed at people who already care about 'the why' and are looking to further reinforce/challenge their beliefs about what induction is capable... (read more)

Krantz10

It sounds like they are simply suggesting I accept the principle of uniformity of nature as an axiom.

Although I agree that this is the crux of the issue, as it has been discussed for decades, it doesn't really address the points I aim to urge the reader to consider.

2TAG
If the reader values having solutions to the philosophical issues as well as the practical ones, how are you going to change their mind? It's just a personal preference.
Krantz20

This is a good question.

I agree that you can't justify a prediction until it happens, but I'm urging us to consider what it actually means for a prediction to happen.  This can become nuanced when you consider predictions that are statements which require multiple observations to be justified.

Suppose I predict that a box (which we all know contains 10 swans) contains 10 white swans; my prediction is 'There are ten white swans in this box.'  When does that prediction actually 'happen'?  When does it become 'justified'?

I think we all agree that af... (read more)
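As a toy way of putting numbers on that question (assuming, purely for concreteness, a uniform prior over how many of the 10 swans are white; that prior is my addition for the illustration, not something the argument depends on), confidence in the prediction climbs with each white swan observed but only reaches certainty when the last one is checked:

```python
from math import comb

N = 10  # swans in the box; prior: every count of white swans from 0 to N is equally likely

for k in range(N + 1):
    # P(first k draws are all white | w white swans in the box) = C(w, k) / C(N, k)
    evidence = sum(comb(w, k) for w in range(k, N + 1))  # uniform prior cancels out
    posterior = comb(N, k) / evidence                    # P(all N are white | k white seen)
    print(f"{k:2d} white swans observed -> P(all ten are white) = {posterior:.3f}")
```

The posterior starts at 1/11, reaches 10/11 after nine white swans, and hits 1 only after the tenth, which is one way of seeing why the 'happening' of a multi-observation prediction is graded rather than a single event.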

Krantz10

This was a paper I wrote 8-10 years ago while taking a philosophy of science course primarily directed at Hume and Popper.  Sorry about the math; I'll try to fix it when I have a moment.

 

The general point is this:

I am trying to highlight a distinction between two cases.

 

Case A - We say 'All swans are white.' and mean something like, 'There are an infinite number of swans in the Universe and all of them are white.'

Hume's primary point, as I interpreted him, is that since there are an infinite number of observations that would need to be ma... (read more)

2TAG
Hume's point might just be that there's no deductive justification of induction... one thing happening doesn't imply that a similar one must.
Krantz53

Well, that looks absolutely horrible.

I promise, it looked normal until I hit the publish button.

1Valdes
This looks interesting. I will come back to this post later and read it if the math displays properly.
3Viliam
When editing the article body, select a block of text, and a menu appears (the one with options like bold and italic). Choose the "f(x)" symbol, with tooltip "Insert math". The syntax is that of TeX, e.g. "\epsilon, \delta" -> (ϵ, δ) and "B \subset A" -> B ⊂ A.
3rotatingpaguro
Lesswrong supports math.
Krantz10

It will not take a long time if we use collective intelligence to do it together.  The technology is already here.  I've been trying to share it with others who understand the value of doing this before AI learns to do it on its own.  If you want to learn more about that, feel free to look me up on the 'X' platform @therealkrantz.

1RationalDino
It depends on subject matter. For math, it is already here. Several options exist; Coq is the most popular. For philosophy, the language requirements alone need AI at the level of reasonably current LLMs. Which brings their flaws as well. Plus you need knowledge of human experience. By the time you put it together, I don't see how a mechanistic interpreter can be anything less than a (hopefully somewhat limited) AI. Which again raises the question of how we come to trust in it enough for it not to be a leap of faith.
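For a sense of what "already here for math" means in practice, here is the shape of a fully machine-checked statement, written in Lean rather than Coq, purely as an illustration and deliberately trivial:

```lean
-- The proof assistant verifies this end-to-end; no human spot-checking is involved.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```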
Krantz10

I understood the context provided by your 4 color problem example.

What I'm unsure about is how that relates to your question.

Maybe I don't understand the question you have.

I thought it was, "What should happen if both (1) everything it says makes sense and (2) you can't follow the full argument?"

My claim is "Following enough of an argument to agree is precisely what it means for something to make sense."

In the case of the four color problem, it sounds like for 20 years there were many folks who did not follow the full argument because it was too l... (read more)

1RationalDino
Nobody ever read the 1995 proof. Instead they wound up reading the program. This time it was written in C - which is easier to follow. And the fact that there were now two independent proofs in different languages that ran on different computers greatly reduced the worries that one of them might have a simple bug. I do not know that any human has ever tried to properly read any proof of the 4 color theorem. Now to the issue. The overall flow and method of argument were obviously correct. Spot checking individual points gave results that were also correct. The basic strategy was also obviously correct. It was a basic, "We prove that if it holds in every one of these special cases, then it is true. Then we check each special case." Therefore it "made sense". The problem was the question, "Might there be a mistake somewhere?" After all proofs do not simply have to make sense, they need to be verified. And that was what people couldn't accept. The same thing with the Objectivist. You can in fact come up with flaws in proposed understandings of the philosophy fairly easily. It happens all the time. But Objectivists believe that, after enough thought and evidence, it will converge on the one objective version. The AI's proposed proof therefore can make sense in all of the same ways. It would even likely have a similar form. "Here is a categorization of all of the special cases which might be true. We just have to show that each one can't work." You might look at them and agree that those sound right. You can look at individual cases and accept that they don't work. But do you abandon the belief that somewhere, somehow, there is a way to make it work? As opposed to the AI saying that there is none? As you said, it requires a leap of faith. And your answer is mechanistic interpretability. Which is exactly what happened in the end with the 4 color proof. A mechanistically interpretable proof was produced, and mechanistically interpreted by Coq. QED. But for something a
Krantz10

I'm not sure what the hypothetical Objectivist 'should do', but I believe the options they have to choose from are:

 

(1) Choose to follow the full argument (in which case everything that it said made sense)

and they are no longer an Objectivist

or

(2) Choose to not follow the full argument (in which case some stuff didn't make sense)

and they remain an Objectivist

 

In some sense, this is the case already.  People are free to believe whatever they like.  They can choose to research their beliefs and challenge them more.  They might read t... (read more)

2RationalDino
You may be missing context on my reference to the 4 color problem. The original 1976 proof, by Appel and Haken, took over 1000 hours of computer time to check. A human lifetime is too short to verify that proof. This eliminates your first option. The Objectivist cannot, even in principle, check the proof. Life is too short. Your first option is therefore, by hypothesis, not an option. You can believe the AI or not. But you can't actually check its reasoning. The history of the 4 color problem proof shows this kind of debate. People argued for nearly 20 years about whether there might be a bug. Then an independent, and easier to check, computer proof came along in 1995. The debate mostly ended. More efficient computer generated proofs have since been created. The best that I'm aware of is 60,000 lines. In principle that would be verifiable by a human. But no human that I know of has actually bothered. Instead the proof was verified by the proof assistant Coq. And, today, most mathematicians trust Coq over any human. We have literally come full circle on the 4 color problem. We started by asking whether we can trust a computer if a human can't check it. And now we accept that a computer can be more trustworthy than a human! However it took a long time to get the proof down to such a manageable size. And it took a long time to get a computer program that is so trustworthy that most believe it over themselves. And so the key epistemological challenge. What would it take for you to trust an AI's reasoning over your own beliefs when you're unable to actually verify the AI's reasoning?
Krantz10

I'm not sure I can imagine a concrete example of an instance where both (1) everything that it said made sense and (2) I was not able to follow the full argument.

Maybe you could give me an example of a scenario?

I believe, if the alignment bandwidth is high enough, it should be the case that whatever an external agent does could be explained to 'the host' if that were what the host desired.

2RationalDino
Concrete example. Let's presuppose that you are an Objectivist. If you don't know about Objectivism, I'll just give some key facts. 1. Objectivists place great value in rationality and intellectual integrity. 2. Objectivists believe that they have a closed philosophy. Meaning that there is a circle of fundamental ideas set out by Ayn Rand that will never change, though the consequences of those ideas certainly are not obvious and still need to be worked out. 3. Objectivists believe that there is a single objective morality that can be achieved from Ayn Rand's ideas if we only figure out the details well enough. Now suppose that an Objectivist used your system. And the AIs came to the conclusion that there is no single objective morality obtainable by Ayn Rand's ideas. But the conclusion required a long enumeration of different possible resolutions, only to find a problem in each one. With the enumeration, like the proof of the 4-color problem, being too long for any human to read. What should the hypothetical Objectivist do upon obtaining the bad news? Abandon the idea of an absolute morality? Reject the result obtained by the intelligent AI? Ignore the contradiction? Now I don't know your epistemology. There might be no such possible conflict for you. I doubt there is for me. But in the abstract, this is something that really could happen to someone who thinks of themselves as truly rational.