Krantz10

Here are some common questions I get, along with answers.

How do individuals make money?

By evaluating arguments, line by line, with their evaluations made public. They do this on a social media platform (similar to X) where each element in the feed is a formal proposition and the user has two slider bars, "confidence" and "value", each ranging from 0 to 1.

Why would someone spend their time evaluating arguments?

Because others are willing to pay them. The payer can be (1) an individual trying to persuade another individual of a proposition, (2) a group or organization dedicating capital to a specific set of propositions, or (3) an individual choosing where publicly funded capital ought to be allocated (the primary reason for the "value" metric).
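A minimal sketch of the data model implied above, in Python. Everything here is hypothetical (no implementation is published): the class names, the bounty pool, and the two-slider representation are my illustration of the described mechanics, not the author's design.

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    """One user's public, line-by-line evaluation of a proposition."""
    user: str
    confidence: float  # degree of belief the proposition is true, in [0, 1]
    value: float       # how much attention/capital it deserves, in [0, 1]

    def __post_init__(self):
        for x in (self.confidence, self.value):
            if not 0.0 <= x <= 1.0:
                raise ValueError("slider values must lie in [0, 1]")

@dataclass
class Proposition:
    """A formal proposition on the feed, with a bounty pool funded by
    whoever wants it evaluated (cases 1-3 above)."""
    text: str
    bounty: float = 0.0
    evaluations: list = field(default_factory=list)

    def evaluate(self, user: str, confidence: float, value: float) -> None:
        self.evaluations.append(Evaluation(user, confidence, value))

# a sponsor stakes capital; an evaluator records a public judgment
p = Proposition("All ten swans in the box are white.", bounty=5.0)
p.evaluate("alice", confidence=0.9, value=0.4)
```

The point of the sketch is only that each feed item carries public, bounded judgments plus staked capital; how payouts are computed is left unspecified, as it is in the comment.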

Why would others pay them?

Because pointing out contradictions among someone's publicly documented beliefs is a good way to demonstrate to the public why they are wrong. This is fundamentally the same reason analytic philosophers have other philosophers write out their arguments in the first place: it makes it easy to point out where they are wrong.
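Over a public ledger of evaluations, this "pointing out contradictions" step could in principle be automated. A toy sketch, assuming the data is a map from user to confidences and using a naive "NOT " prefix as the negation convention (both assumptions are mine, not the author's):

```python
def find_contradictions(ledger, threshold=0.7):
    """Flag users who assign confidence >= threshold to both a
    proposition and its negation. `ledger` maps
    user -> {proposition text: confidence}; a leading 'NOT ' marks
    negation (a toy convention for illustration only)."""
    flags = []
    for user, beliefs in ledger.items():
        for prop, conf in beliefs.items():
            neg = prop[4:] if prop.startswith("NOT ") else "NOT " + prop
            if conf >= threshold and beliefs.get(neg, 0.0) >= threshold:
                flags.append((user, prop))
    return flags

ledger = {
    "senator": {
        "The budget must shrink.": 0.9,
        "NOT The budget must shrink.": 0.8,  # contradicts the above
        "Swans are birds.": 0.99,            # no negation on record
    }
}
flags = find_contradictions(ledger)  # flags both halves of the contradiction
```

A real system would need a formal proposition language rather than string matching, but the incentive story only requires that contradictions in a public record be cheap to find.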

Why would we want to pay someone for publicly making themselves look obviously wrong?

Because we want them to change their beliefs. Imagine here anyone whose mind you've wanted to change.

Why won't people hold many contradictory beliefs?

Because they don't want to be obviously wrong. Think politicians.

How much money goes to which arguments?

The market decides.

What makes this system different from other communication tools?

It is capable of identifying, and directing discourse toward, the various cruxes we never seem to reach.

There are a number of influential intellectual leaders I've wanted to speak with for many years, because I want to change their beliefs. Our social communication system is broken. If I had access to a ledger they rigorously used to document their accepted beliefs and sound lines of reasoning, I wouldn't need their presence to have an effective argument with them. I could simply insert the "missing proposition" into such a system and 'buy them a wager' on the truth they're not looking at.

This project is really important for a different reason, though: it can also be used as an interpretable, aligned foundation of truth for a symbolic AI, one that could scale an alignment attractor faster than a capabilities one.

I think this could be the one thing capable of pulling the fire alarm Eliezer has been talking about for 20 years. That is, I think society would convey big, important ideas to each other better (the lesson we were supposed to learn from Don't Look Up) if we had a system that gave individuals control over the general attention mechanism of society.

There is a ton more. This is going to need to be decentralized, humanity-verified, and to hit a couple of other security metrics as well. It will take a huge collaboration between the AI sector and crypto. I can't afford a ticket to any of the fancy conferences where they are building crypto cities or thinking formally about collective intelligence like this, let alone trying to merge it into a project like Lenat's or Hillis's. If anyone knows someone in the AI safety grant community who would be willing to look at what I'm not willing to put online, please share my information with them.

Krantz10

Just an update: I did email MIRI and had a correspondence with a representative.

Unfortunately, they were not willing to pass my information on to Eliezer or provide me with a secure way to share further details.

I was encouraged simply to publish my infohazardous material on LW and hope it becomes 'something other people are excited about'; then they would take a serious look at it.

Krantz30

These are great rules. I wish they existed on a decentralized general-constitution ledger, so I could consent to them being true on an infrastructure nobody controls (yet everyone tends to obey, because other people consider it an authority).

That way, when I read them, I could get paid by society for learning to be a good citizen, and I could escape my oppressed state and make a living learning things (authenticating the social contract) on the internet.

Krantz10

If you read the description here you can get a rough idea of what I'm talking about.

Or I would recommend the Manifold predictions and comments.  An intelligent person should be able to discern the scope of the project efficiently from there.

But no, I do not have a paper on the algorithmic logistics published on the internet (the portion similar to the work of Doug Lenat and Danny Hillis; the part that performs the same function as community notes; the part that looks at everyone's constitution of truth and figures out what to recommend to whom), out of concern that it may be infohazardous.

Krantz10

Yes, it does share the property of 'being a system that is used to put information into people's heads'.

There's one really important distinction.

One is centrally controlled (like the dollar).

One is decentralized (like Bitcoin).

There's quite a bit of math that goes into proving the difference.

Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty (variation from the norm) that later becomes consistent.

It would feel more like a prediction market than Khan Academy. Specifically, it won't be 'teaching you a curriculum'. You will be exploring and voting on things. It will do a good job of persuading you about what to go vote on; it will 'recommend' the thing that will make you the most money to think about.
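One toy way to formalize "rewarding novelty that later becomes consistent" (my illustration; no actual reward rule is published): pay an evaluator in proportion to how far they deviated from the consensus at the time of evaluation, times how far the consensus later moved in their direction.

```python
def novelty_reward(early_conf: float, consensus_then: float,
                   consensus_later: float, stake: float = 1.0) -> float:
    """Reward early deviation from consensus that the consensus later
    ratifies; deviation the crowd never ratifies earns nothing (no loss,
    in this toy version). All confidences are in [0, 1]."""
    deviation = early_conf - consensus_then      # how far you stood from the crowd
    movement = consensus_later - consensus_then  # how far the crowd later moved
    return stake * deviation * movement if deviation * movement > 0 else 0.0

# early contrarian, later vindicated: consensus moved 0.2 -> 0.8 toward them
reward_contrarian = novelty_reward(0.9, 0.2, 0.8)
# conformist who merely echoed the then-consensus earns nothing
reward_conformist = novelty_reward(0.2, 0.2, 0.8)
```

This mirrors how prediction-market scoring pays for information the market did not yet have, as opposed to a centralized test that pays for repeating the answer key.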

You see, this is where the money you earn comes from. If people want something thought about, they can incentivize those propositions. Religious leaders will want you to learn about their religion; phone companies will want you to learn about their new phone options. Your neighbors might want you to watch the news. Your parents might want you to learn math. Your wife might want you to know you need to take out the trash. People pay money to other people to demonstrate that they understand things.

People will want to put money into this.

It's marketing, with an actual receipt.


The great thing is, you get to choose what you read.

Krantz10

Thanks.  Done.

I'll let you know if I get a reply.

Krantz10

Why would I want to change a person's belief if they already value philosophical solutions?  I think people should value philosophical solutions. I value them.

Maybe I'm misunderstanding your question.

It seemed like the poster above stated that they do not value philosophical solutions. The paper isn't really aimed at converting a person who doesn't value 'the why' into a person who does. It is aimed at people who already care about 'the why' and are looking to reinforce or challenge their beliefs about what induction is capable of doing.

The principle of uniformity of nature is something we need to assume if we are going to declare that we have evidence that the tenth swan to come out of the box will be white (in the situation where we have a box of ten swans and have observed nine of them come out of the box and be white). Hume successfully convinced me that this declaration can't be made without assuming the principle of uniformity of nature.

What I am claiming, though, is that although we have no evidence to support the assertion 'The 10th swan will be white,' we do have evidence to support the assertion 'All ten swans in the box will be white' (an assertion made before we opened the box). This justification does not depend on assuming the principle of uniformity of nature.


In general, it is a clarification specifically about what induction is capable of producing justification for.

Future observation instances?  No.

But general statements?  I think this is plausible.

It's really just an inquiry into what counts as justification: necessary or sufficient evidence.

Krantz10

It sounds like they are simply suggesting I accept the principle of uniformity of nature as an axiom.

Although I agree that this is the crux of the issue, as it has been discussed for decades, it doesn't really address the points I aim to urge the reader to consider.

Krantz20

This is a good question.

I agree that you can't justify a prediction until it happens, but I'm urging us to consider what it actually means for a prediction to 'happen'. This becomes nuanced when you consider predictions that are statements requiring multiple observations to be justified.

Suppose I predict that a box, which we all know contains 10 swans, contains 10 white swans (my prediction is 'There are ten white swans in this box.'). When does that prediction actually 'happen'? When does it become 'justified'?

I think we all agree that after we've witnessed the 10th white swan, my assertion is justified. But am I at all justified in believing I am more likely to be correct after I've witnessed only 8 or 9 white swans?
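One way to make "more likely to be correct after 9 swans" precise, under an assumption the comment itself does not make (a uniform prior over how many of the 10 swans are white), is Bayesian updating on draws without replacement:

```python
from math import comb

def prob_all_white(seen_white: int, total: int = 10) -> float:
    """P(all `total` swans in the box are white), after drawing
    `seen_white` swans without replacement and finding all of them
    white, under a uniform prior on k = number of white swans (0..total).

    Likelihood of an all-white draw of n swans given k whites:
    C(k, n) / C(total, n), since the drawn subset is uniformly random.
    """
    n = seen_white
    weights = [comb(k, n) / comb(total, n) for k in range(total + 1)]
    return weights[total] / sum(weights)

# support for the general statement grows with each white swan observed
probs = [round(prob_all_white(n), 3) for n in (0, 8, 9)]
# probs == [0.091, 0.818, 0.909]
```

With this (purely illustrative) prior, the general statement "all ten swans in the box are white" goes from 1/11 before any draws to 9/11 after eight white swans and 10/11 after nine, so the intermediate observations do confer graded justification. Whether adopting any such prior smuggles the uniformity assumption back in is exactly the controversy the comment points at.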

This is controversial.
