ChristianKl

Sequences

Random Attempts at Applied Rationality
Using Credence Calibration for Everything
NLP and other Self-Improvement
The Grueling Subject
Medical Paradigms

Comments

I would expect, given the nature of how AI gets deployed, that a lot of cooperation happens pretty soon. 

Let's say I want my agent to book me a doctor's appointment because I have an issue. I would expect that my AI agent will fairly soon be able to autonomously send out emails to book a doctor's appointment. On the other end, it makes a lot of sense for the doctor's office to have an AI that manages appointments on their side. 

In Germany, where I live, how soon the appointment is can depend on factors like how urgent the appointment is and the type of insurance the patient is using.

This is a simple case of two AIs cooperating with each other so that the doctor's appointment gets scheduled. 

Interestingly, the AI of the patient has the choice whether or not to defect against the AI of the doctor's office. The AI of the patient can lie about how critical the patient's condition is in order to get an earlier appointment.
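
To make that cooperate/defect structure concrete, here's a toy payoff sketch in Python. The strategy names and payoff numbers are made up for illustration; they don't come from any real scheduling system:

```python
# Toy illustration of the cooperate/defect structure described above.
# All payoff numbers are invented for illustration, not from any real system.

# The patient's agent either reports urgency honestly ("honest") or exaggerates it
# ("exaggerate"). The clinic's agent either trusts reports ("trust") or discounts
# them ("discount").
PAYOFFS = {
    # (patient_strategy, clinic_strategy): (patient_payoff, clinic_payoff)
    ("honest", "trust"):        (2, 2),  # accurate triage, schedule works for both
    ("exaggerate", "trust"):    (3, 0),  # patient gets an earlier slot, triage is distorted
    ("honest", "discount"):     (1, 1),  # honest patients wait longer than necessary
    ("exaggerate", "discount"): (0, 1),  # exaggeration is expected and ignored
}

for (patient, clinic), (p_pay, c_pay) in PAYOFFS.items():
    print(f"patient={patient:10s} clinic={clinic:8s} -> patient {p_pay}, clinic {c_pay}")
```

Exaggerating only pays off for the patient's agent as long as the clinic's agent trusts urgency reports; once exaggeration is expected and discounted, both sides end up worse off than under mutual honesty.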

There's no need for near-human-capability AI to be around for AIs negotiating with each other to be relevant. All the major companies will fairly soon need training data to optimize how their AIs negotiate.

If you go to Yudkowsky and say "Hey, I want to build an AGI, can you please tell me how to do so safely?", he will answer "I can't tell you how to build an AGI safely. If you are going to successfully build an AGI, you are likely to kill us all. Don't build an AGI."

Yudkowsky is aware that he doesn't have the expertise that would be necessary to build a safe AGI. If you then search for an expert who gives you some advice about how to build your AGI, and you succeed in building your AGI and kill everyone, that was a trap of your own making.

If you want a past example, David Chapman's article about Domestic Science is great. More recently, you had a flatlining of childhood obesity rates during the second Bush administration. Afterwards, Michelle Obama started a huge program to improve childhood nutrition through various measures recommended by nutrition scientists. After those programs, childhood obesity rates rose again. 

Scientists studying nutrition do believe themselves to be experts, but seem to give advice that produces more problems than it solves. 

Reality doesn't just reward you for having put in a lot of effort. 

Before asking for help, they made a wrong decision, which maybe solved their short term problems but shot them in the foot in the long term. This bad decision made other bad decisions more likely until they were helplessly stuck in a trap of their own making.

It's worth noting that there are domains where there are no experts. In those domains, people who explore them still tie themselves up in traps of their own making.

A class of people who think of themselves as experts but don't really have a clue is the most dangerous when it comes to trapping themselves in traps of their own making.

AGI development is the obvious example for LessWrong; other examples are better left as an exercise for the reader.

The US does not have laws that forbid people who don't have a security clearance from publishing classified material. The UK is a country that has such laws, but in the US the First Amendment prevents them.

I don't think that choosing a jurisdiction in the hope that it will protect you is a good strategy. If you want to host leaks from the US in China, it's possible that China offers to suppress that information as part of a deal.

4chan has a single point of failure. If the NSA were motivated enough to burn some of their 0-days, taking it offline wouldn't be hard. 

Taking a decentralized system with an incentive structure like ArDrive down is significantly harder.

Attacking ArDrive is likely also politically more costly, as it would break other uses of it. The people with NFTs that store data on ArDrive can pay lobbyists to defend it.

Just convincing the developers is not enough. You also need the patch they created to be accepted by the network, and it's possible for the system to be forked if different network participants want different things.

Torrents are also bad for privacy: everybody can see the IP addresses of all the other people who participate in a torrent.
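
To illustrate why, here's a minimal, simplified sketch of a tracker-style announce. This is not real BitTorrent protocol code; it just shows that every participant, including an investigator who joins the swarm, receives the other participants' IP addresses:

```python
# Simplified stand-in for a torrent tracker: every peer that announces itself
# gets back the IPs and ports of everyone else in the swarm.

swarm = {}  # info_hash -> list of (ip, port) for everyone in the swarm

def announce(info_hash: str, ip: str, port: int) -> list[tuple[str, int]]:
    """A peer announces itself and receives every other peer's IP and port."""
    peers = swarm.setdefault(info_hash, [])
    peers.append((ip, port))
    return [p for p in peers if p != (ip, port)]

announce("leak-docs", "198.51.100.7", 6881)
announce("leak-docs", "203.0.113.42", 6881)
# Any new participant (including an investigator) immediately sees all prior IPs:
print(announce("leak-docs", "192.0.2.99", 6881))
```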

For privacy, onion routing is great; Tor uses it. Tor, however, doesn't have a data storage layer.

Veilid and the network on which Session runs use onion routing as well and have a data storage layer.

In the case of Veilid you get the nice property that the more people want to download a certain piece of content, the more nodes in the network store the information.
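
Here's a minimal sketch of that property, assuming a simple cache-on-the-path scheme. This is not Veilid's actual API, just an illustration of how popularity-driven replication works:

```python
# Conceptual sketch of popularity-driven replication: content requested through a
# relay node gets cached there, so popular content ends up on more nodes.

import random

class Node:
    def __init__(self, name: str):
        self.name = name
        self.cache: dict[str, bytes] = {}

def fetch(content_id: str, origin: Node, relays: list[Node]) -> bytes:
    """Fetch content via a random relay; the relay keeps a copy."""
    relay = random.choice(relays)
    data = relay.cache.get(content_id) or origin.cache[content_id]
    relay.cache[content_id] = data  # cache on the path: more demand -> more copies
    return data

origin = Node("origin")
origin.cache["report.pdf"] = b"..."
relays = [Node(f"relay-{i}") for i in range(10)]

for _ in range(50):  # 50 independent requests for the same content
    fetch("report.pdf", origin, relays)

holders = sum(1 for n in relays if "report.pdf" in n.cache)
print(f"{holders} of {len(relays)} relay nodes now hold a copy")
```

The more often the content is requested, the more nodes end up holding a copy, so taking down any single host accomplishes little.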

As far as creating public knowledge goes, I do think that Discord servers and Telegram chats currently serve as social media.

When it comes to metadata and plain-text extraction, it's worth noting that metadata can be used both to verify documents and to expose whistleblowers. If a journalist can verify the authenticity of emails because they have access to the metadata, that's useful.

  • guidelines for latest hard-to-censor social media
    • to publish torrent link, maybe raw docs, and social media discussions
    • guidelines must be country-wise and include legal considerations; always use social media of a country different from the country where the leak happened.

The Session messenger is probably better than country-specific social media.

country-wise torrents (not sure if this is needed)

  • torrents proposed above are illegal in all countries. instead, we can legally circulate country A's secrets via torrent within country B and legally circulate country B's secrets via torrent within country A. only getting the info past the border is illegal; for that we again need SecureDrop or a hard-disk dead drop if any country or org seals its geographic borders.

The US does have the First Amendment. That currently means that all the relevant information about AI labs is legal to share. It's possible to have a legal regime where sharing AI model weights gets legally restricted, and for the sake of AI safety I don't think we want OpenAI researchers to leak the model weights of powerful models.

The main information that's currently forbidden from being shared legally in the US is child pornography, but whistleblowing is not about intentionally sharing child pornography. When it comes to child pornography, the right thought isn't "How can we host it through a jurisdiction where it's legal?", but rather to try to avoid sharing it.

While sharing all the bitcoin blocks involves sharing child pornography, nobody went after bitcoin miners for child pornography. People who develop cryptography and don't intend to share child pornography generally have not been prosecuted. 

Torrents are not a good technology for censorship-resistant hosting. Technology like Veilid, where a dataset that gets queried by a lot of people automatically gets distributed over more of the network, is better because it prevents the people who host the content from being DDoSed. 

If you just want to host plaintext, blockchain technology like ArDrive also exists. You need to pay ~$12 per GB, but if you do so, you get permanent storage that's nearly impossible to censor. 
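
As a rough back-of-the-envelope calculation at that ~$12/GB figure (the actual price fluctuates, so treat these numbers as illustrative only):

```python
# Back-of-the-envelope cost at the ~$12/GB figure mentioned above.

price_per_gb = 12.0  # USD, approximate

for size_mb in (10, 250, 1000):
    cost = size_mb / 1024 * price_per_gb
    print(f"{size_mb:5d} MB of documents -> roughly ${cost:.2f} for permanent storage")
```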

I don't think it's a spectrum. A spectrum is something one-dimensional. The problem with your distinction is that someone might think that they are safe from problems arising from power seeking (in a broader definition) if they prevent the AI from doing things that they don't desire. 

There are probably three variables that matter:

  1. How much agency the human has in the interaction.
  2. How much agency the AI agent has in the interaction.
  3. Whether the AI cooperates or defects in the game-theoretic sense.

If you have a low-agency CEO and a high-agency, very smart middle manager that always cooperates, that middle manager can still acquire more and more power over the organization.
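
A minimal sketch of why this is three independent variables rather than one spectrum (the field names and example values are illustrative only):

```python
# Toy encoding of the three variables above, to illustrate that they move
# independently rather than forming a single spectrum.

from dataclasses import dataclass

@dataclass
class Interaction:
    human_agency: str    # "low" or "high"
    ai_agency: str       # "low" or "high"
    ai_cooperates: bool  # cooperate vs. defect in the game-theoretic sense

# The low-agency CEO / high-agency middle manager case from the text:
manager = Interaction(human_agency="low", ai_agency="high", ai_cooperates=True)

# Cooperation alone doesn't rule out power accumulation when the agency gap is large.
concerning = manager.ai_agency == "high" and manager.human_agency == "low"
print(f"cooperates={manager.ai_cooperates}, power accumulation still possible={concerning}")
```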

Plenty of things I desire happen without me intending for them to happen.

In general, the reference class is misalignment of agents, and AIs aren't the only agents. We can look at how the terms work in a corporation.

There are certain powers that a CEO of a big corporation intentionally delegates to a mid-level manager. I think there's also plenty that a CEO appreciates his mid-level manager doing that the CEO does not explicitly task the mid-level manager with. The CEO likely appreciates it if the mid-level manager autonomously takes charge of solving problems without bothering the CEO about them. 

On the other hand, there are also cases where the mid-level manager plays company politics and engineers a situation in which the CEO gives the mid-level manager certain powers that the CEO doesn't desire to give him. The CEO feels forced to do so because of how the company politics play out, so the CEO does intentionally hand those powers to the mid-level manager. 

What's the difference between being given affordances and getting power? If you are given more affordances, you have more power. However, power seeking is about doing things that increase the affordances you have. 

I searched a bit more and it seems they don't have personal relationships with other members of the same species the way mammals and birds can.

Personal relationships seem to be something that needs intelligence and that birds and mammals evolved separately.

The 2019 update added many codes that orthodox Western medicine disagrees with. 

If someone wants Chinese medicine codes, they got them in the update. Ayurveda got codes. Activists for chronic Lyme got their codes as well.

The philosophy of that update seemed to be: "If there's anything that doctors want to diagnose, it should get a code so that it can go into standardized electronic health records."
