All of null's Comments + Replies

puffymist · 10

Example in California:

I OBJECT to the use of my personal information, including my information on Facebook, to train, fine-tune, or otherwise improve AI.

I assert that my information on Facebook includes sensitive personal information as defined by the California Consumer Privacy Act: I have had discussions about my religious or philosophical beliefs on Facebook.

I therefore exercise my right to limit the disclosure of my sensitive personal information.

Despite any precautions by Meta, adversaries may later discover "jailbreaks" or otherwise adversarial pro

... (read more)
puffymist · 10

Or you could have an LLM write it for you.

Example prompt:

Meta, Inc wants to train AI on my personal data. Its notice is as follows:

> You have the right to object to Meta using the information you’ve shared on our Products and services to develop and improve AI at Meta. You can submit this form to exercise that right.
> 
> AI at Meta is our collection of generative AI features and experiences, like Meta AI and AI Creative Tools, along with the models that power them.
> 
> Information you’ve shared on our Products and services could be things like:
> 
> - Posts
> - Photos and their captions
> - The messages you send to an AI
> 
> We do not use the content of your private messages with friends and family to train our AIs.
> 
> We’ll review objection requests in accordance with relevant data protection laws. If your request is honored, it will be applied going forward.
> 
> We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our Products and services. For example, this could happen if you or your information:
> 
> - Appear anywhere in an image shared on our Products or services by someone who uses them
> - Are mentioned in posts or captions that someone else shares on our Products and services
> 
> To learn more about the other rights you have related to information you’ve shared on Meta Products and services, visit our Privacy Policy.

I live in [JURISDICTION]. Could you take on the role of a Dangerous Professional, as Patrick McKenzie would say, and help me draft an objection request under [JURISDICTION] law?
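If you'd rather script this than paste into a chat window, here is a minimal sketch using the OpenAI Python client; the model name and client usage are illustrative assumptions, and any capable LLM would do:

```python
# Minimal sketch (assumptions: openai-python v1 client, illustrative model name).
# Sends the objection-drafting prompt above to an LLM and prints the draft.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = """Meta, Inc wants to train AI on my personal data. Its notice is as follows:
[... paste the notice quoted above ...]
I live in [JURISDICTION]. Could you take on the role of a Dangerous Professional,
as Patrick McKenzie would say, and help me draft an objection request under
[JURISDICTION] law?"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute your preferred model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```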
puffymist · 10

Example in UK / EU:

I OBJECT to the use of my personal data, including my data on Facebook, to train, fine-tune, or otherwise improve AI.

Against legitimate interest: I assert that Meta's processing of my personal data to train, fine-tune, or otherwise improve AI (hereinafter: "to train AI") would violate the requirements of legitimate interest under the GDPR, as follows:

  • Failing the "necessity" prong: OpenAI and Anthropic have successfully trained highly capable AI models without the use of my personal data. My personal data is therefore unnecessary to

... (read more)
puffymist · 20

Re: opting out of Facebook training AI on your data:

Fill in the form like a Dangerous Professional, as Patrick McKenzie would put it.

puffymist · 10

Gorton, G. (2018), "Financial Crises" is a survey article. I thought its explanation of banking and financial crises as information shocks was enlightening.

Banking and financial crises as information shocks

Money, or bank notes (or similar on-demand debt liabilities of a bank), need to be information-insensitive (thus interchangeable: $1 at Bank A == $1 at Bank B) to facilitate exchange. Otherwise, uninformed agents (any non-banking professionals) face adverse sele... (read more)
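A toy illustration of that adverse-selection point (my own numbers, not Gorton's): once notes are information-sensitive, an uninformed agent must discount every note just to break even.

```python
# Toy model (illustrative numbers, not from Gorton 2018): information-sensitive
# bank notes force uninformed agents to price in adverse selection.
good_value, bad_value = 1.00, 0.50  # redemption value of a sound vs a shaky bank's note

# Informed agents preferentially spend shaky notes and hoard sound ones,
# so the notes offered to an uninformed agent are worse than the circulating average.
p_bad_offered = 0.5                 # assumed share of shaky notes among those offered

break_even = (1 - p_bad_offered) * good_value + p_bad_offered * bad_value
print(f"An uninformed agent breaks even accepting a $1 note at only ${break_even:.2f}")
# Notes no longer trade at par ($1 at Bank A != $1 at Bank B), impairing exchange.
```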

> If we had the ability to create one machine capable of centrally planning our current world economy, how much processing power / memory would it need to have? Interested in some Fermi estimates.

To which I would reply, this is AI-complete, at which point the AI would solve the problem by taking control of the future. That’s way easier than actually solving the Socialist Calculation Debate.


As a data point, Byrne Hobart argues in "Amazon Sees Like a State" that Amazon is approximately solving the economic calculation problem (ECP) in the Socialist Calculat... (read more)

Ah, increasing the number of researchers is simply increasing $N$ in $\sqrt{N}$. I didn't realize that!

Minor comment on one small paragraph:

> Price's Law says that half of the contributions in a field come from the square root of the number of contributors. In other words, productivity increases linearly as the number of contributors increases exponentially. Therefore, as the number of AI safety researchers increases exponentially, we might expect the total productivity of the AI safety community to increase linearly.

I think Price's law is false, but I don't know what law it should be instead. I'll look at the literature on the rate of scientific progress (eg... (read more)
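For what it's worth, here is a quick simulation of how Price's Law fares when per-author output actually follows Lotka's Law (my own toy setup: exponent 2, output capped at 10,000 papers per author):

```python
import numpy as np

rng = np.random.default_rng(0)

def lotka_sample(num_authors, exponent=2.0, n_max=10_000):
    """Draw per-author paper counts n with P(n) proportional to n**-exponent."""
    n = np.arange(1, n_max + 1)
    p = n.astype(float) ** -exponent
    p /= p.sum()
    return rng.choice(n, size=num_authors, p=p)

# Price's Law predicts the top sqrt(N) authors produce half of all output.
for num_authors in (100, 10_000, 1_000_000):
    papers = np.sort(lotka_sample(num_authors))
    top = int(np.sqrt(num_authors))
    share = papers[-top:].sum() / papers.sum()
    print(f"N={num_authors:>9,}: top {top:>4} authors produce {share:.0%} "
          f"of output (Price's Law predicts 50%)")
```

In runs like this the elite share drifts with $N$ (and with the output cap) rather than holding at one half, consistent with Price's Law being at best an approximation of Lotka's Law.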

Stephen McAleese · 2
Edit: rewrote the section on Price's Law to use Lotka's Law instead.
Stephen McAleese · 2
Thanks for the explanation. It seems like Lotka's Law is much more accurate than Price's Law (though Price's Law is simpler and more memorable).

Intuition pump / generalising from fictional evidence: in the games Pandemic / Plague Inc. (where the player "controls" a pathogen and attempts to infect the whole human population on Earth), a lucky, early cross-border infection can help you win the game faster — more than the difference between a starting infected population of 1 vs 100,000.

This informs my intuition behind when the bonus of earlier spaceflight (through human help) could outweigh the penalty of not dismantling Earth.


When might human help outweigh the penalty of not dismantling Earth? It r... (read more)
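To put rough numbers on that intuition, a toy exponential-growth comparison (my own illustrative parameters, not from either game):

```python
import math

# Toy model: growth x(t) = x0 * exp(r*t). A head start of dt beats a k-fold
# larger starting population whenever exp(r*dt) > k, i.e. dt > ln(k) / r.
r = math.log(2) / 1.0  # assumed growth rate: one doubling per day
k = 100_000            # starting with 100,000 infected rather than 1

dt = math.log(k) / r
print(f"A head start of {dt:.1f} days outweighs a {k:,}x larger start")
# ~16.6 days: with daily doubling, ~17 doublings equal a factor of 100,000.
```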

Answer by puffymist · 130

Even if we're already doomed, we might still negotiate with the AGI.

I borrow the idea in Astronomical Waste. The Virgo Supercluster has a luminosity of about $3 \times 10^{12}$ solar luminosities $\approx 10^{39}$ W, losing mass at a rate of about $10^{22}$ kg/s.[1]

The Earth has mass $6 \times 10^{24}$ kg.

If human help (or nonresistance) can allow the AGI to effectively start up (and begin space colonization) 600 seconds = 10 minutes earlier, then it would be mutually beneficial for humans to cooperate with the AGI (in the initial stages when the AGI could benefit from... (read more)
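Spelling out the arithmetic behind those figures (standard constants, $E = mc^2$):

```python
# Fermi check of the numbers above using standard constants.
L_SUN = 3.8e26    # W, solar luminosity
C = 3.0e8         # m/s, speed of light
M_EARTH = 6.0e24  # kg

luminosity = 3e12 * L_SUN            # Virgo Supercluster, ~3e12 solar luminosities
mass_loss_rate = luminosity / C**2   # E = mc^2  =>  dm/dt = L / c^2

print(f"Mass radiated away: {mass_loss_rate:.1e} kg/s")                     # ~1e22 kg/s
print(f"Time to radiate one Earth mass: {M_EARTH / mass_loss_rate:.0f} s")  # ~500 s
```

The ~500 s this prints is the same order as the 600 s head start in the comment.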

The opportunity cost to spare Earth is far larger than the cost to spare a random planet halfway across the universe. The AI starts on Earth. If it can't disassemble Earth for spaceship mass, it has to send a small probe from Earth to Mars, and then disassemble Mars instead, which introduces a fair bit of delay. Not touching Earth is a big restriction in the first few years and first few doublings. Once the AI reaches a few other solar systems, not touching Earth becomes a less important restriction.

Of course, you can't TDT trade with the AI because you have no acausal correlation with it. We can't predict the AI's actions well enough.

Yitz · 3
I’m really intrigued by this idea! It seems very similar to past thoughts I’ve had about “blackmailing” the AI, but with a more positive spin.

I think the dramatic impact would be stronger without the "The end", with more blank space added instead.

Idea copied from a comment on the final chapter of Three Worlds Collide.