LESSWRONG
AnthonyC

Comments, sorted by newest
Everyday Clean Air
AnthonyC · 18h · 20

Is there a reason hypochlorous acid fogging isn't part of the conversation as well? Either alone or in combination with UVC. It's very safe at the relevant concentrations, and can sterilize a room in minutes using water, electricity, salt, and vinegar.

Some Sun Tsu quotes sound like they're actually about debates/epistemics
AnthonyC · 19h · 20

At a sufficient level of abstraction, the fundamental principles of strategy are approximately substrate-independent, I think. They are formless. Void.

what if your enemy also knows themselves and knows you?

IDK what Sun Tzu would say to this, but I would say, 

  1. No one in practice has actually perfect knowledge, so one side would still have more accurate knowledge than the other, and
  2. The quote does not say you will win every battle. It says you need not fear the result. It might be that you know the result will be that you will lose, and therefore you should either avoid the battle or surrender from the start. To paraphrase another well-known leader with a challenging strategic situation: If you know you will win, there is no need to fear. If you know you will lose, it's of no use to fear.
AI #142: Common Ground
AnthonyC · 2d · 20

California’s Attorney General will be hiring an AI expert.

The linked article headline says they're hiring an AI 'expert.' IDK what they're planning on paying this person, but I doubt it's enough to hire an expert. An 'expert' seems like exactly what they'll get.

The Pope Offers Wisdom
AnthonyC · 3d · 20

Hmm...

How human-like do safe AI motivations need to be?
AnthonyC · 4d · 20

This is an interesting question. Without commenting on whether I think the approach would work (which, if it does, would be a great thing!), I'll note that it does not address the "anyone" dimension of IABIED. In other words: one reason to try to build long-term consequentialist AI (if we think we can do so safely) would be to prevent anyone else from doing so unsafely.

Mourning a life without AI
AnthonyC · 4d · 42

Would have != could have.

We're Not The Center of the Moral Universe
AnthonyC · 7d · 20

This was also my reaction, better stated than I would have done.

I think there's a version of this argument that says that most people would not reflectively endorse the animal suffering they cause, if they truly understood themselves and their own values in a CEV-like sense. I don't know if that version is true either.

Mourning a life without AI
AnthonyC · 8d · 182

If there was a button that would kill me with a 60% probability and transport me into a utopia for billions of years with a 15% probability, I would feel very scared to press that button

This is because the correct answer is option three: try to modify the button to lower the 60 and raise the 15, until such time as a 1-in-5 chance of survival is a net improvement relative to your default situation. I'd be much more likely to press that button if I'd just jumped out of an airplane without a parachute. Or if there was a hundred mile wide asteroid near-guaranteed to hit Earth next Tuesday.

Also, this is the first year where the people close to me are cognizant enough of AI that I can talk to them about life plan derailment expectations and not be dismissed as crazy. I can tell my parents to try to really attend to their health more than they have in the past, and why. I can explain to my wife that, hey, we should both expect to start surfing a wave of frequent job changes until the concept of a job stops making sense. It's been honestly very freeing to be able to discuss these things somewhere other than this community. I'm still a little hesitant to openly talk to my sisters about what their children's futures might look like, but even that is starting to change.

A country of alien idiots in a datacenter: AI progress and public alarm
AnthonyC · 8d · 40

Excellent post, thank you! Two thoughts:

  1. When I talk to less knowledgeable people about AI, they tend not to distinguish capabilities/incompetence from poor implementation choices. For example, at the grocery store yesterday the cashier and bagger were complaining about the new AI camera monitoring system tracking everything they do. It's easy to blame the AI because it's new, but at some level they're really mad about a boss demonstrating that he doesn't trust his employees to do their jobs but does trust the outputs of a still-error-prone automated system without sufficient human review.
  2. "They can't hold a whole job, let alone self-improve, without help" - this kind of talk (which I hear regularly) feels like we're at the "craftsmen balk when asked to work the first assembly line" stage. After the development of cottage industry, factories, and the assembly line, and almost 70 years from "I, Pencil," we still haven't collectively internalized the idea that "whole job" is not a coherent concept, that tasks can be refactored, and that it's too bad for the humans left trying to keep up and learn the right skills and mindset.
Build the life you actually want
AnthonyC · 8d · 20

Much appreciated, and a concept (on the physical, not digital, side) I've had trouble explaining to people in the past. I've mentioned elsewhere on LW that I sold my house and lived in an RV for the past 4 years. It was very freeing and good for my mental health to downsize and learn to live minimally. We always intended to eventually build a house when we decided to stop traveling, and apparently a lot of people were very surprised that we were planning to build a non-tiny house. We had to explain that the point was never to have less space for its own sake; it's to make sure the space and things we have serve a meaningful purpose in our lives.

Posts:
- AnthonyC's Shortform (7 points, 8mo, 2 comments)
- Dependencies and conditional probabilities in weather forecasts [Question] (8 points, 4y, 2 comments)
- Money creation and debt [Question] (27 points, 5y, 15 comments)
- Superintelligence and physical law (19 points, 9y, 1 comment)
- Scope sensitivity? (1 point, 10y, 3 comments)
- Types of recursion (23 points, 12y, 16 comments)
- David Brooks from the NY Times writes on earning-to-give (14 points, 12y, 3 comments)
- Cryonics priors (9 points, 13y, 22 comments)