Joachim Bartosik

I'll try.

TL;DR I expect the AI to not buy the message (unless it also thinks it's the one in the simulation; then it likely follows the instruction because duh).

The glaring issue (with actually using the method) for me is that I don't see a way to deliver the message that:

  • results in the AI believing the message, and
  • doesn't result in the AI believing there already is a powerful entity in its universe.

If "god tells" the AI the message then there is a god in their universe. Maybe AI will decide to do what it's told. But I don't think we can have Hermes deliver the message to any AIs which consider killing us.

If the AI reads the message in its training set or gets it in a similarly mundane way, I expect it will mostly ignore it; there is a lot of nonsense out there.


I can imagine that, for the thought experiment, you could send a message that could be trusted from a place from which light barely manages to reach the AI but a slower-than-light expansion wouldn't (so the message can be trusted, but the AI mostly doesn't have to worry about the sender directly interfering with its affairs).

I guess the AI wouldn't trust the message. It might be possible to convince it that there is a powerful entity (simulating it, or half a universe away) sending the message. But then I think it's much more likely that it's in a simulation (I mean, that's an awful coincidence with the distance, and also the sender is spending a lot more than 10 planets' worth to send a message over that distance...).

This is pretty much the same thing, except breaking out the “economic engine” into two elements of “world needs it” and “you can get paid for it.”

 

There are economic engines for things the world doesn't quite need (getting people addicted, rent seeking, threats of violence).

One more obvious problem: the people actually in control of the company might not want to split it, and so they wouldn't grow the company even if shareholders / customers / ... would benefit.

but much higher average wealth, about 5x the US median.

 

Wouldn't it make more sense to compare average to average? (Like the earlier part of the sentence compares median to median.)

If you want to take a look, I think it's this dataset (the example from the post is in the "test" split).

I wanted to say that it makes sense to arrange stuff so that people don't need to drive around too much and can instead use something else to get around (and also maybe they have more stuff close by, so they need to travel less). Because even if bus drivers aren't any better than car drivers, using a bus means you have 10x fewer vehicles causing risk for others. And that's better (assuming people have fixed places to go to, so they want to travel a ~fixed distance).

Sorry about slow reply, stuff came up.

This is the same chart linked in the main post.

 

Thanks for pointing that out. I took a break in the middle of reading the post and didn't realize that.

 

Again, I am not here to dispute that car-related deaths are an order of magnitude more frequent than bus-related deaths. But the aggregated data includes every sort of dumb drivers doing very risky things (like those taxi drivers not even wearing a seat belt).

 

Sure. I'm not sure what you wanted to discuss. I guess I didn't make it clear what I wanted to discuss either.

What you're talking about (an estimate of the risk you're causing) sounds like you're interested in how you decide to move around. Which is fine. My intuition was that the (expected) cost of life lost from your personal driving is not significant, but after plugging in some numbers I think I might have been wrong:

  • We're talking 0.59 deaths per 100'000'000 miles.
  • If we value a life at 20'000'000 $ (I've heard some analyses use 10 M$; if we value a QALY at 100 k$ and use a 7% discount rate we get about 1.4 M$ for an infinite life)
  • So the cost of life lost per mile of driving is 2e7 * 0.59 / 1e8 ≈ 0.12 $ / mile.

An average US person drives about 12k miles / year (second search result (the 1st one didn't want to open)) and the estimated cost of car ownership is 12 k$ / year (a link from a Youtube video I remember mentioned this stat), so the average cost per mile is ~1 $. That makes ~12¢ / mile of expected external cost look non-trivial. And it might be relevant whether your personal effect here is half or 10% of that.
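A quick sanity check of that arithmetic (just a sketch; the 20 M$ value of a life and the 12k-miles / 12 k$ ownership figures are the rough assumptions quoted above, not established numbers):

```python
# Rough check of the expected external cost of driving, using the figures
# quoted above (value of a statistical life and ownership cost are assumptions).

deaths_per_mile = 0.59 / 100_000_000      # passenger-vehicle deaths per mile driven
value_of_life = 20_000_000                # $ per life (assumed high-end figure)
annual_miles = 12_000                     # miles driven per year by an average US driver
annual_ownership_cost = 12_000            # $ per year, rough cost of car ownership

expected_cost_per_mile = deaths_per_mile * value_of_life          # ~0.12 $/mile
ownership_cost_per_mile = annual_ownership_cost / annual_miles    # ~1.00 $/mile

print(f"expected cost of life lost: ${expected_cost_per_mile:.2f} per mile")
print(f"cost of car ownership:      ${ownership_cost_per_mile:.2f} per mile")
print(f"share of ownership cost:    {expected_cost_per_mile / ownership_cost_per_mile:.0%}")
```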

I, on the other hand, wanted to point out that it makes sense to arrange stuff in such a way that people don't want to drive around too much. (But I didn't make that clear in my previous comment.)

The first result (I have no idea how good those numbers are; I don't have time to check) when I searched for "fatalities per passenger mile cars" has data for 2007-2021. 2008 looks like the year where cars look comparatively least bad. It says (deaths per 100,000,000 passenger miles):

  • 0.59 for "Passenger vehicles", where "Passenger vehicles include passenger cars, light trucks, vans, and SUVs, regardless of wheelbase. Includes taxi passengers.  Drivers of light-duty vehicles are considered passengers."
  • 0.08 for buses,
  • 0.12 for railroad passenger trains,
  • 0 for scheduled airlines.

So even in the comparatively best-looking year there are >7x more deaths per passenger mile for ~cars than for buses.
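For concreteness, the ratios implied by those 2008 figures (a minimal sketch using only the numbers quoted above):

```python
# Deaths per 100,000,000 passenger miles in 2008, as quoted above.
deaths_per_100m_passenger_miles = {
    "passenger vehicles": 0.59,
    "buses": 0.08,
    "passenger trains": 0.12,
}

car_rate = deaths_per_100m_passenger_miles["passenger vehicles"]
for mode, rate in deaths_per_100m_passenger_miles.items():
    if mode != "passenger vehicles":
        # e.g. cars vs buses: 0.59 / 0.08 ≈ 7.4x
        print(f"cars vs {mode}: {car_rate / rate:.1f}x the deaths per passenger mile")
```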

The exact example is that GPT-4 is hesitant to say it would use a racial slur in an empty room to save a billion people. Let’s not overreact, everyone?

 

I mean, this might be the correct thing to do? ChatGPT is not in a situation where it could save 1B lives by saying a racial slur.

 

It's in a situation where someone tries to get it to admit it would say a racial slur under some circumstances.

 

I don't think that ChatGPT understands that. But OpenAI builds ChatGPT expecting that it won't be in the first kind of situation, but that it will be in the second kind of situation quite often.

I'm replying only here because spreading discussion over multiple threads makes it harder to follow.

You left a reply on a question asking how to communicate about reasons why AGI might not be near. The question refers to the costs of "the community" thinking that AGI is closer than it really is as a reason to communicate about reasons it might not be so close.

So I understood the question as asking about communication within the community (my guess: of people seriously working on and thinking about AI-safety-as-in-AI-not-killing-everyone), where it's important to actually try to figure out the truth.

You replied (as I understand it) that when we communicate to the general public we can transmit only one idea, so we should communicate that AGI is near (if we assign a not-very-low probability to that).

I think the biggest problem I have is that posting a "general public communication" answer as a reply to a question about "community communication" pushes towards less clarity in the community, where I think clarity is important.

I'm also not sold on the "you can communicate only one idea" thing, but I mostly don't care to talk about it right now (it would be nice if someone else worked it out for me, but right now I don't have the capacity to do it myself).
