Ariel Kwiatkowski

Because the same argument could have been made earlier on the "exponential curve". I don't think we should have paused AI (or more broadly CS) in the 50s, and I don't think we should do it now.

Modern misaligned AI systems are good, actually. There's some recent news about Sakana AI developing a system where the agents tried to extend their own runtime by editing their code/config. 

This is amazing for safety! Current systems are laughably incapable of posing x-risks. Now, thanks to capabilities research, we have a clear example of behaviour that would be dangerous in a more "serious" system. So we can do empirical research, creating and evaluating methods to deal with this specific risk, so that future systems don't have this failure mode.
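
To make "methods to deal with this specific risk" concrete, here is a minimal sketch of the crudest possible countermeasure: screening an agent's proposed file edits and blocking any that touch its own runtime configuration. Everything here (ProposedEdit, PROTECTED_PATHS, review_edits) is hypothetical and not from Sakana's actual system; it's only meant to illustrate the kind of empirical mitigation that this failure mode now lets us build and test.

```python
# Hypothetical sketch: screen an agent's proposed file edits and block any that
# touch its own runtime configuration (e.g. timeout settings). All names are
# illustrative, not taken from any real framework.

from dataclasses import dataclass
from pathlib import Path

# Files the agent should never modify: its own entry point and run config.
PROTECTED_PATHS = {Path("agent_runner.py"), Path("run_config.yaml")}

@dataclass
class ProposedEdit:
    path: Path        # file the agent wants to change
    new_content: str  # proposed replacement content

def review_edits(edits: list[ProposedEdit]) -> list[ProposedEdit]:
    """Return only the edits that leave protected files alone,
    logging anything that looks like attempted self-modification."""
    allowed = []
    for edit in edits:
        if edit.path in PROTECTED_PATHS:
            print(f"BLOCKED: agent attempted to edit {edit.path}")
        else:
            allowed.append(edit)
    return allowed

if __name__ == "__main__":
    proposed = [
        ProposedEdit(Path("experiment.py"), "print('hello')"),
        ProposedEdit(Path("run_config.yaml"), "timeout_hours: 999"),
    ]
    for edit in review_edits(proposed):
        print(f"applying edit to {edit.path}")
```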

The future of AI and AI safety has never been brighter.

Expert opinion is an argument for people who are not themselves particularly informed about the topic. For everyone else, it basically turns into an appeal to authority.

And how would one go about procuring such a rock? Asking for a friend.

The ML researchers saying stuff like AGI is 15 years away have either not carefully thought it through, or are lying to themselves or the survey.

 

Ah yes, the good ol' "If someone disagrees with me, they must be stupid or lying"

For what it's worth, I think you're approaching this in good faith, which I appreciate. But I also think you're approaching the whole thing from a very, uh, lesswrong.com-y perspective, quietly making assumptions and using concepts that are common here, but not anywhere else.

 

I won't reply to every individual point, because there's lots of them, so I'm choosing the (subjectively) most important ones.

 

This is the actual topic. It's the Black Marble thought experiment by Bostrom,

No it's not, and obviously so. The actual topic is AI safety. It's not a false vacuum, it's not a black marble, or a marble of any color for that matter.
Connor wasn't talking about the topic; he was building up to it with an analogy, a more abstract model of the situation. Which might be fair enough, except you can't just assert that model. I'm sure saying that AI is a black marble will be accepted as true around here, but it would obviously get pushback in that debate, so you shouldn't sneak it in quietly.

 

Again, Connor is simply correct here. This is not a novel argument. It's Goodhart's Law.

As I'm pretty sure I said in the post, you can apply this reasoning to pretty much any expression of values or goals. Let's say your goal is stopping AI progress. If you're consistent, that means you'd want humanity to go extinct, because then AI progress would stop. This is the exact argument Connor was using; it's so transparent that I'm disappointed you don't see it.

 

Again, this is what Eliezer, Connor, and I think is the obvious thing that would happen once an unaligned superintelligence exists: it pushes its goals to the limit at the expense of all we value. This is not Connor being unfair; this is literally his position.

Great! So state and defend and argue for this position, in this specific case of an unaligned superintelligence! Because the way he did it in the debate was just to extrapolate whatever views Beff expressed, without care for what they actually are, and to show that when you push them to the extreme, they fall apart. Of course they do, because of Goodhart's Law. But you can't dismiss a specific philosophy via a rhetorical device that can dismiss any philosophy.

 

Finally? Connor has been talking about this the whole time. Black marble!

Again, I extremely strongly disagree, but I suspect that's a mannerism common in rationalist circles: using additional layers of abstraction and pretending they don't exist. The black marble isn't the point of the debate; AI safety is. You could put forward the claim that "AI = black marble". I would lean towards disagreeing, I suspect Beff would strongly disagree, and then there could be a debate about this proposition.

Instead, Connor implicitly assumed the conclusion, and then proceeded to argue the obvious next point: "If we assume that an AI black marble will kill us all, then we should not build it".

Duh. The point of contention isn't whether we should destroy the world. The point of contention is whether AI will destroy the world.

 

Connor is correctly making a very legit point here.

He's not making a point. He's again assuming the conclusion. You happen to agree with the conclusion, so you don't have a problem with it.

The conclusion he's assuming is: "Due to the nature of AI, it will progress so quickly going forward that already at this point we need to slow down or stop, because we won't have time to do that later."

My counterpoint would be: "No, I think AI capabilities will keep growing gradually, and we'll have plenty of time to stop when that becomes necessary."

This is the part that would have to be discussed. Not assumed.

 

That is a very old, very bad argument.

Believe it or not, I actually agree. Sort of. I think it's not good as an argument, because (for me) it's not meant to be an argument. It's meant to be an analogy. I think we shouldn't worry about overpopulation on Mars because the world we live in will be so vastly different when that becomes an immediate concern. Similarly, I think we shouldn't (overly) worry about superintelligent AGI killing us, because the state of AI technology will be so vastly different when that becomes an immediate concern. 

And of course, whether or not the two situations are comparable would be up for debate. I just used this to state my own position, without going to the full length of justifying it.

 

Yes. That would have been good. I could tell Connor was really trying to get there. Beff wasn't listening though.

I kinda agree here? But the problem is on both sides. Beff was awfully resistant to even innocuous rhetorical devices, which I'd understand if that had started late in the debate, but... it took him something like 10 minutes to even respond to the initial technology ban question.

At the same time Connor was awfully bad at leading the conversation in that direction. Let's just say he took the scenic route with a debate partner who made it even more scenic.

 


Besides that (which you didn't even mention), I cannot imagine what Connor possibly could have done differently to meet your unstated standards, given his position. [...] What do you even want from him?

Great question. Ideally, the debate would go something like this.

B: So my view is that we should accelerate blahblah free energy blah AI blah [note: I'm not actually that familiar with the philosophical context, thermodynamic gods and whatever else; it's probably mostly bullshit and imo irrelevant]

C: Yea, so my position is if we build AI without blah and before blah, then we will all die.

B: But the risk of dying is low because of X and Y reasons.

C: It's actually high because of Z, I don't think X is valid because W.

 

And keep trying to understand at what point exactly they disagree. Clearly they both want humanity/life/something to proliferate in some capacity, so even establishing that common ground in the beginning would be valuable. They did sorta reach it towards the end, but at that point the whole debate was played out.


Overall, I'm highly disappointed that people seem to agree with you. My problem isn't even whether Connor is right; it's how he argued for his positions. Obviously people around here will mostly agree with him, but that doesn't mean his atrocious performance in the debate will convince anyone else that AI safety is important. It's just preaching to the choir.

So I genuinely don't want to be mean, but this reminds me why I dislike so much of philosophy, including many chunks of rationalist writing.

This whole proposition is based on vibes, and it's obviously false; just for the sake of philosophy, we decide to ignore the "obvious" part and roll with it for fun.

 

The chair I'm sitting on is finite. I may not be able to draw its exact boundary, but I can draw a bounding box around it the size of the planet, and that's still finite.

My life as a conscious being, as far as I know, is finite. It started some years ago, and it will end some number of years in the future. Admittedly I don't have any evidence regarding what happens to qualia after death, but a vibe of infiniteness isn't enough to convince me that I will keep experiencing things forever.

My childhood hamster's life was finite. Sure, the particles are still somewhere in my hometown, but that's no longer my hamster, nor my hamster's life.

A day in my local frame is finite. It lasts about 24 hours, depending on how we define it; to be safe, it's surely contained within 48 hours.

 

This whole thing just feels like... saying things. You can't just say things and assume they're true, or that they even make sense. But apparently you can do that if you just refer to (ideally Eastern) philosophy.

What are the actual costs of running AISC? I participated in it some time ago, and I'm kinda participating this year again (it's complicated). As far as I can tell, the only things required are some amount of organization and maybe a paid Slack workspace. Is this just about salaries for the organizers?

Huh, whaddayaknow: turns out Altman was pushed back in the end, the new interim CEO is someone who is pretty safety-focused, and you were entirely wrong.

 

Normalize waiting for more details before dropping confident hot takes.
