All of Random Trader's Comments + Replies

>There is no evidence that anti-aging is psychologically what's driving the AI race


Sure. As I've said, I'm just speculating. I think it's extremely hard to get evidence for this, since people don't talk openly about it. Those of us who admit publicly that we want to live forever (or indefinitely long) are exceptional cases. Even most people working in longevity will tell you that they don't care about increasing our lifespan, that they just want us to be healthier. Sam Altman will tell the media that he has no interest in living forever, that he j...

According to Eliezer, free will is an illusion, so Shane doesn't really have a choice.

Yeah, but let's be honest; oil barons don't think climate change is going to kill them. Capitalism may produce all sorts of coordination problems, but personal survival is still the strongest incentive. I think Google execs wouldn't hesitate to stop the research if they were expecting a paperclip maximizer.

2lc
I think you're being naive. But it doesn't really matter. Oil barons, in practice, also tend to convince themselves climate change is a hoax, or rationalize their participation away with "if we don't do it somebody else will". That's what the vast majority of Google executives would do if it got to the point where they started worrying a bit, and unfortunately the social pressure isn't sufficient to even drive them there yet.

You've deleted the first part of your comment cause you probably realized it didn't make much sense, but I'm going to answer it anyway. You made a comparison between solving the alignment problem and predicting the price of a stock, and that's just not right. Google execs don't have to solve the alignment problem themselves, they just have to recognize its existence and its magnitude, in the same way that retail investors don't have to build the AGI themselves, they just have to notice that it's going to happen soon.

1lc
I deleted it cause my comment sounds cooler in my head if I leave the explanation out, and also I was tired of arguing. The point I was (maybe poorly) trying to make was that Google execs are not individually incentivized to lobby their company to prevent AGI collapse in the same way that hedge fund managers are incentivized to predict Google's stock price. Those executives are not getting paid to delay AGI timelines, and many are getting paid not to delay AGI timelines. AGI prevention is a coordination problem. Securities pricing is a technical problem. In the same way society is really bad at tax law, or preventing global warming, and really good at video game development, society is really bad at AGI alignment and really good at pricing securities. And thus oil barons continue producing oil and Google continues producing AGI research.

The efficient market hypothesis? Are you serious? So... you're saying the world is going to end and nobody is doing anything to avoid it, but I can't say that a stock is going to appreciate and nobody is buying it.

2lc
Yeah, pretty much. Welcome to Earth.

Who are those latest people? Do you have any examples?

1lc
It's subtle because few people explicitly believe that's what they're doing in their heads; they just agree on doomerism and then perform greed- or prestige-induced rationalizations that what they're doing isn't really contributing. For example, Shane Legg; he'll admit that the chance of human extinction from AGI is somewhere "between 5-50%" but then go found DeepMind. Many people at OpenAI also fit the bill, for varying reasons.

But he's not a doomer like you. Aren't you pissed at everyone who's not a doomer?

1lc
I'm not pissed at the Indian rice farmer who doesn't understand alignment and will be as much of a victim as me when DeepMind researchers accidentally kill me and my relatives. I'm very not pissed at Rohin Shah, who, whatever his beliefs, is making a highly respectable attempt to solve the problem and not contributing to it. I am appropriately angry at the DeepMind researchers who push the capabilities frontier and for some reason err in their anticipation of the consequences. I am utterly infuriated at the people who agree with me about the consequences and decide to help push that capabilities frontier anyways, either out of greed or some "science fiction protagonist" syndrome.

Fair enough. Have you already told Rohin Shah to go fuck himself?

3lc
Don't split hairs. He's an alignment researcher.

Well, Google already has cash to build an AGI 20 times over. I don't think you can blame human extinction on average Joes who buy shares right before the end of the world.

3lc
I can and do blame everyone that invests in Google, especially the ones that do it because of their end-of-the-world research department. My circle of blame actually extends as far as the crypto miners increasing the price of Nvidia GPUs.
3lc
There are some deep theoretical reasons why even conscientiously designed AGI would be antagonistic towards human values. No solution or roadmap to a solution for those problems is known. Deliberately investing in the companies that succeed in pushing the AI capabilities frontier, before it's clear that research won't eventually kill everyone, so we can make a little bit of money in the 10-15 year interim, is probably counterproductive. This is by no means the only example, but if you'd like a good intuitive understanding of the type of thing that can go wrong, Rob Miles did a Computerphile episode you can find here.