All of Dem_'s Comments + Replies

Dem_*30

I think it’s an amazing post, but it seems to suggest that AGI is inevitable, which it isn’t. Narrow AI will help humanity flourish in remarkable ways, and many are waking up to the concerns of EY and agreeing that AGI is a foolish goal.

This article promotes a steadfast pursuit of, or acceptance of, AGI, and suggests that it will likely be for the better.

Perhaps, though, you could join the growing number of people who are calling for a halt on new AGI systems well beyond ChatGPT?

This is a perfectly fine response and one that will eliminate your fears if you are to succ... (read more)

1DivineMango
Do you see acceptance as it's mentioned here as referring to a stance of "AGI is coming, we might as well feel okay about it", or something else?
Dem_*10

One of the best replies I’ve seen, and it calmed many of my fears about AI. My pushback is this: the things you list below as reasons to justify advancing AGI are either already solvable with narrow AI, or are not solution problems but implementation and alignment problems.

“dying from hunger, working in factories, air pollution and other climate change issues, people dying on roads in car accidents, and a lot of deceases that kill us, and most of us (80% worldwide) work in a meaningless jobs just for survival. “

Developing an intelligence that has 2-5x general human... (read more)

Dem_10

Are the AI scientists you know pursuing AGI, or more powerful narrow AI systems?

As someone who is new to this space, I’m simply trying to wrap my head around the desire to create AGI, which could be intensely frightening and dangerous to the developer of such a system.

I mean, not that many people are hell-bent on finding the next big virus or developing the next weapon, so I don’t see why AGI is as inevitable as you say it is. Thus I suppose developers of these systems must have a firm belief that there are very few dangers attached to developing a syste... (read more)

3Nathan Helm-Burger
There are a lot of groups pursuing AGI. Some claiming that they are doing so with the goal of benefiting humanity, some simply in pursuit of profit and power. Indeed, the actors I personally am most concerned about are those who are relatively selfish and immoral as well as self-confident and incautious, and sufficiently competent to at least utilize and modify code published by researchers. Those who think they can dodge or externalize-to-society the negative consequences and reap the benefits, who don't take the existential risk stuff seriously. You know what I mean. The L33T |-|ACKZ0R demographic.
3dr_s
I don't personally work in AI. But OpenAI, for example, states clearly in its own goals that it aims to build AGI, and Sam Altman wrote a whole post called "Moore's Law for Everything" in which he outlines his vision for an AGI future. I consider it naïve nonsense, personally, but the drive seems to be simply the idea of a utopian world of abundance, with technological development going faster and faster as AGI makes itself smarter. EDIT: sorry, didn't realise you weren't replying to me, so my answer doesn't make a lot of sense. Still, gonna leave it here.
Dem_66

Thanks for writing this. I had in mind to express a similar view but wouldn’t have expressed it nearly as well.

In the past two months I’ve gone from over-the-moon excited about AI to deeply concerned.

This is largely because I misunderstood the sentiment around superintelligent AGI.

I thought we were on the same page about utilizing narrow LLMs to help us solve problems that plague society (e.g., protein folding). But what I saw cluttered on my timeline and clogging the podcast airwaves was utter delight at how much closer we are to having an AGI some 6-10... (read more)

2dr_s
I actually have a theory about this, which I will probably write my next post on. I think people mix up different things in the concept of "work," and that's how we get these contradictory impulses. I also think this is relevant to concepts of alignment.