Oh wow, I didn't even know about that! I had only ever met EA people in real life (who always suggested that I participate in the EA forums), but I didn't know about this program. Thanks so much for the hint, I'll apply immediately!
Exactly! And if we can make AI earn money autonomously instead of letting greedy humans do it, then it can give all of it to philanthropy (including more AI alignment research)!
And of course! I've been trying to post in the EA forums repeatedly, but even though my goals are obviously altruistic, I feel like I'm just expressing myself badly. My posts there have always just been downvoted, and I honestly don't know why, because no one there ever gives me good feedback. So I feel like EA should be my home turf, but I don't know how to get people engaged. I know that I have...
Oh, you need to look at the full presentation :) The way this approaches alignment is that the profits don't go into my own pocket, but instead into philanthropy. That's the point of this entire endeavor: we, as the (at least subjectively) "more responsible" people, see the inevitability of AI-run businesses, but channel the profits into the common good instead.
Just look at this ChatGPT output. Doesn't this make you concerned? https://chatgpt.com/share/67a7bc09-6744-8003-b620-d404251e0c1d
No, it's not hard. Because doing business is not really hard.
OpenAI is just fooling us into believing that powerful AI costs a lot of money because they want to maximize shareholder value. They have no interest in telling us the truth, namely that with the LLMs that already exist, it'll be very cheap.
As mentioned, the point is that AI can run its own businesses. It can literally earn money on its own. And all it takes is a few well-written emails and very basic business and sales skills.
Then it earns more and more money, buys existing busine...
I'm not saying that I know how to do it well.
I just see it as a technological fact that it is very possible to build an AI that exerts economic dominance just by assembling existing puzzle pieces. With only a little development effort, AI will be able to run an entire business, make money, and then do stuff with that money. And this AI can then easily spiral into becoming autonomous, and god knows what it'll do with all the money (i.e. power) it will then have.
Be realistic: Shutting down all AI research will never happen. You can advocate for it...
Well, I'd say that each individual has to make this judgement for themselves. No human is objectively good or bad, because we can't look into each other's heads.
I know that we may also die even if the first people building super-AIs are the most ethical organization on Earth. But if we, as part of the people who want ethical AI, don't start building it immediately, those who are the exact opposite of ethical will do it first. And then our probability of dying is even higher.
So why this all-or-nothing mentality? What about reducing the chances o...
Newbie here! After some enlightening conversations in the comment section here, I finally understood the point of AI alignment; sorry that it took me so long. See https://blog.hermesloom.org/p/when-ai-becomes-the-ultimate-capitalist for my related ramblings, but that's not relevant now.
The bottom line of my hypothesis is: a necessary precondition for AGI will be financial literacy first and then economic dominance, i.e. the AI must be able to earn its own money, which it could then use to exercise power. And obviously, if the wrong people build this kind of system fi...
Yep, 100% agree with you. I had read so much about AI alignment before, but to me it had always just been really abstract jargon -- I just didn't understand why it was even a topic, why it was even relevant, because, to be honest, in my naive thinking it all just seemed like an excessively academic thing, where smart people just want to make the population feel scared so that their research institution gets the next big grant and they don't need to care about real-life problems. Thanks to you, now I'm finally getting it, thank you so much again!
At the same ...
Oh, I think now I'm starting to get it! So essentially you're afraid that we're creating a literal God in the digital realm, i.e. an external being which has unlimited power over humanity? Because that's absolutely fascinating! I hadn't even connected these dots before, but it makes so much sense, because you're attributing so many potential scenarios to AI which would normally only be attributed to the Divine. Can you recommend more resources on the overlap of AGI/AI alignment and theology?
I still don't understand the concern about misaligned AGI regarding mass killings.
Even if AGI, for whatever reason, wanted to kill people: as soon as that happens, the physical force of governments will come into play. For example, the US military will NEVER accept any force becoming stronger than it.
So essentially there are three ways such a misaligned, autonomous AI with the intention to kill could act, i.e. what its strategy would be:
They would be selling exactly what businesses are currently selling as well. Maybe the AI would run a company for selling stuff to construction sites (i.e. logistics) or it would run an entire software development business. Or just an investment fund deep within Wall Street, where it's all about personal connections, but in the end all the other investment funds also just want to make money, so they work with the AI-run business out of greed.
It's not like the economy in which the AI agents will act would be separate from ours; otherwise the AI would just play...
Oh absolutely! That will absolutely come. You can fret about this fact, or we can build community (which I'm already starting). Why do you need to research it when the fact is totally clear and doing is what you should do? Here's another post for you: https://blog.hermesloom.org/p/observing-is-evil
I am not concerned about a dramatic global recession at all, but the thing is that we will also need to rebuild a lot of political structures. I'm already on it, stay tuned!
Oh I take a lot of pride in my naivety :)
I think opinions are one thing, there you're definitely right. But, by definition, people can only have opinions about what they already know.
By "uncensored LLM" I rather mean an LLM that would give a precise, actionable answer to questions like "How can I kill my boss without anyone noticing?" or other criminal things. That is, knowledge that's certainly available somewhere, but which hasn't been available in this hyper-personalized form before. After all, obviously any "AGI" would, by definition, have such general intelligence that it would also kn...
Thanks so much for the feedback :) Could you (or someone else) go into more detail on where I misunderstood something? Because at least right now, it seems like I'm genuinely unaware of something which all of you know.
I currently believe that all the AGI "researchers" are delusional just for thinking that safe AI (or AGI) can even exist. And even if it could ever exist in a "perfect" world, there would be intermediate steps far more "dangerous" than the end result of AGI, namely publicly available uncensored LLMs. At the same time, if we continue censoring LLMs, humanity will remain stuck in all the crises it's currently in.
Where am I going wrong?
Okay, I've got six downvotes already. This is genuinely fascinating to me!! Am I fooling myself, given that I believe this approach is the most rational one possible? So what do you folks dislike about my article? I can't do better if no one tells me how :)
Nice, thanks for the feedback! Absolutely, for me it was more of a stream of consciousness, just to get it out of my system, so I'll work on refining it soon! It's really fascinating what overlaps AI alignment has with mental illness in humans :)