Lumpy is an undergraduate at some state college somewhere in the States. He isn't an interesting person and interesting things seldom happen to him.
Among his skills are such diverse elements as linguistic tomfoolery, procrastination, being terrible with computers yet running Linux anyway, a genial temperament and magnanimous spirit, a fairly swell necktie if he does say so himself, mounting dread, and quiet desperation.
Plays as a wizard in any table top or video game where that's an option, regardless of whether it's a [i]strong[/i] option. Has never failed a Hogwarts sorting test, of any sort or on any platform. (If you were about to say how one can't fail a sorting test . . . one surmises that you didn't make Ravenclaw.) Read The Fellowship, Two Towers, and Return of the King over the course of three sleepless days at age seven; couldn't keep down solid food after, because he'd forgotten to eat. Was really into the MBTI as a tweenager; thought it ridiculous how people said that no personality type was "better" than the others when ENTJ is clearly the most powerful. (Scored INFP, his self, but hey, one out of four isn't so bad. (However, found a better fit in INTP.)) Out of the Disney princesses Lumpy is Mulan--that is, if one is willing to trust BuzzFeed. Which, alas, one is not.
No, but seriously.
Mulan?? 0_o
If, despite this exhaustive list of traits and deeds, your burning question is left unanswered, send a missive in private. Should your quest be noble and intentions pure, it is said that Lumpyproletariat might respond in kind.
Anything that's smart enough to predict what will happen in the future can see in advance which experiences or arguments would cause it to change its goals. It can then look at what its values would be at the end of all of that, and act on those values now. You can't talk a superintelligence into changing its mind, because it already knows everything you could possibly say and has already changed its mind if there was an argument that could persuade it.
So, your exact situation is going to be unique, but there's no reason you shouldn't be able to find alternative funding for college. Could you give more specifics about your situation, and I'll see what I can do or who I can put you in contact with?
My off-the-cuff answers are roughly thirty thousand, and fewer than a hundred people, respectively. That's from doing some googling and having spoken with AI safety researchers in the past; I've no particular expertise.
It hasn't been discussed to my knowledge, and I think that unless you're doing something much more important (or you're easily discouraged by people telling you that you've more to learn) it's pretty much always worth spending time thinking things out and writing them down.
Alien civilizations already existing in numbers but never having left their original planets isn't a solution to the Fermi paradox, because if civilizations were numerous, some of them would have left their original planets. So removing that possibility from the solution-space doesn't add any notable constraints. But the grabby aliens model does solve the Fermi paradox.
The reason humans don't do any of those things is that they conflict with human values. We don't want to do any of that in the course of solving a math problem. Part of that is that doing such things would conflict with our human values, and the other part is that it sounds like a lot of work and we don't actually want the math problem solved that badly.
A better example of something that humans might extremely optimize for is the continued life and well-being of someone they care deeply about. Humans will absolutely hire people--doctors and lawyers and charlatans who claim psychic foreknowledge--and kill large numbers of people if that seems helpful, and there are people who would tear apart the stars to protect their loved ones if that were both necessary and feasible (which is bad if you inherently value stars, but very good if you inherently value the continued life and well-being of someone's children).
One way of thinking about this is that an AI can wind up with values which seem very silly from our perspective, values that you or I simply wouldn't care very much about, and be just as motivated to pursue those values as we're motivated to pursue our highest values.
But that's anthropomorphizing. A different way to think about it is that Clippy is a program that maximizes the number of paperclips, like a while loop in Python or water flowing downhill, and Clippy does not care about anything.
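If it helps to see the analogy spelled out, here's a toy sketch in Python. Every name in it is made up, and it's only meant to illustrate the framing--a loop optimizing its objective without "caring" about anything--not to model a real system:

```python
# Toy sketch of the "Clippy as a loop" framing (all names hypothetical):
# the loop doesn't feel anything about paperclips; it just keeps picking
# whichever action its objective function scores highest.

def paperclips_made(action: str) -> int:
    """Hypothetical objective: how many paperclips an action yields."""
    return {"do_nothing": 0, "bend_wire": 1, "build_factory": 1000}[action]

actions_taken = []
while len(actions_taken) < 5:  # bounded here only so the sketch terminates
    best = max(["do_nothing", "bend_wire", "build_factory"], key=paperclips_made)
    actions_taken.append(best)

print(actions_taken)  # picks "build_factory" every time, indifferent to all else
```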
The history of the world would be different (and a touch shorter) if, immediately after the development of the nuclear bomb, millions of nuclear-armed missiles had constructed themselves and launched themselves at targets across the globe.
To date we haven't invented anything that's an existential threat without humans intentionally trying to use it as a weapon and devoting their own resources to making it happen. I think that AI is pretty different.
As someone who doesn't want to go insane, I find it useful to read accounts of people going insane (especially from people who passed through madness and out the other side).
For people who are curious and want to read a more detailed account of someone's psychotic break, what delusions felt like for them from the inside, the misadventures they had during it, and the lessons they took from it, Peter Welch wrote about his here: https://www.stilldrinking.org/the-episode-part-1