A jester unemployed is nobody's fool.
I would probably define AGI first, just because, and I'm not sure about the idea that we are "competing" with automation (which is still just a tool conceptually, right?).
We cannot compete with a hammer, or a printing press, or a search engine. Oof. How to express this? Language is so difficult to formulate sometimes.
If you think of AI as a child, it is uncontrollable. If you think of AI as a tool, of course it can be controlled. I think a corp has to be led by people, so that "machine" wouldn't be autonomous per se…
Guess it's all about defining that "A" (maybe we use an "S" instead, for synthetic, or silicon?).
Well, and I guess defining that "I".
Dang. This is for sure the best place to start. Everyone needs to be as certain as possible (heh) they are talking about the same things. AI itself as a concept is like, a mess. Maybe we use ML and whatnot instead even? Get real specific as to the type and everything?
I dunno but I enjoyed this piece! I am left wondering, what if we prove AGI is uncontrollable but not that it is possible to create? Is "uncontrollable" enough justification to not even try, and more so, to somehow [personally I think this is impossible, but] dissuade people from writing better programs?
I'm more afraid of humans and censorship and autonomous policing and what have you than "AGI" (or ASI).
Yes, it is, because it took like five years to understand minority-carrier injection.
LOL! Gesturing in a vague direction is fine. And I get it. My kind of rationality is for sure in the minority here, I knew it wouldn't be getting updoots. Wasn't sure that was required or whatnot, but I see that it is. Which is fine. Content moderation separates the wheat from the chaff and the public interwebs from personal blogs or whatnot.
I'm a nitpicker too, sometimes, so it would be neat to suss out further why the idea that "everything in some way connects to everything else" is "false" or technically incorrect, as it were. But I probably didn't express what I meant well (really, it's not a new idea; it's maybe as old as questions about trees falling in forests, and about as provable, I guess).
Heh, I didn't even really know I was debating, I reckon. Just kind of thinking, I was thinking. Thus the questioning of ideas or whatnot… but it's in the title, kinda, right? Or at least less wrong? Ha! Regardless, thanks for the gesture(s), and no worries!
I love it! Kind of like Gödel numbers!
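(For anyone who hasn't bumped into Gödel numbering: very roughly, you pack a whole sequence of numbers into a single integer using prime exponents, and you can recover the sequence later by factoring. A toy sketch of my own, purely illustrative; the function names and the use of sympy are just my choices, not anything from the discussion.)

```python
# Toy Gödel-style encoding: a finite sequence of natural numbers becomes
# one integer via the exponents of successive primes, and can be fully
# recovered by factoring that integer. Illustrative only.
from sympy import prime, factorint

def godel_encode(seq):
    """Encode (a1, a2, ..., an) as p1**a1 * p2**a2 * ... * pn**an."""
    n = 1
    for i, a in enumerate(seq, start=1):
        n *= prime(i) ** a  # prime(1) = 2, prime(2) = 3, prime(3) = 5, ...
    return n

def godel_decode(n, length):
    """Read the exponents back off the prime factorization."""
    factors = factorint(n)  # {prime: exponent}
    return [factors.get(prime(i), 0) for i in range(1, length + 1)]

print(godel_encode([2, 0, 1]))  # 2**2 * 3**0 * 5**1 = 20
print(godel_decode(20, 3))      # [2, 0, 1]
```

Nothing deep in the snippet itself; the neat part is that the mapping is reversible, so facts about sequences can be restated as facts about single numbers.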
I think we're sorta saying the same thing, right?
Like, you'd need to be "outside" the box to verify these things, correct?
So we can imagine potential connections (I can imagine a tree falling, and making sound, as it were), but unless there is some type of real reference (say the realities intersect, or there's a higher dimension, or we see light/feel gravity or what have you), they don't exist from "inside", no?
Even imagining things connects or references them to some extent… that's what I meant about unknown unknowns (if I didn't edit that bit out)… even if that does go to extremes.
Does this reasoning make sense? I know defining existence is pretty abstract, to say the least. :)
My point is that complexity, no matter how objective a concept, is relative. Things we thought were "hard" or "complex" before turn out not to be so much, now.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure "human alignment[1]", they will also work for "AI alignment" (tho mostly I think there is a wide, wide berth betwixt the two, and the latter can only exist after the former).
We like to think we're so much smarter than the humans that came before us, and that things — society, relationships, technology — are so much more complicated than they were before, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it's going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
[1] assuming alignment is possible: "how strong of a consensus is needed?" etc.
For something to "exist", it must relate, somehow, to something else, right?
If so, everything relates to everything else by extension, and to some degree, thus "it's all relative".
Some folk on LW have said I should fear Evil AI more than Rogue Space Rock Collisions, and yet, we keep having near misses with these rocks that "came out of nowhere".
I'm more afraid of humans humaning, than of sentient computers humaning.
Is not the biggest challenge we face the same as it has been— namely spreading ourselves across multiple rocks and other places in space, so all our eggs aren't on a single rock, as it were?
I don't know. I think so. But I also think we should do things in as much of a group as possible, and with as much free will as possible.
If I persuade someone, did I usurp their free will? There's strength in numbers, generally, so the more people you persuade, the more people you persuade, so to speak. Which is kind of frightening.
What if the "bigger" danger is the Evil AI? Or Climate Change? Or Biological Warfare? Global Nuclear Warfare would be bad too. Is it our duty to try to organize our fellow existence-sharers, and align them with working towards idea X? Is there a Root Idea that might make tackling All of the Above™ easier?
Is trying to avoid leadership a cop-out? Are the ideas of free will, and group alignment, at odds with each other?
Why not just kick back and enjoy the show? See where things go? Because as long as we exist, we somehow, inescapably, relate? How responsible is the individual, really, in the grand scheme of things? And is "short" a relative concept? Why is my form so haphazard? Can I stop this here[1]?
Regarding "all things being equal" / ceteris paribus, I think you are correct (assuming I'm interpreting this last bullet-point as intended) in that it "binds" a system in ways that "divorce it from reality" to some extent.
I feel like this is a given, but also that since the concept exists on a "spectrum of isolation", the ones that are closer to the "impossible to separate" edge necessarily skew/divorce from reality further.
I'm not sure if I've ever explicitly thought about that feature of this cognitive device— and it's worth explicitly thinking about! (You might be meaning something else, but this is what I got out of it.)
As for this overall article, it is [what I find to be] humorous satire, so it's more anti-value than value, if you will.
It pokes fun at the idea that we should fear[1] intelligence, which seems to be an overarching theme of many of the "AI safety" posts on LessWrong, and which I find highly ironic and humorous, as so many people here seem to feel (and no small number literally express) that they are more intelligent than the average person (some say it is "society" expressing it, versus themselves, per se, but still).
Thus, to some extent, this "intelligence is dangerous" sentiment is a bit of ego puffery as well…
But to address the rest of your comment, it's cool that you keyed into the "probably dangerous" title element, as yes, it's not just how bad a thing could be, but how likely it is to happen, that we use to assess risks and determine if they are "worth" taking.
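Spelled out in the usual back-of-the-envelope way (the numbers here are made up on my end, purely to illustrate the shape of it):

\[
\text{expected harm} \;\approx\; P(\text{event}) \times \text{severity}
\]
\[
0.5 \times 100 = 50 \qquad \text{vs.} \qquad 0.001 \times 1{,}000{,}000 = 1{,}000
\]

So a "probably dangerous" thing can still matter less than an "improbably catastrophic" one, or more, depending entirely on the numbers you plug in.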
Does increased intelligence bring increased capability for deception?
It is so hard to separate things! (To hark back a little, lol)
I can't help but think there is a strange relationship here. Take Mutually Assured Destruction, for instance: at some point, the capability is so high it appears to limit not only the probability, but the capability itself!
I think I will end here, as the M.A.D. angle has me pondering semantics and whatnot… but thanks for the impetus to post!
[1] whatever terminology you prefer that conveys "intelligence" as a pejorative