I would probably define AGI first, just to ground things, and I'm not sure about the idea that we are "competing" with automation (which is conceptually still just a tool, right?).
We cannot compete with a hammer, or a printing press, or a search engine. Oof. How to express this? It's so difficult to put into words sometimes.
If you think of AI as a child, it is uncontrollable. If you think of AI as a tool, of course it can be controlled. I think a corp has to be led by people, so that "machine" wouldn't be autonomous per se…
Guess it's ...
The transistor is a neat example.
No, it's not, because we have a pretty good idea of how transistors work; in fact, someone had to directly anticipate how they might work in order to engineer them. The "unknown" part about the deep learning models is not the network layer or the software that uses the inscrutable matrices; it's how the model is getting the answers that it does.
LOL! Gesturing in a vague direction is fine. And I get it. My kind of rationality is for sure in the minority here, I knew it wouldn't be getting updoots. Wasn't sure that was required or whatnot, but I see that it is. Which is fine. Content moderation separates the wheat from the chaff and the public interwebs from personal blogs or whatnot.
I'm a nitpicker too, sometimes, so it would be neat to suss out further why the (hardly new) idea that "everything in some way connects to everything else" is "false" or technically incor...
I love it! Kind of like Gödel numbers!
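(Tangent, but since Gödel numbers came up: here's a minimal sketch in Python of the basic trick, packing a whole sequence of symbol codes into one integer via prime exponents, so that statements can be referenced, and related, as plain numbers. Using sympy's prime() is just my convenience choice; any source of primes works.)

```python
# Minimal Godel-numbering sketch: encode a sequence of positive
# integer "symbol codes" as 2^c1 * 3^c2 * 5^c3 * ...
from sympy import prime  # prime(i) returns the i-th prime (prime(1) == 2)

def godel_encode(codes):
    """Pack a sequence of positive integers into a single integer."""
    n = 1
    for i, c in enumerate(codes, start=1):
        n *= prime(i) ** c
    return n

def godel_decode(n):
    """Recover the sequence by reading off each prime's exponent."""
    codes, i = [], 1
    while n > 1:
        p, c = prime(i), 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
        i += 1
    return codes

assert godel_decode(godel_encode([3, 1, 4])) == [3, 1, 4]
```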
I think we're sorta saying the same thing, right?
Like, you'd need to be "outside" the box to verify these things, correct?
So we can imagine potential connections (I can imagine a tree falling, and making sound, as it were) but unless there is some type of real reference, say the realities intersect, or there's a higher dimension, or we see light/feel gravity or what have you, they don't exist from "inside", no?
Even imagining things connects or references them to some extent… that's what I meant about unknown ...
My point is that complexity, no matter how objective a concept, is relative. Things we thought were "hard" or "complex" before turn out not to be so much, now.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we see some patterns that work to ensure "human alignment[1]", they will also work for "AI alignment" (tho mostly I think there is a wide, wide berth betwixt the two, and the latter can only exist after the former).
We like to think we're so much smarter than the humans that came before us, and...
For something to "exist", it must relate, somehow, to something else, right?
If so, everything relates to everything else by extension, and to some degree, thus "it's all relative".
Some folk on LW have said I should fear Evil AI more than Rogue Space Rock Collisions, and yet, we keep having near misses with these rocks that "came out of nowhere".
I'm more afraid of humans humaning, than of sentient computers humaning.
Is not the biggest challenge we face the same as it has been— namely spreading ourselves across multiple rocks and other places in space, so al...
It's a weird one to think about, and perhaps paradoxical. Order and chaos are flip sides of the same coin, with some amorphous third as the infinitely varied combinations of the two!
The new patterns are made from the old patterns. How hard is it to create something totally new, when it must be created from existing matter, or existing energy, or existing thoughts? It must relate, somehow, or else it doesn't "exist"[1]. That relation ties it down, and by tying it down, gives it form.
For instance, some folk are mad at computer-assisted ...
Contributes about as much as a "me too!" comment.
"I think this is wrong and demonstrating flawed reasoning" would be more a substantive repudiation with some backing as to why you think the data is, in fact, representative of "true" productivity values.
This statement makes a lot more sense as a brief explanation than your "sounds like cope" rejoinder:
Being extremely skeptical of sweeping claims based on extrapolations from GDP metrics seems like a prudent default.
You don't have to look far to see people, um, not exactly satisfied with how...
Illustrative perhaps?
Am I wrong re: Death? Have you personally feared it all your life?
Frustratingly, all I can speak from is my own experience, and what people have shared with me, and I have no way to objectively verify that anything is "true".
I am looking at reality and saying "It seems this way to me; does it seem this way to you?"
That— and experiencing love and war &c. — is maybe why we're "here"… but who knows, right?
Signals, and indeed, opposites, are an interesting concept! What does it all mean? Yin and yang and what have you…
Would you agree that it's hard to be scared of something you don't believe in?
And if so, do you agree that some people don't believe in death?
Like, we could define it at the "reality" level of "do we even exist?" (which I think is apart from life & death per se), or we could use the "soul is eternal" one, but regardless, it appears to me that lots of people don't believe they will die, much less contemplate it. (Perhaps we...
"sounds like cope"? At least come in good faith! Your comments contribute nothing but "I think you're wrong".
Several people have articulated problems with the proposed way of measuring — and/or even defining — the core terms being discussed.
(I like the "I might be wrong" nod, but it might be good to note as well how problematic the problem domain is. Econ in general is not what I'd call a "hard" science. But maybe that was supposed to be a given?)
Others have proposed better concrete examples, but here's a relative/abstract bit via ...
I'm familiar with AGI, and the concepts herein (why the OP likes the proposed definition of CT better than PONR); it was just a curious post, what with having "decisions in the past cannot be changed" and "does X concept exist" and all.
I think maybe we shouldn't muddy the waters more than we already have with "AI" (like AGI is probably a better term for what was meant here— or was it? Are we talking about losing millions of call center jobs to "AI" (not AGI) and how that will impact the economy/whatnot? I'm not sure if that's transformatively u...
LOL! Yeah I thought TAI meant
TAI: Threat Artificial Intelligence
The acronym was the only thing I had trouble following, the rest is pretty old hat.
Unless folks think "crunch time" is something new having only to do with "the singularity" so to speak?
If you're serious about finding out if "crunch time" exists[1] or not, as it were, perhaps looking at existing examples might shed some light on it?
even if only in regard to AGI
I'd toss software into the mix as well. How much does it cost to reproduce a program? How much does software increase productivity?
I dunno, I don't think the way the econ numbers are portrayed here jibes with reality. For instance:
"And yet, if I had only said, “there is no way that online video will meaningfully contribute to economic growth,” I would have been right."
doesn't strike me as a factual statement. In what world has streaming video not meaningfully contributed to economic growth? At a glance it's a ~$100B industr...
Traditionally it's uncommon (or should be) for youth to have existential worries, so I don't know about "cradle to grave"[1], tho external forces are certainly "always" concerned with it, which means perhaps the answer is "maybe"?
There's the trope that some of us act like we will never die… but maybe I'm going too deep here? Especially since what I was referring to was more a matter of feeling "obsolete", or being replaced, which is a bit different from existential worries in the mortal sense[2].
I think this is different from the Luddite feelings b...
It seems like the more things change, the more they stay the same, socially.
Complexity is more a problem of scope and focus, right? Like even the most complex system can be broken down into smaller, less complex pieces— I think? I guess anything that needs to take into consideration the "whole", if you will, is pretty complex.
I don't know if information itself makes things more complex. Generally it does the opposite.
As long as you can organize it I reckon! =]
It's neat that this popped up for me! I was just waxing poetic (or not so much) about something kind of similar the other day.
The words we use to describe things matter. How much, is of course up for debate, and it takes different messages to make different people "understand" what is being conveyed, as "you are unique; just like everyone else", so multiple angles help cover the bases :)
I think using the word "reward" is misleading[1], since it seems to have sent a lot of people reasoning down paths that aren't exactly in the direction of the meaning in...
I get the premise, and it's a fun one to think about, but what springs to mind is
Phase 1: collect underpants
Phase 2: ???
Phase 3: kill all humans
As you note, we don't have nukes connected to the internet.
But we do use systems to determine when to launch nukes, and our senses/sensors are fallible, etc., which we've (barely, almost suspiciously "barely", if you catch my drift[1]) managed not to interpret in a manner that would have changed the season to "winter: nuclear style".
Really I'm doing the same thing as the alignment debate is on about, but about the...
Do we all have the same definition of what AGI is? Do you mean being able to, um, mimic the things a human can do, or are you talking full-on Strong AI, sentient computers, etc.?
Like, if we're talking The Singularity, we call it that because all bets are off past the event horizon.
Most of the discussion here seems to sort of be talking about weak AI, or the road we're on from what we have now (not even worthy of actually being called "AI", IMHO; ML at least is a less overloaded term) to true AI, or the edge of that horizon line, as it were.
When you said "the ...
Saying ChatGPT is "lying" is an anthropomorphism— unless you think it's conscious?
The issue is instantly muddied when using terms like "lying" or "bullshitting"[1], which imply levels of intelligence simply not in existence yet. Not even with models that were produced literally today. Unless my prior experiences and the history of robotics have somehow been disconnected from the timeline I'm inhabiting. Not impossible. Who can say. Maybe someone who knows me, but even then… it's questionable. :)
I get the idea that "Real ...
I like that you have reservations about whether we're even powerful enough to destroy ourselves yet. Often I think "of course we are! Nukes, bioweapons, melting ice!", but really, there's no hard proof that we even can end ourselves.
It seems like the question of human regulation would be the first question, if we're talking about AI safety, as the AI isn't making itself (the egg comes first). Unless we're talking about some type of fundamental rules that exist a priori. :)
This is what I've been asking and so far not finding any satisfactory an...
It must depend on levels of intelligence and agency, right? I wonder if there is a threshold for both of those in machines and people that we'd need to reach for there to even be abstract solutions to these problems? For sure with machines we're talking about far past what exists currently (they are not very intelligent, and do not have much agency), and it seems that while humans have been working on it for a while, we're not exactly there yet either.
Seems like the alignment would have to be from micro to macro as well, with constant communica...
It might be fun to pair Humankind: A Hopeful History with The Precipice, as both have been suggested reading recently.
It seems to me that we are, as individuals, getting more and more powerful. So this question of "alignment" is a quite important one— as much for humanity, with the power it currently has, as for these hypothetical hyper-intelligent AIs.
Looking at it through a Sci-Fi AI lens seems limiting, and I still haven't really found anything more than "the future could go very very badly", which is always a given, I think.
I've read those papers...
It seems to me that a lot of the hate towards "AI art" is that it's actually good. It was one thing when it was abstract, but now that it's more "human", a lot of people are uncomfortable. "I was a unique creative, unlike you normie robots who don't do teh art, and sure, programming has been replacing manual labor everywhere, for ages… but art isn't labor!" (Although getting paid seems to play a major role in most people's reasoning about why AI art is bad; here's to hoping for UBI!)
I think they're mainly uncomfortable because the math works...
Oh snap, I read and wrote "sarcasm" but what I was trying to do was satire.
Top-down control is less fragile than ever, thanks to our technology, so I really do fear people reacting to AI the way they generally do to terrorist attacks: with Patriot Acts and other "voluntary" surrenders of freedom.
I've had people I respect literally say "maybe we need to monitor all compute resources, Because AI", suggest we register all GPU and TPU chips so we Know What People Are Doing With Them, and somehow add watermarks to all "AI" output. Just nuts s...
I think the human has to have the power first, logically, for the AI to have the power.
Like, if we put a computer model in charge of our nuclear arsenal, I could see the potential for Bad Stuff. Beyond all the movies we have of just humans being in charge of it (and the documented near-catastrophic failures of said systems, which could have potentially made the Earth a Rough Place for Life for a while). I just don't see us putting anything besides a human's finger on the button, as it were.
By definition, if the model kills everyone instea...
I haven't seen anything even close to a program that could, say, prevent itself from being shut off, which is a popular thing to ruminate on of late (I read the paper that had the "press" maths =]).
What evidence is there that we are close (even within 50 years!) to achieving conscious programs, with their own will, and the power to effect it? People are seriously contemplating programs sophisticated enough to intentionally lie to us. Lying is a sentient concept if ever there was one!
Like, I've seen Ex Machina, and Terminator, and Electric Dreams,...
Oh, hey, I hadn't noticed I was getting downvoted. Interesting!
I'm always willing to have true debate— or even false debate if it's good. =]
I'm just sarcasming in this one for fun and to express what I've already been expressing here lately in a different form or whatnot.
The strong proof is what I'm after, for sure, and more interesting/exciting to me than just bypassing the hard questions to rehash the same old same old.
Imagine what AI is going to show us about ourselves. There is nothing bad or scary there, unless we find "the truth" bad and ...
Since we're anthropomorphizing[1] so much: how do we align humans?
We're worried about AI getting too powerful, but logically that means humans are getting too powerful, right? Thus what we have to do to cover question 1 (how), regardless of question 2 (what), is control human behavior, correct?
How do we ensure that we churn out "good" humans? Gods? Laws? Logic? Communication? Education? This is not a new question per se, and I guess the scary thing is that, perhaps, it is impossible to ensure that literally ev...
Perspective is powerful. As you say, one person's wonderful is another person's terrible. Heck, maybe people even change their minds, right? Oof! "Yesterday I was feeling pretty hive-mindy, but today I'm digging being alone, quote unquote", as it were.
Maybe that's already the reality we inhabit. Perhaps, we can change likes and dislikes on a whim, if we, um, like.
Holy moley! What if it turns out we chose all of this?!? ARG! What if this is the universe we want?!
I guess what I'm getting at is that those tracks are jumping the gun, so to speak.
Like, what if the concept of alignment itself is the dangerous bit? And I know I have seen this elsewhere, but it's usually in the form of "we shouldn't build an AI to prevent us from building an AI because duh, we just build that AI we were worried about"[1], and what I'm starting to wonder is, maybe the danger is when we realize that what we're talking about here is not "AI" or "them", but "humans" and "us".
We have CRISPR and other powerful tech that allow a single "m...
Nice! I read a few of the stories.
This is more along the lines I was thinking. One of the most fascinating aspects of AI is what it can show us about ourselves, and it seems like many people either think we have it all sorted out already, or that sorting it all out is inevitable.
Often (always?) the only "correct" answer to a question is "it depends", so thinking there's some silver bullet solution to be discovered for the preponderance of ponderance consciousness faces is, in my humble opinion, naive.
Like, how do we even assign meaning t...
Thanks for the links!
As far as what I was wondering goes, I see more interesting things in the comments than in the posts themselves, as the posts all seem to assume we've sorted out some super basic stuff that I don't know humans have sorted out yet, such as whether there is an objective "good", etc. Those seem like rather necessary things to suss out before trying to hew to them, be it for us or the AIs we create.
I get the premise, and I think Science Fiction has done an admirable job of laying it all out for us already, and I guess I'm just a bit confused as to whether we're writing fiction here or trying to be non-fictional?
How do we ensure that humans are not misaligned, so to speak?
The crux, to me, is that we've developed all kinds of tech that one person alone can use to basically wipe out everyone. Perhaps I'm being overly optimistic (or pessimistic, depending on perspective), but no one can deny that individuals are currently the most powerful they have ever been, and there is no sign of that slowing down.
Mostly I believe this is because of information.
So the only real solution I can see, is some type of thought police, basically, be it for humans or AI.[1...
I don't see how we could have a "the" AGI. Unlike humans, AI doesn't need to grow copies. As soon as we have one, we have legion. I don't think we (humanity as a collective) could manage one AI, let alone limitless numbers, right? I mean this purely logistically, not even in a "could we control it" way. We have a hard time agreeing on stuff, which is alluded to here with the "value" bit (forever a great concept to think about), so I don't have much hope for some kind of "all the governments in the world coming together to mana...
Ironically this still seems pretty pessimistic to me. I'm glad to see something other than "AHHH!" though, so props for that.
I find it probably more prudent to worry about a massive solar flare, or an errant astral body collision, than to worry about "evil" AI taking a "sharp turn".
I put quotes around evil because I'm a fan of Nietzsche's thinking on the matter of good and evil. Like, what, exactly are we saying we're "aligning" with? Is there some universal concept of good?
Many people seem to dismiss blatant problems with the base premis...
So can you control emotion with rationality, or can't you? "There's more fish in the sea" seems like classic emotion response control. Or maybe it's that "emotion" vs. "feelings" idea— one you have control of, and one you do not? Or it's the reaction you can control, not the emotion itself?
Having to "take a dream out behind the woodshed", as it were, is part of becoming a whole person I guess, but it's, basically by definition, not a pleasant experience. I reckon that's by design, as sometimes, reality surprises you.
I think it boils...
I'm going to guess it's like mumble Resource Organization: something you'd like to "farm out" some work to rather than have them on payroll and in meetings, as it were. Window Washers or Chimney Sweeps mayhap?
Just a guess, and I hope I'm not training an Evil AI by answering this question with what sprang to mind from the context.
Regarding "all things being equal" / ceteris paribus, I think you are correct (assuming I'm interpreting this last bullet-point as intended) in that it "binds" a system in ways that "divorce it from reality" to some extent.
I feel like this is a given, but also that since the concept exists on a "spectrum of isolation", the ones that are closer to the edge of "impossible to separate" necessarily skew/divorce reality further.
I'm not sure if I've ever explicitly thought about that feature of this cognitive device, and it's worth explicitly thinking about!