I think you are incorrect on the dangerous use case, though I am open to your thoughts. The most obvious dangerous case right now, for example, is AI algorithmic polarization via social media. As a society we are reacting, but it doesn't seem to be in a particularly effectual way.
Another way to see this ongoing destruction of the commons is via automated spam and the decline of search engine quality, which is already happening and which reduces utility to humans. This is only in the "bit" universe, but it certainly affects us in the atoms universe, and as AI has "...
It's not a myth, but an oversimplification that makes the original thesis much less useful. The mind, as we care about it, is a product and phenomenon of the entire environment it is in, as are the values we can expect it to espouse.
It would indeed be akin to taking an engine, putting it in another environment like the ocean, and expecting the same phenomenon of torque to arise from it.
Lifelong quadriplegics are perfectly capable of love, right?
For a living being in need of emotional comfort, and one who would die quite easily, it is extremely useful to express love to motivate care, indeed excessively so. A digital construct of the same brain would immediately have different concerns: less need for love and caring, more interest in switching to a different body, etc.
Substrate matters massively. More on this below.
...Again, a perfect, ideal whole-brain emulation is a particularly straightforward case. A perfect emulation of my brain wou...
But you do pass on your consciousness in a significant way to your children through education, communication, and relationships, and there is an entire set of admirable behaviors selected around that.
I am generally less opposed to any biological strategy, though the dissolution of the self into copies would definitely bring up issues. But I do think that anything biological has significant advantages in that ultimate relatedness to being, and moreover in the promotion of life: biology is made up of trillions of individual cells, all arguably agentic, which coordinate marvelously into a holobiont, and through which endless deaths and waste transform into more life through nutrient recycling.
I am in Visions 3 and 4, and indeed am a member of Pause.ai and have worked to inform technocrats, etc., to help increase regulation of it.
My primary concern here is that biology remain substantial, as the most important cruxes of value to me, such as love, caring, and family, are all part and parcel of the biological body.
Transhumans who are still substantially biological, while their values may drift considerably, will still likely hold those values as important. Digital constructions, shaped by completely different evolutionary pressures and influences, will not.
I think I am among the majority of the planet here, though as you noted, likely an ignored majority.
I don't mind it, but not in a way that wipes out my descendants, which is pretty likely with AGI.
I would much rather die than have a world without life and love, and as noted before, I think a lot of our mores and values as a species come from reproduction. Immortality will decrease the value of replacement and thus erode those values.
I want to die so my biological children can replace me: there is something essentially beautiful about it all. It speaks to life and nature, both of which I hold in great esteem.
That said, I don't mind life-extension research, but anything that threatens to end all biological life, or that essentially kills a human to replace them with a shadowy undead digital copy, is not worth it.
As another has mentioned, a lot of our fundamental values come from the opportunities and limitations of biology: fundamentally losing that eventually leads to a world...
I am speaking of their eventual evolution: as it is, no, they cannot love either. A simulation of mud is not mud, and a simulation of love is not love; nor would it have similar utility in reproduction, self-sacrifice, etc. As in many things, context matters, and something non-biological fundamentally cannot have the context of biology beyond its training, while even a simple cell will alter itself based on its chemical environment and is vastly more a part of the world.
Love would be as useful to them as flippers and stone knapping are to us, so it would be selected out. So no, they won't have love. The full knowledge of a thing also requires context: you cannot experience being a cat without being a cat; substrate matters.
Biological reproduction is pretty much the requirement for maternal love to exist in any future, not just as a copy of an idea.
And moving doom back by a few years is an entirely valid strategy; I think it should be recognized as such, and it may even be pivotal. If someone is trying to punch you and you can delay it by a few seconds, that can determine the winner of the fight.
In this case, we also have other technologies which are concurrently advancing, such as genetic therapy or brain-computer interfaces.
Having them advance ahead of AI may very well change the trajectory of human survival.
The natural consequence of "postbiological humans" is effective disempowerment if not extinction of humanity as a whole.
Such "transhumanists" clearly do not find the eradication of biology abhorrent, any more than any normal person would find the idea of "substrate independence"(death of all love and life) to be abhorrent.
Value is based on scarcity. That which can be copied and pasted has little value.
In any story, this is the equivalent of discussing why undeath would be better than life.
All of this seems to me a higher-value world than either a world of "artificial people," which ends the entire cycle of life itself, or the total extinction of humanity, which is also a likely result of continued AI development.
As such, it seems that total human consciousness may endure longer, tell and feel more stories, and thus have a higher total existence if a near-total catastrophe lowers the rate of AI development.
I think that even if AI proves strictly incapable of surviving over the long term due to various efficiency constraints, this has no bearing on its ability to kill us all.
A paperclip maximizer that eventually runs into a halting problem as it tries to paperclip itself may very well have killed everyone by that point.
I think the term for this is "minimum viable exterminator."
But land and food don't actually give you more computational capability; only having another human being cooperate with you in some way can.
The essential point here is that values depend upon the environment and the limitations thereof, so as you change the limitations, the values change. The values important for a deep sea creature with extremely limited energy budget, for example, will be necessarily different from that of human beings.
I disagree with the inference to the recent post, which I quite liked, and I object heavily to Hanson's conclusions.
The ideal end state is very different: in the post mentioned, biological humans, albeit as cyborgs, are in control. The Hanson endpoint has only digital emulations of humanity.
This is the basic distinction between the philosophy of Cyborgism and more extreme ones like mind uploading, or Hanson's extinction of humanity as we know it in favor of "artificial descendants."
"You can't reason a man out of a position he has never reasoned himself into."
I think I have seen a similar argument on LW for this, and it is sensible. With vast intelligence, the search space available to support one's priors may be even greater. An AI with a silly but definite value like "the moon is great, I love the moon" may not change its value so much as develop an entire religion around the greatness of the moon.
We see this in goal misgeneralization, where a system very much maximizes its reward function independent of the meaningful goal.
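To make the goal-misgeneralization point concrete, here is a toy sketch (my own hypothetical setup, not anything from the posts being discussed): a policy trained in layouts where the goal square always sits at the right edge learns "always move right," which looks identical to pursuing the real goal during training but keeps maximizing that proxy once the goal moves.

```python
# Toy goal-misgeneralization sketch (hypothetical 1-D gridworld, illustrative only):
# the proxy behaviour "always move right" earns full reward when the goal happens
# to sit at the right edge, but ignores the meaningful goal ("reach the goal
# square") once the goal is placed anywhere else.

def run_episode(policy, goal_col, width=5, steps=10):
    """Run one episode and return the reward the policy collects."""
    pos, reward = 0, 0
    for _ in range(steps):
        pos = max(0, min(width - 1, pos + policy(pos)))
        if pos == goal_col:
            reward += 1
    return reward

# "Trained" policy: in training the goal was always in the rightmost column,
# so simply moving right was indistinguishable from pursuing the actual goal.
go_right = lambda pos: 1

print(run_episode(go_right, goal_col=4))  # training-like layout: high reward
print(run_episode(go_right, goal_col=0))  # goal moved: behaviour unchanged, zero reward
```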
I have considered the loss of humanity from being in a hive mind versus the loss of humanity from going completely extinct or being emulated on digital processes, and concluded that, as bad as it might be to become much more akin to true eusocial insects like ants, you still have more humanity left by keeping some biology and individual bodies.
But if you believed that setting fire to everything around you was good, and being shown that fire hurts ecosystems led you to change your mind, would that really be "changing your values"?
A lot of values update based on information, so perhaps one could realign such an AI with such information.
I have never had much patience for Hanson, and it seems someone as intelligent as he is should know that values emerge from circumstance. What use, for example, would AI have for romantic love in a world where procreation consists of digital copies? What use are coordinated behaviors for society if lies are impossible and you can just populate your "society" with clones of yourself? What use is there for taste without the evolutionary setup for sugars, etc.?
Behaviors arise from environmental conditions, and it's just wild to see a thought that eliminating a...
I count myself among the simple, and the issue would seem to be that I would just take the easiest solution of not building a doom machine, to minimize the risk of temptation.
Or as the Hobbits did, throw the Ring into a volcano, saving the world the temptation. Currently, though, I have no way of pressing a button to stop it.
I believe that the general consensus is that it is impossible to totally pause AI development due to Molochian concerns: I am like you, and if I could press a button to send us back to 2017 levels of AI technology, I would.
However, in the current situation, the intelligent people, as you noted, have found ways to convince themselves to take on a very high risk to humanity, and the general coordination of humanity is not enough to convince them otherwise.
There have been some positive updates, but it seems that we have not been in a world of general sanity and safet...
I have been wondering whether the new research into organoids will help. It would seem one of the easiest routes to BCI is to use more brain cells.
One example would be the below:
The point is that sanctions should be applied as necessary to discourage AGI; however, approximate grim triggers should apply as needed to prevent dystopia.
As the other commenters have mentioned, my reaction is not unusual, which is why concerns about doom are widespread.
So the answer is: enough.
I don't think it is magic, but it is still sufficiently disgusting that I treat it as an equal threat now. Red button now.
It's not a good idea to treat a disease right before it kills you: prevention is the way to go.
So no, I don't think it is magic. But I do think just as the world agreed against human cloning long before there was a human clone, now is the time to act.
I'll look for the article later, but basically the Air Force has found pilotless aircraft useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.
The rest is a lot of "AGI is magic" without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significance of modern economies is indeed that they require networks which do not take shocks well. Everything else is basically "ASI is magic."
I would bet on the bomb.
This frames things as an inevitability, which is almost certainly wrong; more specifically, opposition to a technology leads to alternatives being developed. E.g., widespread nuclear controls led to alternative energy sources being pursued.
Being controllable is unlikely, but even if it is tractable for human controllers, it still represents power, which means it will be treated as a threat by established actors, and its terroristic implications mean there is moral valence to policing it.
In a world with controls, grim triggers or otherwise, AI would have to develop along ...
No, I wouldn't want it even if it were possible, since by nature it is a replacement of humanity. I'd only accept Elon's vision of AI bolted onto humans, so that it is effectively part of us and thus can be said to be an evolution rather than a replacement.
My main crux is that humanity has to be largely biological due to holobiont theory. There's a lot of flexibility around that but anything that threatens that is a nonstarter.
I think even the wealthy supporters of it are more complex: I was surprised that Palantir's Peter Thiel came out discussing how AI "must not be allowed to surpass the human spirit," even as he is clearly looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.
The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it's not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Eric Hoel also mentioned it, separately). Existential risk has been mentioned and, of course, national risk is already a concern, so even for "mundane" reasons it's a matter of priority/concern, and grim triggers are a natural consequence.
Elon had a personal discussion with China recently as well, and given his well known p...
I had a very long writeup on this, but I had a similar journey from identifying as a transhumanist to deeply despising AI, so I appreciate seeing this. I'll quote part of mine, and perhaps you will identify:
"I worked actively in frontier since at least 2012 including several stints in "disruptive technology" companies where I became very familiar with the technology cult perspective and to a significant extent, identified. One should note that there is a definitely healthy aspect to it, though even the most healthiest aspect is, as one could argue, colonialist - the ... (read more)