All of Lichdar's Comments + Replies

I had a very long writeup on this, as I had a similar journey from identifying as a transhumanist to deeply despising AI, so I appreciate seeing this. I'll quote part of mine, which you may perhaps identify with:

"I worked actively in frontier since at least 2012 including several stints in "disruptive technology" companies where I became very familiar with the technology cult perspective and to a significant extent, identified. One should note that there is a definitely healthy aspect to it, though even the most healthiest aspect is, as one could argue, colonialist - the ... (read more)

I think you are incorrect on dangerous use cases, though I am open to your thoughts. The most obvious dangerous case right now, for example, is AI algorithmic polarization via social media. As a society we are reacting, but it doesn't seem like it is in a particularly effectual way.

Another way to see this current destruction of the commons is via automated spam and the decline in search engine quality, which is already happening and reduces utility to humans. This is only in the "bit" universe, but it certainly affects us in the "atoms" universe, and as AI has "... (read more)

2otto.barten
Thanks for engaging kindly. I'm more positive than you are about us being able to ban use cases, especially if existential risk awareness (and awareness of this particular threat model) is high. Currently, we don't ban many AI use cases (such as social algo's), since they don't threaten our existence as a species. A lot of people are of course criticizing what social media does to our society, but since we decide not to ban it, I conclude that in the end, we think its existence is net positive. But there are pocket exceptions: smartphones have recently been banned in Dutch secondary education during lecture hours, for example. To me, this is an example showing that we can ban use cases if we want to. Since human extinction is way more serious than e.g. less focus for school children, and we can ban for the latter reason, I conclude that we should be able to ban for the former reason, too. But, threat model awareness is needed first (but we'll get there).

It's not a myth, but an oversimplification which makes the original thesis much less useful. The mind, as we care about it, is a product and phenomenon of the entire environment it is in, as well as of the values we can expect it to espouse.

It would indeed be akin to taking an engine, putting it in another environment like the ocean, and expecting the same phenomenon of torque to arise from it.

Lifelong quadriplegics are perfectly capable of love, right?

As a living being in need of emotional comfort, and one who would die quite easily, it would be extremely useful for me to express love to motivate care, and indeed excessively so. A digital construct of the same brain would have immediately different concerns, e.g. less need for love and caring, more need to switch to a different body, etc.

Substrate matters massively. More on this below.

Again, a perfect ideal whole-brain-emulation is a particularly straightforward case. A perfect emulation of my brain wou

... (read more)
2Steven Byrnes
Cocaine and alcohol obviously affect brain functioning, right? That’s how they have the effects that they have. I am baffled that you could possibly see psychoactive drugs like those as evidence against the idea that the mind comes from the brain—from my perspective, it’s strong evidence for that idea. From my perspective, you might as well have said: “There is a myth that torque comes from the car engine, but even casually, we can tell that this isn’t true: would an engine still produce the same torque if I toss it into the ocean? That would result in a torque that is significantly different, despite an otherwise identical engine.” (Note: If you respond, I’ll read what you write, but I’m not planning to carry on this conversation, sorry.)

But you do pass on your consciousness in a significant way to your children through education, communication, and relationships, and there is an entire set of admirable behaviors selected around that.

I generally am less opposed to any biological strategy, though the dissolution of the self into copies would definitely bring up issues. But I do think that anything biological has significant advantages in that ultimate relatedness to being, and moreover in the promotion of life: biology is made up of trillions of individual cells, all arguably agentic, which coordinate marvelously into a holobiont and through which endless deaths and waste all transform into more life through nutrient recycling.

3the gears to ascension
yeah, sounds like we're mostly on the same page, I'm just excessively optimistic about the possibilities of technology and how much more we could pass on than current education, I do agree that it is a sliver of passing on consciousness, but generally my view is we should be at least able to end forgetting completely, instead turning all forgetting into moving-knowledge-and-selfhood-to-cold-archive. personally I'd prefer for nearly ~all live computation to be ~biological, I want to become a fully general shapeshifter before I pass on my information. I'm pretty sure the tech will be there in the next 40 years, based on the beginnings that michael levin's research is giving. (but, also, I'm hoping to live for about 1k to 10k years as a new kind of hyper-efficient deep-space post-carbon "biology" I suspect is possible, so in that respect I am still probably pretty far from your viewpoint! I wanna live in a low gravity superstructure around pluto...)

I am in Vision 3 and 4, and indeed am a member of Pause.ai and have worked to inform technocrats, etc., to help increase regulation of it.

My primary concern here is that biology remain substantial, as the most important cruxes of value to me, such as love, caring, and family, are all part and parcel of the biological body.

Transhumans who are still substantially biological, while they may drift in values substantially, will still likely hold those values as important. Digital constructions, having completely different evolutionary pressures and influences, will not.

I think I am among the majority of the planet here, though as you noted, likely an ignored majority.

1Noosphere89
I'm starting to think a big crux of my non-doominess probably rests on basically rejecting this premise, alongside a related premise that holds that value is complex and fragile. The arguments for these premises are surprisingly weak, and the evidence in neuroscience is coming to the opposite conclusion: values and capabilities are fairly intertwined, and the value generators are about as simple and general as we could have gotten, which makes me much less worried about several alignment problems like deceptive alignment.
6Steven Byrnes
I’m not sure what you mean by this. Lifelong quadriplegics are perfectly capable of love, right? If you replaced the brain of a quadriplegic by a perfect ideal whole-brain-emulation of that same person’s brain, with similar (but now digital) input-output channels, it would still love, right? Yeah, it depends on how you make the digital construction. I am very confident that it is possible to make a digital construction with nothing like human values. But I also think it’s possible (at least in principle) to make a digital construction that does have something like human values. Again, a perfect ideal whole-brain-emulation is a particularly straightforward case. A perfect emulation of my brain would have the same values as me, right?

I don't mind it, but not in a way that wipes out my descendants, which is pretty likely with AGI.

I would much rather die than have a world without life and love, and as noted before, I think a lot of our mores and values as a species come from reproduction. Immortality will decrease the value of replacement and thus, those values.

4Jiro
By this reasoning, why is the current lifespan perfect, except by astonishingly unlikely chance? If it's so good to have death because it makes replacement valuable, maybe reducing lifespan by 10 years would make replacement even more valuable?
Answer by Lichdar

I want to die so my biological children can replace me: there is something essentially beautiful about it all. It speaks to life and nature, both of which I have a great deal of esteem for.

That said, I don't mind life extension research, but anything that threatens to end all biological life, or to essentially kill a human and replace them with a shadowy undead digital copy, is not worth it.

As another has mentioned, a lot of our fundamental values come from the opportunities and limitations of biology: fundamentally losing that eventually leads to a world... (read more)

3the gears to ascension
what if we could augment reproduction to no longer lose minds, so that when you have kids, they retain your memories in some significant form? I agree with you that current reproduction is special, passing on the informational "soul" of the body, but I want to be able to pass on more of my perspective than just body directly. of course, it would need to still not be setting the main self, not like an adult growing up again, but rather a child who grows into having the full memory of all their ancestors. But then, perhaps, what if those digital copies you mentioned were instead biological copies, biological backups - a brain and mind stored cryonically, and neurally linkable with telepathy to allow sharing significant parts of your memory, your informational "soul" data, with others? what if you could be immortal but become slower over time, as your older perspective is no longer really fitting into the reality of the descendants, and you can be available should they wish to come learn from your perspective, but ultimately leaving it up to them whether to wake you this year? If we first assume an improving society that can achieve such things, there are so many gradations between current reproduction and "no more reproduction and everyone's immortal" to consider... I don't think dying is what makes reproduction useful as a strategy, whether we choose to find it beautiful or not - I think the need to reinitialize brains and bodies in order to learn a new way of being for a new context is what makes it valuable. And right now, in exchange for that ongoing reinitialization and clean-slate of children, we are losing the wisdom of the elders every generation. (not to mention how as people get old, their brains break in a more total way and many of them get more cranky and prejudiced, at least on average. that part can probably be fixed with straightforward healing-the-body healthcare, not drastic life extension stuff, people are nicer when their lives suck less.)
7Richard_Kennaway
Why not go on living alongside your descendants? I'm with Woody Allen, in preferring immortality to come from not dying.

I generally feel that biological intelligence augmentation, or a biosingularity, is by far the best option, and one can hope such enhanced individuals will realize the need to forestall AI in all realistic futures.

With biology, there is life and love. Without biology, there is nothing.

It's not merely the rejection of God, it's a story of "progress" that also rejects reverence of nature and eventually even life and reality itself, presumably so we can accept mass extinction for morally superior machines.

I am speaking of their eventual evolution: as it is, no, they cannot love either. A simulation of mud is not mud, and likewise a simulation of love is not the same as love, nor would it have similar utility in reproduction, self-sacrifice, etc. As in many things, context matters, and something not biological fundamentally cannot have the context of biology beyond its training, while even simple cells will alter based on their chemical environment, etc., and are vastly more part of the world.

And yet eukaryotes have extensive social coordination at times; see quorum sensing. I maintain that biology is necessary for love.

Love would be as useful to them as flippers and stone knapping are to us, so it would be selected out. So no, they won't have love. The full knowledge of a thing also requires context: you cannot experience being a cat without being a cat; substrate matters.

Biological reproduction is pretty much the requirement for maternal love to exist in any future, not just as a copy of an idea.

1RogerDearnaley
"Selected" out in what training stage? "Selected" isn't the right word: we don't select AI's behavior, we train it, and we train it for usefulness to us, not to them. In pretraining, LLMs are trained for trillions of token for being able to correctly simulate every aspect of human behavior that affects our text (and for multimodal models, video/image) output. That includes the ability to simulate love, in all its forms: humans write about it a lot, and it explains a lot of our behavior. They have trained on and have high accuracy in reproducing every parenting discussion site on the Internet. Later fine-tuning stages might encourage or discourage this behavior, depending on the training set and technique, but they normally aren't long enough for much catastrophic forgetting, so they generally just make existing capabilities more or less easy to elicit. Seriously, go ask GPT-4 to write a love letter, or love poetry. Here's a random sample of the latter, from a short prompt describing a girl: Or spend an hour with one of the AI boy/girlfriend services online. They flirt and flatter just fine. LLMs understand and can simulate this human behavior pattern, just as much as they do anything else humans do. You're talking as if evolution and natural selection applies to LLMs. It doesn't. AIs are trained, not evolved (currently). As you yourself are pointing out, they're not biological. However, they are trained to simulate us, and we are biological.
4M. Y. Zuo
Amoebas don't 'feel' 'maternal love' yet they have biological reproduction.  Somewhere along the way from amoebas to chimpanzees, the observed construct known as 'maternal love' must have developed.

This is exactly how I feel. No matter how different, biological entities will have similar core needs. In particular, reproduction will entail love, at least maternal love.

We will not see this with machines. I see no desire to be gentle to anything without love.

4RogerDearnaley
But AIs will have love. They can already write (bad) love poetry, and act as moderately convincing AI boyfriends/girlfriends. As the LLMs get larger and better at copying us, they will increasingly be able to accurately copy and portray every feature of human behavior, including love. Even parental love: their training set includes the entirety of MumsNet. Sadly, that doesn't guarantee that they'll act on love. Because they'll also be copying the emotions that drove Stalin or Pol Pot, and combining them with superhuman capabilities and power. Psychopaths are very good at catfishing, if they want to be. And (especially if we also train them with Reinforcement Learning) they may also have some very un-human aspects to their mentality.

I am one of those people; I don't consider myself EA due to its strong association with atheism, but nonetheless am very much for slowing down AGI before it kills us all.

I would say to do everything possible to stop AGI. We might not win, but it is better to have tried. We might even succeed.

But notably, we have not killed all biological life, and we are substantially Neanderthal. Versus death by AI, it's a far better prospect.

2Logan Zoellner
Manifold currently estimates that there is a 4% chance GPT-5 will destroy the world. What percent chance do you estimate there is that a genetically engineered race of super-humans will cause human-extinction?  

And moving doom back by a few years is entirely valid as a strategy, I think it should be recognized as such, and is even pivotal. If someone is trying to punch you and you can delay it by a few seconds, that can determine the winner of the fight.

In this case, we also have other technologies which are concurrently advancing such as genetic therapy or brain computer interfaces.

Having them advance ahead of AI may very well change the trajectory of human survival.

AGI coup completion is an assumption; if safer alternatives arise, such as biosingularity or cyborgism, it is entirely possible that it could be avoided and that humanity remains extant.

Incorrect, as every slowdown in progress allows alternative technologies to catch up, and the advancement of monitoring solutions will also promote safety from what would basically be omnicidal maniacs (the likely result of machine rule being all biological life gone).

1jacob_cannell
I said slowing down progress (especially of the centralized leaders) will likely lead to safer multipolar scenarios, so not sure what you are arguing is 'incorrect'.

This solves nothing that could not be better solved by freezing development of hardware, which would also slow down evolutionary setups.

This also allows for more time for safer approaches such as genetic engineering and biological advancements to catch up, and keep us from Killing Everyone.

1Logan Zoellner
If your argument is that a race of genetically engineered super-humans are less likely to cause human extinction than GPT-5, the Neanderthals would like to have a word with you.

The natural consequence of "postbiological humans" is effective disempowerment if not extinction of humanity as a whole.

Such "transhumanists" clearly do not find the eradication of biology abhorrent, any more than any normal person would find the idea of "substrate independence"(death of all love and life) to be abhorrent.

Value is based on scarcity. That which can be copied and pasted has little value.

In any story, this is the equivalent of discussing why undeath would be better than life.

All of this seems to be a higher-value world to me than either a world of "artificial people," which would end the entire cycle of life itself, or the total extinction of humanity, which is also likely as a result of AI continuity.

As such, it seems that total human consciousness may endure longer, tell and feel more stories, and thus have a higher total existence by having a near-total catastrophe lower the rate of AI development.

2Stan Pinsent
Only if you consider artificial people to be fundamentally less valuable than real people. I'm reserving judgement on that until I meet an artificial person.

Disagree: values come from substrate and environment. I would almost certainly ally myself with biological aliens versus a digital "humanity", as the biological factor will create a world of much more reasonable values to me.

We do have world takeover compared to ants, though our desire to wipe out all ants is just not that high.

1M. Y. Zuo
Not really? Ants have more biomass than humans, and are likely to outlast us.

I think even if AI proves strictly incapable of surviving in the long term due to various efficiency constraints, this has no bearing on its ability to kill us all.

A paperclip maximizer that eventually runs into a halting problem as it tries to paperclip itself may very well have killed everyone by that point.

I think the term for this is "minimal viable exterminator."

But land and food don't actually give you more computational capability: only having another human being cooperate with you in some way can.

The essential point here is that values depend upon the environment and the limitations thereof, so as you change the limitations, the values change. The values important for a deep sea creature with extremely limited energy budget, for example, will be necessarily different from that of human beings.

Humans can't eat another human and get access to the victim's data and computation, but AI can. Human cooperation is a value created by our limitations as humans; AI does not have similar constraints.

3TAG
Humans can kill another human and get access to their land and food. Whatever caused cooperation to evolve, it isn't that there is no benefit to defection.

I disagree on the inference to the recent post, which I quite liked and object heavily to Hanson's conclusions.

The ideal end state is very different: in the post mentioned, biological humans, if cyborgs, are in control. The Hanson endpoint has only digital emulations of humanity.

This is the basic distinguishing point between the philosophies of Cyborgism vs more extreme ones like mind uploading or Hanson's extinction of humanity as we know it for "artificial descendants."

Both open thread links at the base of the article lead to errors for me.

"You can't reason a man out of a position he has never reasoned himself into."

I think I have seen a similar argument on LW for this, and it is sensible. With vast intelligence, it is possible for the search space supporting its priors to be even greater. An AI with a silly but definite value like "the moon is great, I love the moon" may not change its value so much as develop an entire religion around the greatness of the moon.

We see this in goal misgeneralization, where the system maximizes a reward function independent of the meaningful goal, as sketched below.
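A minimal toy sketch of that point (my own illustrative construction, not anything from the original comment): a tabular Q-learning agent whose observed state is only its own position, trained in a gridworld where the rewarded square always happens to sit in one corner, learns "go to that corner" rather than "go to the rewarded square," and competently heads to the corner even after the reward has moved. All names and parameters here are assumptions chosen for the illustration.

```python
# Toy illustration of goal misgeneralization with tabular Q-learning.
# The agent only observes its own position, so the policy can only memorize
# where reward happened to be during training, not track the actual goal.
import random

random.seed(0)
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(pos, action):
    # Move within grid bounds.
    r = min(max(pos[0] + action[0], 0), SIZE - 1)
    c = min(max(pos[1] + action[1], 0), SIZE - 1)
    return (r, c)

def train(goal, episodes=3000, alpha=0.5, gamma=0.9, eps=0.3):
    # Standard epsilon-greedy tabular Q-learning toward a fixed training goal.
    Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in range(4)}
    for _ in range(episodes):
        pos = (0, 0)
        for _ in range(40):
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda x: Q[(pos, x)])
            nxt = step(pos, ACTIONS[a])
            reward = 1.0 if nxt == goal else 0.0
            Q[(pos, a)] += alpha * (reward + gamma * max(Q[(nxt, x)] for x in range(4)) - Q[(pos, a)])
            pos = nxt
            if reward:
                break
    return Q

def rollout(Q, steps=30):
    # Greedy rollout of the learned policy from the start cell.
    pos, path = (0, 0), [(0, 0)]
    for _ in range(steps):
        a = max(range(4), key=lambda x: Q[(pos, x)])
        pos = step(pos, ACTIONS[a])
        path.append(pos)
    return path

# Training: the rewarded square is always the bottom-right corner.
Q = train(goal=(4, 4))
path = rollout(Q)
# At "test time" the rewarded square moves to (0, 4), but the policy still
# competently navigates to the old corner: capabilities generalize, the goal does not.
print((4, 4) in path, (0, 4) in path)  # expected: True False
```

The point of the sketch is only that the learned values encode "the corner is good" rather than the intended goal, which is the same shape of failure the comment is gesturing at.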

I have considered the loss of humanity from being in a hive mind versus the loss of humanity from being extinct completely or being emulated on digital processes, and concluded that, as bad as it might be to become much more akin to true eusocial insects like ants, you still have more humanity left by keeping some biology and individual bodies.

But if you believed that setting fire to everything around you was good, and showing you that hurting ecosystems with fire would be bad led you to change your values, would that really be "changing your values"?

A lot of values update based on information, so perhaps one could realign such AI with such information.

1quetzal_rainbow
It's not changing my values, it's changing my beliefs?

I have never had much patience for Hanson, and it seems someone as intelligent as he is should know that values emerge from circumstance. What use, for example, would AI have for romantic love in a world where procreation consists of digital copies? What use are coordinated behaviors for society if lies are impossible and you can just populate your "society" with clones of yourself? What use is there for taste without the evolutionary setup for sugars, etc.?

Behaviors arise from environmental conditions, and it's just wild to see a thought that eliminating a... (read more)

I count myself among the simple, and the issue would seem to be that I would just take the easiest solution of not building a doom machine, to minimize the risk of temptation.

Or as the Hobbits did, throw the Ring into a volcano, saving the world the temptation. Currently, though, I have no way of pressing a button to stop it.

I believe that the general consensus is that it is impossible to totally pause AI development due to Molochian concerns: I am like you, and if I could press a button to send us back to 2017 levels of AI technology, I would.

However, in the current situation, the intelligent people, as you noted, have found ways to convince themselves to take on a very high risk to humanity, and the general coordination of humanity is not enough to convince them otherwise.

There have been some positive updates, but it seems that we have not been in a world of general sanity and safet... (read more)

So in short, they are generally unconcerned with existential risks? I've spoken with some staff and I get the sense they do not believe it will impact them personally.

3lincolnquirk
Mild disagree: I do think x-risk is a major concern, but seems like people around DC tend to put 0.5-10% probability mass on extinction rather than the 30%+ that I see around LW. This lower probability causes them to put a lot more weight on actions that have good outcomes in the non extinction case. The EY+LW frame has a lot more stated+implied assumptions about uselessness of various types of actions because of such high probability on extinction.

I would prefer total oblivion over AI replacement myself: complete the Fermi Paradox.

I have been wondering if the new research into organoids will help. It would seem one of the easiest routes to BCI is to use more brain cells.

One example would be the below:

https://www.cnet.com/science/ai-could-be-made-obsolete-by-oi-biocomputers-running-on-human-brain-cells/

3Alex K. Chen (parrot)
Discontinuous progress is possible (and in neuro areas it is way more possible than other areas). Making it easier for discontinuous progress to take off is the most important thing [eg, reduced-inflammation neural interfaces]. MRI data can be used to deliver more precisely targeted ultrasound//tDCS/tACS (the effect sizes on intelligence may not be high, but they may still denoise brains (Jhourney wants to make this happen on faster timescales than meditation) and improve cognitive control/well-being, which still has huge downstream effects on most of the population) Intelligence enhancement is not the only path [there are others such as sensing/promoting better emotional regulation + neurofeedback] which have heavy disproportionate impact and are underinvestigated (neurofeedback, in particular, seems to work really well for some people, but b/c there are so many practitioners and it's very hit-and-miss, it takes a lot of capital [more so than time] to see if it really works for any particular person) Reducing the rate at which brains age (over time) is feasible + maximizes lifetime human intelligence/compute + and there is lots of low-hanging fruit in this area (healthier diets alone can give 10 extra years), especially because there is huge variation in how much brains age. https://www.linkedin.com/posts/neuro1_lab-grown-human-brain-organoids-go-animal-free-activity-7085372203331936257-F8YB?utm_source=share&utm_medium=member_android I'm friends with a group in Michigan which is trying to do this. The upside risk is unknown because there are so many unknowns (but so little investment too, at the same time) - they also broaden the pool of people who can contribute, since they don't need to be math geniuses. There aren't really limits on how to grow organoids (a major question is whether or not one can grow them larger than the actual brain, without causing them to have the degeneracies of autistic brains.). More people use them to focus on drug testing than co

I would campaign against lead pipes and support the Goths in destroying Rome, which likely improved human futures over an alternative of widespread lead piping.

The point is that sanctions should be applied as necessary to discourage AGI; however, approximate grim triggers should apply as needed to prevent dystopia.

As the other commentators have mentioned, my reaction is not unusual and thus this is why the concerns of doom have been widespread.

So the answer is: enough.

I don't think it is magic, but it is still sufficiently disgusting to treat it as an equal threat now. Red button now.

It's not a good idea to treat a disease right before it kills you: prevention is the way to go.

So no, I don't think it is magic. But I do think that, just as the world agreed against human cloning long before there was a human clone, now is the time to act.

1[anonymous]
So gathering up your beliefs, you believe ASI/AGI to be a threat, but not so dangerous a threat you need to use nuclear weapons until an enemy nation with it is extremely far along, which will take, according to your beliefs, many years since it's not that good. But you find the very idea of non human intelligence in use by humans or possibly serving itself so disgusting that you want nuclear weapons used the instant anyone steps out of compliance with international rules you wish to impose. (Note this is historically unprecedented, arms control treaties have been voluntary and did not have immediate thermonuclear war as the penalty for violating them) And since your beliefs are emotionally based on "disgust", I assume there is no updating based on actual measurements? That is, if ASI turns out to be safer than you currently think, you still want immediate nukes, and vice versa? What percentage of the population of world superpower decision makers do you feel share your belief? Just a rough guess is fine.

I'll look for the article later, but basically the Air Force has found pilotless aircraft to be useful for around thirty years, yet organized rejection has led to most such programs meeting an early death.

The rest is a lot of "AGI is magic" without considering the actual costs of computation or noncomputable situations. Nukes would just scale up: it costs much less to destroy than to build, and the significance of modern economies is indeed that they require networks which do not take shocks well. Everything else basically is "ASI is magic."

I would bet on the bomb.

5[anonymous]
Two points: We would need some more context on what you are referring to. For loitering over an undefended target and dropping bombs, yes, drones are superior and the US Air Force has allowed the US Army to operate those drones instead. I do not think the US Air Force has had the belief that operating high-end aircraft such as stealth and supersonic fighter bombers was within the capability of drone software over the last 30 years, with things shifting recently. Remember, in 2012 the first modern deep learning experiments were tried; prior to this, AI was mostly a curiosity. If "the bomb" can wipe out a country with automated factories and missile defense systems, why fear AGI/ASI? I see a bit of cognitive dissonance in your latest point similar to Gary Marcus. Gary Marcus has consistently argued that current LLMs are just a trick, real AGI is very far away, and that near-term systems are no threat, yet also argues for AI pauses. This feels like an incoherent view that you are also expressing. Either AGI/ASI is, as you put it, in fact magic and you need to pound the red button early and often, or you can delay committing national suicide until later. I look forward to a clarification of your beliefs.

This frames things as an inevitability, which is almost certainly wrong; more specifically, opposition to a technology leads to alternatives being developed. E.g., widespread nuclear controls led to alternatives being pursued for energy.

Being controllable is unlikely even if it is tractable for human controllers: it still represents power, which means it'll be treated as a threat by established actors, and its terroristic implications mean there is moral valence to policing it.

In a world with controls, grim triggers or otherwise, AI would have to develop along ... (read more)

2[anonymous]
I think the error here is you may be comparing technologies on different benefit scales than I am. Nuclear power can be cheaper than paying for fossil fuel to burn in a generator, if the nuclear reactor is cheaply built and has a small operating staff. Your benefit is a small decrease in price per kWh. As we both know, cheaply built and lightly staffed nuclear plants are a hazard and governments have made them illegal. Safe plants, which are expensively built with lots of staff and time spent on reviewing the plans for approval and redoing faulty work during construction, are more expensive than fossil fuel and now renewables, and are generally not worth building. Until extremely recently, AI-controlled aircraft did not exist. The general public has for decades had a misinterpretation of what "autopilot" systems are capable of. Until a few months ago, none of those systems could actually pilot their aircraft; they solely act as simple controllers to head towards waypoints, etc. (Some can control the main flight controls during a landing, but many of the steps must be performed by the pilot.) The benefit of an AI-controlled aircraft is you don't have to pay a pilot. Drones were not superior until extremely recently. You may be misinformed about the capabilities of systems like the Predator 1 and 2 drones, which were not capable of air combat maneuvering and had no software algorithms available in that era capable of it. Also, combat aircraft have been firing autonomous missiles at each other since the Korean War. Note both benefits are linear. You get say n percent cheaper electricity where n is less than 50 percent, or n percent cheaper to operate aircraft, where n is less than 20 percent. The benefits of AGI are exponential. Eventually the benefits scale to millions, then billions, then trillions of times the physical resources, etc., that you started with. It's extremely divergent. Once a faction gets even a doubling or two it's over, nukes won't stop them. Assumpti

No, I wouldn't want it even if it were possible, since by nature it is a replacement of humanity. I'd only accept Elon's vision of AI bolted onto humans, so it effectively is part of us and thus can be said to be an evolution rather than a replacement.

My main crux is that humanity has to be largely biological due to holobiont theory. There's a lot of flexibility around that but anything that threatens that is a nonstarter.

2[anonymous]
Ok, that's reasonable. Do you foresee, in worlds where ASI turns out to be easily controllable, ones where governments set up "grim triggers" like you advocate for or do you think, in worlds conditional on ASI being easily controllable/taskable, that such policies would not be enacted by the superpowers with nuclear weapons? Obviously, without grim triggers, you end up with the scenario you despise: immortal humans and their ASI tools controlling essentially all power and wealth. This is I think kind of a flaw in your viewpoint. Over the arrow of time, AI/AGI/ASI adopters and contributors are going to have almost all of the effective votes. Your stated preferences mean over time your faction will lose power and relevance. For an example of this see autonomous weapons bans. Or a general example is the emh. Please note I am trying to be neutral here. Your preferences are perfectly respectable and understandable, it's just that some preferences may have more real world utility than others.
0[comment deleted]

Lead is irrelevant to human extinction, obviously. The first to die is still dead.

In a democratic world, those affected have a say in how AI should be inflicted upon them and how much they want to die or suffer.

The government represents the people.

-2[anonymous]
You are using the poisoned banana theory and do not believe we can easily build controllable ASI systems by restricting their inputs to in-test-distribution examples and resetting state often, correct? I just wanted to establish your cruxes. Because if you could easily build safe ASI, would this change your opinion on the correct policy?

I think even the wealthy supporters of it are more complex: I was surprised that Palantir's Peter Thiel came out discussing how AI "must not be allowed to surpass the human spirit" even as he clearly is looking to use AI in military operations. This all suggests significant controls incoming, even from those looking to benefit from it.

2ChristianKl
Googling for "must not be allowed to surpass the human spirit"  and Palantir finds no hits. 
2[anonymous]
I agree with controls. I have an issue with wasted time on bureaucratic review and think it could burn the lead the Western countries have. Basically, "do x, y, z" to prove your model is good, design it according to "this known good framework," is OK with me. "We have closed reviews for this year" is not. "We have issued too many AI research licenses this year" is not. "We have denied your application because we made mistakes in our review and will not update on evidence" is not. All of these occur from a power imbalance. The entity requesting authorization is liable for any errors, but the government makes itself immune from accountability. (For example, the government should be on the hook for lost revenue, based on the future product's actual revenue, for each day the review is delayed. The government should be required to buy companies at fair market value if it denies them an AI research license. Etc.)

The UK has already mentioned that perhaps there should be a ban on models above a certain level. Though it's not official, I have it on pretty good record that Chinese party members have already discussed worldwide war as potentially necessary (Erik Hoel also mentioned it, separately). Existential risk has been mentioned and, of course, national risk is already a concern, so even for "mundane" reasons it's a matter of priority/concern, and grim triggers are a natural consequence.

Elon had a personal discussion with China recently as well, and given his well known p... (read more)

2[anonymous]
Ok. Thank you for the updates. Seems like the near-term outcome depends on a race condition where, as you said, government is acting and so is private industry, and government has incentives to preserve the status quo but also to get immensely more rich and powerful. The economy, of course, says otherwise. Investors are gambling that Nvidia is going to expand AI accelerator production by probably 2 orders of magnitude or more (to match the P/E ratio they have run the stocks to), which is consistent with a world building many AGI, some ASI, and deploying many production systems. So you posit that governments worldwide are going to act in a coordinated manner to suppress the technology despite wealthy supporters of it. I won't claim to know the actual outcome, but may we live in interesting times.