In part because I don't think freaking out would be a useful reaction to have.
This matters, even if you aren't as optimistic as I am on AI safety/alignment. Frankly, this sounds like the No Free Lunch theorem in action: that is, our emotions should be ignored, since they're misleading us and gaslighting us about how bad the situation really is.
(To be honest, I really wish genetic engineering was good enough to be used instantly, but no luck there.)
>(To be honest, I really wish genetic engineering was good enough to be used instantly, but no luck there.)
Well, it depends on the kind of genetic engineering you have in mind. Many things are possible with current technology.
To be blunt, the problem of genetic engineering on humans can be boiled down to "The strongest modifications are locked behind gametes which requires you to have children, whereas the genetic modifications that can be applied instantly, or take effect in minutes or hours, are extremely weak, and the breakthroughs in gamete editing don't transfer at all to the somatic gene editing case."
And for such a radical redesign of the human brain like redesigning the emotional system, we are not nearly at the point where we can reliably edit potentially thousands of genes, especially without very dangerous side effects.
I remember reading about a technology where a retrovirus injected into the bloodstream can modify the receptiveness of brain cells to a certain artificial chemical. The virus can't pass the blood-brain barrier unless ultrasonic vibrations are applied to a selected brain area, making it temporarily passable there. This turns a normally ~inert chemical into a targeted drug. Probably nowhere close to being applied to humans; the approval process will be a nightmare.
Actually, more recently neuroscientists have discovered and utilized viral vectors for genetic payloads that can cross the blood-brain barrier without ultrasonic BBB disruption. By recently, I mean... it was recent as of six years ago, when I was last working on modifying the brains of mammals with engineered viruses.
As I understood it, the ultrasonic disruption was a feature, since it allows targeting a specific brain area.
Putting aside the obvious trust issues, how close is current technology to correcting things like undesired emotional states?
Hmm, I would say that the potential is there for that, but that there is a lot of aversion in the medical field toward researching things which aren't about fixing specific problems. Like, a weird rare mutation that causes the left side of the body to have spasms? Cool! Letting people regulate their mood or appetite, even though it would potentially save millions of lives? Taboo! So... yeah. The tech is there; the scientific community isn't working effectively on it.
One of the projects I worked on used optogenetics (genetic modification plus a fiberoptic implant) to induce anxiety in mice via an unusual neural circuit. A lot about emotional regulation is known and controllable.
I disagree with this. I would be happy to see more people freaking out about AI, especially in the mainstream media. I would actually argue that getting more people to freak out as fast as possible is our best shot at lowering the odds of an extinction event.
Imagine you are the mainstream media and you see that, on a website interested in AI, people are sharing "calmness videos". Would your takeaway be that everything is perfectly fine? :D
Heavily disagree, even under the premise that AI is probably going to doom humanity this century.
The problem with freaking out and your intuitive emotional reaction is that it doesn't equip you with appropriate reactions or decisions to make sure humanity survives.
Also, we are much more uncertain about whether AI doom is real, which is another reason to stay calm.
In general, I think this is an area where people should treat their emotional reactions as no evidence at all about the problem's difficulty, how optimistic we should be on alignment, and more.
>Also, we are much more uncertain about whether AI doom is real, which is another reason to stay calm.
Have to disagree with you on this point. I'm in the camp of "If there's a 1% chance that AI doom is real, we should be treating it like a 99% chance."
I'm inclined to agree that it might be a good thing if the mainstream did freak out more, to shake them out of their current complacency. But there's probably a sweet spot for how much to freak out, and I get the sense that a lot of LWers are way past that.
Also, I think that the ideal is for people to freak out once when they realize that AI is actually a big deal, in a way that gets them to shift more of their attention on the topic. But it would be good if they would then calm down so that they could think about the issue clearly. I think that an extended state of chronic freakout isn't going to be conducive to solving the issue.
Strong downvote, mild agree: I don't think freaking out is a very effective response. I appreciate that you are freaked out, and I hope you can feel better biologically soon, for there is no need to panic in that way! All is well of that kind. But as a person who thinks we're only going to win because we are, this moment, right now, going to sprint for it, I think we're going to do so calmly. After all, the best way to panic productively is to calm down and think; and the best way to calm down and think is to take a walk in the field and play with toys of power while the sky falls and ponder how they can protect us. They just need to be the toys that the strength of your intuition tells you may have short paths to enjoyable insight about the problem~
Or perhaps there's some other attitude. I admit to being rather "mad self-taught half-baked Buddhist" when working well under pressure - if you're used to responding calmly to immediate physical-environment emergencies, it can help.
This is a bit of an unusual post.
I have gotten the impression that a lot of people are kind of freaked out, either by AI or weird Bay Area social dynamics in general.
I also think that a lot of freak-out reactions are driven at least as much by social contagion as by any fact-based assessment of what's happening. When you see people around you freak out, you too are much more likely to freak out.
Conversely, if the people around you are calm, then you're also much more likely to stay calm.
There's also a selection effect where freakouts tend to spread much more online than calmness does. If you're calm, you don't necessarily feel the need to post anything. You might be content to just be.
Whereas if you're freaking out, you're much more likely to post stuff about how you're freaking out or how we're all going to die.
So there's easily a cycle where the most distressed views predominate, which freaks people out and causes there to be more distressed posts, which freaks out more people, and so on. And this might be mostly uncorrelated with how much reason there actually was to freak out.
But if we were all in the same physical space, we might all notice that only some people are freaking out and many are a lot calmer. And then the distress wouldn't spread as much, and we could think more clearly.
I too am concerned about AI, but I'm not freaked out. (In part because I don't think freaking out would be a useful reaction to have, in part because I'm somewhat more optimistic than most, in part because I spend a lot of time with people who aren't freaking out.) If I were physically located in the same place as others who were freaking out, I think that my calm could help with their freakout.
However, I'm not. And as stated, it's kinda hard to convey calmness over text, the same way you can convey distress.
So I thought of making a video where I'm calm. Maybe that would help convey it better.
It's here. In Finnish, but with English subtitles.
I know it's low video quality; I recorded it in Zoom, and only noticed afterward that there's an "HD quality" button I could have clicked in the settings. Oops.
But that was part of the intended vibe too. I could have spent a lot of time optimizing the video quality and everything. Instead, I just recorded it in one shot, because it's not such a big deal whether the video quality is great or not.
I'll probably make another calmness video with better quality.
No earlier than tomorrow.
Because I don't feel like I'm in a rush.