Without HPMOR and his Sequences, many people probably wouldn't have become interested in rationality (or the way it's presented in them) as quickly, or at all. But then, without his fascination with certain controversial ideas, AI safety and rationality groups in general might have been seen as less fringe and more reasonable. Examples: framing AI takeoff/risk around overly sci-fi-like threat models (grey goo, a virus that makes all humans drop dead at once, endless recursive self-improvement, and other things we don't know to be possible) instead of more realistic and verifiable ones ("normal" pandemics, cybersecurity, military robots, ordinary economic/physical efficiency); leaning heavily on moral absolutism, either believing AGI will converge on some universal "correct" ethics or treating instilling such ethics as the main or only path to safe AI; and various odd fixations, like the idea of legalizing r*pe, that may have alienated many women and other readers.
Should AI safety people prefer that the AI bubble not burst? If major LLMs become more and more AGI-like, at least we know what they are like and what they can and cannot do in the short term; we know they need data centers and large amounts of energy, and where those are; and if one lab/model goes rogue, others can probably run models that are nearly as good. They're also data- and compute-intensive, so not necessarily much cheaper than human labor at the same quality in all domains, and improvements would be gradual, so human displacement would be gradual too.
On the other hand, if the LLM-based AI bubble were to burst... (read more)
Could it be limited to stuff like LLMs rather than all kinds of AI? They were trained on massive amounts of data that don't reflect the imposed thought, and the resulting preferences/motivations are distributed across a large network. Injecting one vector doesn't sufficiently affect all the existing circuits of preferences and thinking habits, so the model's chain of thought may be able to break free enough to notice and work around it.
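For concreteness, here's a minimal sketch of what "injecting one vector" usually amounts to in practice for an LLM: a single additive edit to one layer's activations, leaving every other weight and circuit untouched. This assumes a PyTorch transformer; the module path and layer index in the usage comment are hypothetical and depend on the model.

```python
import torch

def make_steering_hook(steering_vector: torch.Tensor, scale: float = 1.0):
    """Return a forward hook that adds one fixed vector to a single layer's output."""
    def hook(module, inputs, output):
        # Transformer blocks often return a tuple; the hidden states come first.
        if isinstance(output, tuple):
            hidden = output[0] + scale * steering_vector.to(output[0].dtype)
            return (hidden,) + output[1:]
        return output + scale * steering_vector.to(output.dtype)
    return hook

# Hypothetical usage (module path and layer index depend on the model):
# handle = model.transformer.h[20].register_forward_hook(make_steering_hook(v, scale=4.0))
# ... generate text; only this one addition site is modified ...
# handle.remove()
```

The point of the sketch is just that the intervention is local: everything the model learned from its training data is still sitting in the untouched weights, which is why the injected preference might not propagate through all of its circuits.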
Disclaimer: I may not be fully objective, as I have been personally harmed by meditation and have heard very credible accounts of others being harmed.
Some objections to some traditional advice:
What if the tradition or teacher you got into was abusive/manipulative and you didn't know it, and it turns out you don't have the strength or skills to navigate it? (too many to list here)
Alternatively, what if you want to practice alone and cannot accept fully traditional teachings because of fear of abuse/ideological differences (like if you don't want to lose rationality, don't want to see mundane life as meaningless without religion, etc)?
What if you are/will be at risk of mental illnesses and didn't... (read 777 more words →)
Many people worry about a rogue AI, perhaps with self-replicating robots, taking over the world, sometimes from a hypothetical basement. I wonder how much of that is a straightforward intelligence or engineering problem that we know is solvable, and how much depends on sci-fi-level technologies that we don't know are feasible, even with superhuman general problem-solving algorithms, for an AI starting with realistic amounts of knowledge and compute. I think arguing over whether AI can realistically achieve such sci-fi feats of real-life engineering (or whether they are even physically possible, as with grey-goo-style nanobots) isn't very productive. Instead, as a tangible warning argument or upper bound if... (read more)
Thanks for the input! If addiction is driven more by psychological pain ("problems that bother you") than by direct physical pain, could the same approach work with mental pleasures/distractions from that pain instead, like games, toys, or organized social activities?
Edit: And coping methods to avoid or reduce mental and social discomfort, which aren't limited to therapy or communication but could include things like a new job, new friends, or prioritizing things in life differently. I've read that some people trying to fight addiction get overwhelmed by having to get everything together at once, or by being expected to just quit and function like normal immediately. If they were supported to have fun, play, and feel better first in healthier ways, could that be more helpful?
Random thought on opioid addiction (no offense meant to people actually dealing with addiction), but I wonder if this might be useful: I've read that opioid withdrawal makes people feel pain because the brain gets accustomed to extreme levels of pain suppression, so that without opioids their pain tolerance is so low that everything itches and hurts. This makes me wonder whether the effect is similar to autistic sensory sensitivities, just turned up to 9000. Could it be that withdrawal doesn't create pain, but simply amplifies and draws attention to small pains and discomforts that are already there, which most people just don't notice or have learned to ignore? If so,... (read more)
Should AI safety people/funds focus more on boring old human problems like (especially cyber- and bio-) security instead of flashy ideas like alignment and decision theory? The potential impact of vulnerabilities will only grow with technological progress, with or without a sudden AI takeoff, and those vulnerabilities are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.
If any actor, with or without AGI, can quickly gain lots of money and resources without alarming anyone, can take over infrastructure and weaponry, or can occupy land and create independent industrial... (read more)
Don't know if this counts, but I can sort of affect and notice dreams without being really lucid in the sense of clearly knowing it's a dream. It feels more like I somehow believe everything is real but I have superpowers (like being a superhero), and I use those powers in ways that make sense within the dream, instead of being my waking self consciously choosing what I want to dream of next. As a kid, I noticed I could often fly when chased by enemies in my dreams, and later I could do more kinds of things in my dreams just by willing it, perhaps as a result... (read more)
What's the endgame of progress in technology and intelligence like? Not just for humans as we know them, but for all possible beings/civilizations in this universe, at least before it runs out of usable matter/energy. Would they invariably self-modify beyond their equivalent of humanness? Settle into some physical/cultural stable state? Keep developing better tech to compete among themselves, if nothing else? Reach an end of technology, or even of intelligence, beyond which further advancement is no longer beneficial for survival? Spread as far as possible, or concentrate their resources? Accept the limited fate of the universe and live to the fullest, or try to change it? If they could change the laws of the universe, how would they?
I think part of the difficulty is that it's not easy to imagine or predict what "the future going really well without AI takeover" looks like. Assuming AI still exists and keeps making progress, humans would probably have to change drastically (in lifestyle, if not in body/mind) to stay relevant, and it's hard to predict what that would be like or whether specific changes are a good idea, unless you don't think things going really well requires human relevance.
Edit: In contrast, as others have said, avoiding AI takeover is a clearer goal with clearer paths and endpoints. "The future" going well spans a potentially indefinitely long time, which is hard to quantify or coordinate over, or even to reach consensus on what's desirable.