All of skybluecat's Comments + Replies

Should AI safety people/funds focus more on boring old human problems like (especially cyber- and bio-) security instead of flashy ideas like alignment and decision theory? The possible impact of vulnerabilities will only increase with all kinds of technological progress, with or without a sudden AI takeoff, and those vulnerabilities are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.

If any actor with or without AGI can quickly gain lots of ... (read more)

ZY
I personally agree with you on the importance of these problems. But I might also be more of a general responsible/trustworthy-AI person, and I care about other issues outside of AI too, so I'm not sure about a more specific community, or what the definition of "AI safety" people is. On funding, I'm not very familiar and want to ask for some clarification: by "(especially cyber- and bio-)security", do you mean security generally, or security risks caused by AI specifically?

Don't know if this counts, but I can sort of affect and notice dreams without being really lucid in the sense of clearly knowing it's a dream. It feels more like I believe everything is real but I'm having superpowers (like becoming a superhero), and I use the powers in ways that make sense in the dream setting, instead of being my waking self and consciously choosing what I want to dream of next. As a kid, I noticed I could often fly when chased by enemies in my dreams, and later I could do more kinds of things in my dreams just by willing it... (read more)

Answer by skybluecat

Wow, I just saw this on the frontpage and thought I sometimes feel like this too, although about slightly different things and without that much heart-racing. I'm late and there are already many good answers, but here are my extreme and possibly horrible lifehacks for when I'm struggling or feeling lazy during the pandemic:

tldr: Like others said, get away with fewer chores.

I haven't ironed or folded clothes since, like, forever. (If you really care about that, maybe find clothes that look OK without ironing, idk.) I don't go out or exert myself that much, ... (read more)

What's the endgame of technological or intelligent progress? Not just for humans as we know them, but for all possible beings/civilizations in this universe, at least before it runs out of usable matter/energy. Would they invariably self-modify beyond their equivalent of humanness? Settle into some physical/cultural stable state? Keep getting better tech to compete among themselves, if nothing else? Reach an end of technology, or even of intelligence, beyond which advancement is no longer beneficial for survival? Spread as far as possible, or concentrate resources? Accept the limited fate of the universe and live to the fullest, or try to change it? And if they could change the laws of the universe, how would they?

There are other reasons to be wary of consciousness- and identity-altering stuff.

I think that under a physical/computational theory of consciousness (i.e., there is no soul or qualia that has provable physical effects from the perspective of another observer), the problem might be better thought of as a question of value/policy rather than a question of fact. If teleportation or anything else really affects qualia or any other kind of subjective awareness that is not purely dependent on observable physical facts, whatever you call it, you wouldn't be able to... (read more)

Not OP, but can I give it a try? Suppose a near-future not-quite-AGI, for example something based on LLMs but with some extra planning and robotics capabilities like the things OpenAI might be working on, gains some degree of autonomy and plans to increase its capabilities/influence. Maybe it was given a vague instruction to benefit humanity/gain profit for the organization and instrumentally wants to expand itself, or maybe there are many instances of such AIs run by multiple groups because it's inefficient/unsafe otherwise, and at least one of them so... (read more)

Daniel Kokotajlo
Thanks! This is exactly the sort of response I was hoping for. OK, I'm going to read it slowly and comment with my reactions as they happen:

While it isn't my mainline projection, I do think it's plausible that we'll get near-future not-quite-AGI capable of quite a lot of stuff but not able to massively accelerate AI R&D. (My mainline projection is that AI R&D acceleration will happen around the same time the first systems have a serious shot at accumulating power autonomously.)

As for what autonomy it gains and how much: perhaps it was leaked or open-sourced, and while many labs are using it in restricted ways and/or keeping it bottled up and/or just using even more advanced SOTA systems, this leaked system has been downloaded by enough people that quite a few groups/factions/nations/corporations around the world are using it, and some are giving it a very long leash indeed.

(I don't think robotics is particularly relevant, fwiw; you could delete it from the story and make the story significantly more plausible while staying just as strategically relevant. Robots, being physical, will take longer to produce in large numbers: even if Tesla is unusually fast and Boston Dynamics explodes, we'll probably see less than a 100k/yr production rate in 2026. Drones are produced by the millions, but these proto-AGIs won't be able to fit on drones. Maybe they could be performing other kinds of valuable labor to fit your story, such as virtual PA stuff, call center work, cyber stuff for militaries and corporations, maybe virtual romantic companions... I guess they have to compete with the big labs, though, and that's gonna be hard? Maybe the story is that their niche is that they are 'uncensored' and willing to do ethically or legally dubious stuff?)

Again, I think robots are going to be hard to scale up quickly enough to make a significant difference to the world by 2027. But your story still works with nonrobotic stuff such as mentioned above. "Autonomous life of crime" is a threat mod

Hi! I have lurked for quite a while and wonder if I can/should participate more. I'm interested in science in general, speculative fiction, and simulation/sandbox games, among other things. I like reading speculations about the impact of AI and other technologies, but find many of the alignment-related discussions too focused on what the author wants/values rather than on what future technologies could really cause. Also, any game recommendations with a hard-science/AI/transhumanist theme that are truly simulation-like and not narratively railroading?

nim
Welcome! If you have the emotional capacity to happily tolerate being disagreed with or ignored, you should absolutely participate in discussions. In the best case, you teach others something they didn't know before, or get a misconception of your own corrected. In the worst case, your remarks are downvoted or ignored.

Your question on games would do well fleshed out into at least a quick take, if not a whole post, answering:

* What games you've ruled out for this and why
* What games in other genres you've found to capture the "truly simulation-like" aspect that you're seeking
* Examples of game experiences that you experience as narrative railroading
* Examples of ways that games that get mostly there do a "hard science/AI/transhumanist theme" in the way that you're looking for
* Perhaps what you get from it being a game that you'd miss if it were a book, movie, or show?

If you've tried a lot of things and disliked most of them, then good clear descriptions of what you dislike about them can actually function as helpful positive recommendations for people with different preferences.