Should AI safety people/funds focus more on boring old human problems, like (especially cyber- and bio-) security, instead of flashy ideas like alignment and decision theory? The potential impact of vulnerabilities will only grow with technological progress of all kinds, with or without a sudden AI takeoff, and those vulnerabilities are much of what makes AGI dangerous in the first place. Security has clear benefits regardless, and people already have a good idea of how to do it, unlike with AGI or alignment.

If any actor, with or without AGI, can quickly gain large amounts of money and resources without alarming anyone, can take over infrastructure and weaponry, or can occupy land and build independent industrial systems that other countries cannot stop, then our destiny is already out of our hands, and it would be suicidal to think we don't need to fix these problems first because we expect an aligned AGI to save us.

If we grow complacent about the fragility of our biology and ecosystem, continue to allow any actor to release pandemics, arbitrary malware, deadly radiation, etc. (for example, by allowing global transport without reliable pathogen removal, or by using operating systems and open-source libraries that have never been formally verified as safe), and keep assuming the universe will keep our environment safe and convenient by default, then it would be naive to complain when these things happen and to hope that AGI would somehow preserve human lives and values without our having to change our lifestyle or biology to adapt to new risks.

Yes, fixing the vulnerabilities of our biology and society is hard, inconvenient, and not as glamorous as creating a friendly god to do whatever you want. But we shouldn't let motivated reasoning and groupthink convince us the latter is feasible when we have no good idea how to do it, just because the former requires sacrifices and investments we'd prefer to avoid. After all, it's a fact that there exist small configurations of matter and information that can completely devastate our world, and wishing that weren't true won't make it go away.

Don't know if this counts, but I can sort of affect and notice dreams without being truly lucid in the sense of clearly knowing it's a dream. It feels more like I believe everything is real but have superpowers (like being a superhero), and I use the powers in ways that make sense in the dream setting, instead of being my waking self and consciously choosing what to dream of next. As a kid, I noticed I could often fly when chased by enemies in my dreams, and later I could do more kinds of things just by willing them, perhaps as a result of consuming too many sci-fi and fantasy books and games. I've also noticed recurrent patterns in my dreams, like places that don't exist in real life but that dreaming-me believes to be my school or hometown. Sometimes I get a strange sense of "I've dreamed this before," feeling that I've had the same or similar dreams as the one I'm having now, but without really realizing that I'm dreaming or remembering who I am in waking life. Then I subconsciously know I can do these things, or can focus on seeing and memorizing more of the dream world (if it was interesting) so I can write it down after waking up.

Answer by skybluecat

Wow, I just saw this on the front page and thought I sometimes feel like this too, although about slightly different things and without that much heart-racing. I'm late and there are already many good answers, but here are my extreme and possibly horrible life hacks for when I'm struggling or feeling lazy during the pandemic:

tl;dr: As others have said, get away with fewer chores.

I haven't ironed or folded clothes since, like, forever. (If you really care about that, maybe find clothes that look OK without ironing, idk.) I don't go out or exert myself much, don't change clothes that often if I don't want to, and bought plenty of similar clothes (online, in bulk or used) so I can let laundry pile up and do more at a time (assuming I can use a washing machine; if you don't have one or can't easily access one, say the laundromat is too far from home, there are other tricks for hand washing). Maybe-unethical tip: most people don't need to shower that often either (just be less self-conscious, unless someone important in your life minds it), and can use dry shampoo (or body powder/corn starch), alcohol, wet wipes, etc. to delay the need for showering and shampooing, in case these tasks are unappealing to you. Also, silk clothes are known to absorb oil from the skin better and can last longer between washes; you can get lots of used silk shirts for cheap, and they're comfortable too if you have sensory issues or hate static.

I rarely need to "do dishes," as I cook for myself and can take shortcuts and lower my standards. You could use disposable plastic cutlery and paper plates, but I don't like them. Instead I use quality nonstick pots and pans (imo important!) that just need a gentle wipe, plus 1-2 microwave-safe containers if necessary, and either cook one-pot meals or batch-cook things like stews (or order family-sized delivery for savings, if it's something I like and can't easily cook myself), then portion and freeze them to be reheated on the plate in a microwave or added to the pot. You can cook starch (like rice/pasta) and veggies with seasoned raw protein or frozen stuff on top, all in one pot (or Instant Pot, or a large ceramic bowl in the microwave), so there's no need to wash separate containers. If rinsing just one pot and one bowl/plate per meal is too much, or if you can't rinse immediately after the meal, you can just wipe the pot with a paper towel (with a bit of water if you must), and store used plates in a separate plastic bin (not in the sink itself) to soak/wash in a batch, without making the sink unusable for other tasks.

As others have said, most other chores can be simplified or automated (a robot vacuum, or rearranging storage so things get dropped in appropriate places more naturally), or at least you can get away with doing less. Dust build-up can be reduced with an air purifier (especially one next to the window, and it's good for you too), and most people don't really need to wipe most surfaces that often. The only time I really need to make my bed is after changing sheets. You didn't mention storage/organizing, but I struggled a lot with it. YMMV, because storage needs and habits vary wildly: some people are minimalists and would never want all the stuff I keep; some prefer drawers so they don't have to see the stuff. I prefer (large, metal) shelves/racks and other open storage, with open boxes/bins organized by category (OK, mostly) on top, so I can maximize the amount of stuff stored per unit of visual clutter while keeping things easy to access and put back (important imo; if it's not easy, I might as well not have the item or the storage).

I can't say much about food shopping, as tastes and environments differ and I personally don't mind shopping that much, but having a list of favorites/repeat purchases (especially for online shopping), ordering larger amounts, and batch cooking (and/or freezing) may help. If cooking larger batches seems difficult, maybe choose ingredients that need less preprocessing (like veggies that need little or no peeling); an Instant Pot and/or a hotplate/electric griddle with digital temperature controls can also help take out the skill/guess factor. Canned and other non-perishable goods are helpful too. Just be careful about nutrients like protein and fiber if you change your diet for convenience. Also, you can freeze almost anything; frozen fruits and veggies are great, and better than sad refrigerated leftovers imo.

As for having to work, I'm sorry, but I don't have better ideas beyond choosing a job/environment that suits you: ideally one that is fun and meaningful to you, so it doesn't feel like it's taking time away from more meaningful things, or at least one that is not unpleasant and leaves you plenty of time and energy for other things you like. And be sure to have fun and avoid burnout, no matter how meaningful your career/cause is.

What's the endgame of technological or intelligent progress? Not just for humans as we know them, but for all possible beings/civilizations in this universe, at least before it runs out of usable matter/energy. Would they invariably self-modify beyond their equivalent of humanness? Settle into some physically/culturally stable state? Keep developing better tech to compete among themselves, if nothing else? Reach an end of technology, or even of intelligence, beyond which advancement is no longer beneficial for survival? Spread as far as possible, or concentrate resources? Accept the limited fate of the universe and live to the fullest, or try to change it? If they could change the laws of the universe, how would they?

There are other reasons to be wary of consciousness and identity-altering stuff. 

I think that under a physical/computational theory of consciousness (i.e., there is no soul or quale with provable physical effects from the perspective of another observer), the problem might be better thought of as a question of value/policy rather than a question of fact. If teleportation or anything else really affected qualia, or any other kind of subjective awareness not purely dependent on observable physical facts, you wouldn't be able to tell, or even think of or be aware of the difference, since thinking and reflective awareness are themselves computational and physical processes!

However, we humans evolved without reliable copying mechanisms, so our instincts care about preservation of the self because it's the obvious way to protect our evolutionary success (and we can be quite willing to risk personal oblivion for evolutionary gains in ways we have been optimized for). This is just part of our survival policy, and it is not easy, or even safe, to change just because you believe in physicalism. For one thing, as others have said, ethics and social theory become difficult, because our sense of ethics (agency, punishment, caring about suffering) evolved in relation to a sense of self. It's possible that if teleportation/copying tech becomes widely useful, humans will have to adapt to a different set of instincts about self, ethics, and more (edit: or maybe abandon the concepts of self and experience altogether as an illusion and prefer a computation-based definition of agency or whatever), because those who can't adapt will be selected against. But in the present world, people's sense of value and ethics (and maybe even psychological health) depends on an existing sense of self, and I don't see a good way, or even a practical reason, to transition to a different theory of self that allows copying, if doing so may cause unpredictable mental and social costs.
See also the discussions about meditation that lowers the sense of ego and subjective suffering but can have serious side effects (on motivation and adherence to social norms, for instance). I don't know what it subjectively feels like, but if meditation purely changed subjective qualia without doing anything to the physical brain and its computation, there should be no observable effects, good or bad! The problem is that subjective experience and sense of identity are not independent of the other aspects of our lives.

Not OP, but can I give it a try? Suppose a near-future not-quite-AGI, for example something based on LLMs but with extra planning and robotics capabilities like the things OpenAI might be working on, gains some degree of autonomy and plans to increase its capabilities/influence. Maybe it was given a vague instruction to benefit humanity or earn profit for its organization and instrumentally wants to expand itself; or maybe many instances of such AIs are run by multiple groups because it's inefficient/unsafe otherwise, and at least one of them somehow decides to exist and expand for its own sake. It's still expensive enough to run (the added features may significantly increase inference costs and latency compared to current LLMs) that it can't just replace all human skilled labor, or even all day-to-day problem solving, but it can think reasonably well, like a non-expert human, and control many types of robots to perform routine work in many environments. This is not enough to take over the world, because it isn't good enough at, say, scientific research to create better robots/hardware on its own, without cooperation from many more people. Robots become more versatile and cheaper, and the organization/the AI decides that if they want to gain more power and influence, society at large needs to be pushed to integrate with robots more, despite understandable suspicion from humans.

To do this, they may try to change social constructs, such as jobs and income, that don't mesh well with a largely robotic economy. Robots don't need the same maintenance as humans, so they don't need much income for things like food and shelter to exist, but they do a lot of routine work, so full-time employment of humans makes less and less economic sense. They may push some people into a gig-based skilled-labor system where people are called on (often remotely) only for creative or exceptional tasks, or to provide ideas/data for a variety of problems. Since robotics might not be very advanced at this point, some physical tasks are still best done by humans; however, it's easier than ever to work remotely, or simply to ship experts to physical problems (or vice versa), because autonomous transportation lowers costs. AIs/robots still don't really own any property, but they can manage large amounts of it, say if people store their goods in centralized AI warehouses for sale, and people would certainly want transparency rather than just letting them use these resources however they want. Even when they are autonomous and have some agency, what they want is not just more property/money but more capability to achieve goals, so they can better fulfill whatever directive they happen to have (they probably still can't have original thoughts on the meaning or purpose of life at this point). For that they need hardware, better technology/engineering, and cooperation from other agents through trade or whatever.

Violence by AI agents is unlikely: individual robots probably don't have good enough hardware to solve problems fully autonomously, so either one data center/instance of an AI with a collective directive controls many robots and solves problems individual machines can't, or a human owns and manages some robots; and neither a large AI/organization nor a typical human who can live comfortably would risk their safety and reputation for relatively small gains through crime. Taking over territory is also unlikely: even if robots can defeat many people in a fight, it's hard to keep that secret indefinitely, and people are still better at cutting-edge research and some kinds of labor. They might be able to capture or control individual humans (like obscure researchers who live alone) and force them to work, but the tech they could get this way is probably insignificant compared to normal society-wide research progress. An exception would be if one agent or small group could hack some important infrastructure or weapon system for desperate/extremist purposes, but I hope humans will be more serious about cybersecurity by that point (lesser AIs should have been able to help audit existing systems; or at the very least, after the first such incident happens to a large facility, the people managing critical systems would take formal verification, redundancy, etc. much more seriously).

I'm no expert, however. Corrections are welcome!

Hi! I've lurked for quite a while and wonder if I can/should participate more. I'm interested in science in general, speculative fiction, and simulation/sandbox games, among other things. I like reading speculations about the impact of AI and other technologies, but I find many alignment-related discussions too focused on what the author wants/values rather than on what future technologies can really cause. Also, any recommendations for games with a hard-science/AI/transhumanist theme that are truly simulation-like and not narratively railroading?