"Peer pressure" is a negatively-valanced term that could be phrased more neutrally as "social consequences". Seems to me it's good to think about what the social consequences of doing or not doing a thing will be (whether to "give in to peer pressure", and act in such a way as to get positive reactions from other people/avoid negative reactions, or not), but not to treat conforming when there is social pressure as inherently bad. It can lead to mob violence. Or, it can lead to a simplified social world which is easier for everyone to navigate, because you're doing things that have commonly understood meanings (think of teaching children to interact in a polite way). Or it can lead to great accomplishments, when someone internalizes whatever leads to status within their social hierarchy. Take away the social pressure to do things that impress other people, and lots of people might laze about doing the minimum required to have a nice life on the object-level, which in a society as affluent as the modern industrialized world is not much. There are of course other motivations for striving for internalized goals, but like, "people whose opinion I care about will be impressed" is one, and it does mean some good stuff gets done.
Someone who is literally immune to peer pressure, to the extent that social consequences don't enter their mind or get considered at all in their decision-making, will probably face great difficulties in navigating their environment and accomplishing anything. People will try fairly subtle social pressure tactics, those tactics will be disregarded as if they hadn't happened, and the person who tried them will either have to disengage from the not-peer-pressurable person, or escalate to blunter control measures that do register as something this person will pay attention to.
Even if I'm right that "is immune to peer pressure" isn't an ideal to aim for, I still acknowledge that being extremely sensitive to what others may think has downsides, and when taken to extremes you get "I can't go to the store because of social anxiety". A balanced approach would be to aim to avoid paranoia while recognizing social pressure when someone is attempting to apply it, without immediately reacting to it, and to think through how to respond on a case-by-case basis. This is a nuanced social skill. "This person is trying to blackmail me by threatening social exclusion through blacklisting, or by exposing socially damaging information about me, if I don't comply with what they want" requires a different response than "this person thinks my shirt looks tacky and their shirt looks cool; I note their sense of fashion, and how much importance they attach to clothing choices, and may choose to dress so as to get a particular reaction from them in future, without necessarily agreeing with/adopting/internalizing their perspective on the matter", which in turn is different from "everyone in this room disagrees with me about thing X (or at least says they disagree; preference falsification is a thing). Should I say it anyway?".
The key, I would think, is to raise people to understand what social pressure is and its various forms, and to understand that conformance is a choice they get to make rather than a thing they have to do or suffer social death. Choices have consequences, but the worst outcomes I've seen from peer pressure are when people don't want to do the thing they're being pressured toward, yet don't treat "just don't conform" as an option they can even consider and ask what the consequences would be.
We can't steer the future
What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we're 100% screwed, because I can't do that. But I do have some influence. A great deal of influence over my own actions (I'm resisting the temptation to go down a sidetrack about determinism, assuming you're modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on down to extremely little (but not zero) influence over humanity as a whole. I also note that you use the word "we", but I don't know who the "we" is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we can coordinate. Admittedly, we're not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to "we can't steer the future" is "not yet we can't, at least not very well"?
- it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”,
in part because this is so futile.
- if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
Agree, mostly. The steering I would aim for would be setting up systems wherein the locally self-interested and non-violent things people are incentivized to do have positive effects for humanity's future. In other words, setting up society such that individual and humanity-wide effects point in the same direction with respect to some notion of "goodness", rather than individual actions harming the group, or group actions harming or stifling the individual. We live in a society where we can collectively decide the rules of the game, which is a way of "steering" a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). Similarly for groups stifling individuals' ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku: very restricted in what design choices are permissible, but not impossible in principle. It seems worth trying, because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good. And the right local/individual move to influence the systems of which you are a part toward that state, as a cognitively-limited individual who can't hold the whole of a complex system in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying "in this instance, you're stifling the individual" and "in this instance, you're harming the group/long-term future" wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.
This is fun! I don't know which place I'm a citizen of, though, it just says "hello citizen"... I feel John Rawls would be pleased...
I think the root of the confusion here might be a matter of language. We haven't had copier technology, and so our language doesn't have a common-sense way of talking about different versions of ourselves. So when one asks "is this copy me?", it's easy to get confused. With versioning, it becomes clearer. I imagine once we've had copier technology for a while, we'll come up with linguistic conventions for talking about different versions of ourselves that aren't clunky, but let me suggest a clunky convention to at least get the point across:
I, as I am currently, am Myron.1. If I were copied, I would remain Myron.1, and the copy would be Myron.1.1. If two copies were made of me at that same instant, they would be Myron.1.1 and Myron.1.2. If a copy was later made of Myron.1.2, he would be Myron.1.2.1. And so on.
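Not necessary for the point, but here's a minimal sketch of how that convention might look if you wanted to mechanize it; the class name and details are my own invention, purely illustrative:

```python
# Illustrative sketch of the versioning convention described above.
# The class and naming scheme are made up for this comment, not a standard.

class PersonVersion:
    def __init__(self, name, version=(1,)):
        self.name = name
        self.version = version      # e.g. (1,), (1, 1), (1, 2, 1)
        self.copies_made = 0        # how many copies this version has spawned

    def copy(self):
        """The original keeps its version; the copy gets a new suffix."""
        self.copies_made += 1
        return PersonVersion(self.name, self.version + (self.copies_made,))

    def __repr__(self):
        return f"{self.name}.{'.'.join(str(v) for v in self.version)}"

myron = PersonVersion("Myron")   # Myron.1
copy_a = myron.copy()            # Myron.1.1
copy_b = myron.copy()            # Myron.1.2
later = copy_b.copy()            # Myron.1.2.1
print(myron, copy_a, copy_b, later)
```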
With that convention in mind, I would answer the questions you pose up top as follows:
Rather, I assume xlr8harder cares about more substantive questions like:
I'll add a fourth, because you've discussed it:
4. After the scanning and copying process, will I feel like me? Yep. But if the copying process was nondestructive, you will be able to look over and see that there is a copy of you. There will be a fact of the matter about who entered the copying machine and how the second copy was made, a point in time before which the second copy did not exist and after which it did, so one of you will be Rob.1, and the other will be Rob.1.1. It might not be easy to tell which version you are in the instant after the copy is made, but "the copy is the original" will be a statement that both you and the other version evaluate as logically false, and the same goes for "both of you are the same person". "Both of you are you", once we have linguistic conventions around versioning, will be a confusing and ambiguous statement, and people will ask you what you mean by that.
And another interesting one:
5. After the scanning process, if it's destructive, if I'm the surviving copy, should I consider the destruction of the original to be bad? I mean, yeah, a person was killed. It might not be you.currentversion, exactly, but it's where you came from, so probably you feel some kinship with that person. In the same way I would feel a loss if a brother I grew up with was killed, I'd feel a loss if a past version of me was killed. We could have gone through life together with lots of shared history in a way very few people can, and now we can't.
Ok. After I posted my first answer, I thought of something that would be really quite valuable during the turbulent transition: understanding what's going on and translating it for people who are less able to keep up, because they lack the background knowledge or temperament. While it will be the case after a certain point that AI can give people reliable information, there will be a segment of the population that will want to hear the interpretation of a trustworthy human. Also, the cognitive flexibility to deal with a complex and rapidly changing environment, and to provide advice to people based on their specific circumstances, will be a comparative advantage that lasts longer than most.
Acting as a consultant to help others navigate the transition seems promising, particularly if it incorporates other expertise you have. There may be a better generic advice-giver in the world, and you're not likely to be able to compete with Zvi in terms of synthesizing and summarizing information, but if you are, for example, well enough versed in the current situation, plus you have some professional specialty, plus you have local knowledge of the laws or business conditions in your geographic area, you could be the best consultant in the world with that combination of skills.
Also, generic advice for turbulent times: learn to live on as little as possible, stay flexible and willing to move, and save up as much as you can. That way you have some capital to deploy when it could be very useful (if interest rates go sky-high because suddenly everyone wants money to build chip fabs or mine metals for robots or something, having some extra cash pre-transition could mean having plenty post-transition), and you also have some free cash in case things go sideways and a well-placed wad of cash can get you out of a jam on short notice, let you quit your job and pivot, or do something else that has a short-term financial cost but that you think is good under the circumstances. Basically, make yourself more resilient, knowing turbulence is coming, and prepare to help others navigate the situation. Make friends and broaden your social network, so that you can call on them if needed and vice versa.
I think my answer would depend on your answer to "Why do you want a job?". When AI and robotics have advanced to the point where all physical and intellectual tasks can be done better by AI/robots, we've reached a situation where things change very rapidly, and "what is a safe line of work long term?" is hard to answer because we could see rapid changes over a period of a few years, and who knows what the end-state will look like? Also, any line of work which at time X is economically valuable for humans to do will carry a lot of built-in incentive to make it automatable, so "what can humans make money at because people value the labour?" could change rapidly. For example, you suggest that sex work is one possibility, but if you have 100,000 genius-level AIs devising the best possible sex-robot, pretty quickly they'd come up with something where the people who are currently paying for sex would feel they're getting better value for money out of the sex-robot than out of a human they could pay for sex. Of course people will still want to have sex with people they like who like them back, but that isn't typically done for money.
We'll live in a world where the economy is much larger and people are much richer, so subsistence isn't a concern, provided there are decent redistributive mechanisms of some sort in place. Let's say we keep the tax rate the same but GDP has gone up 1,000x - then tax revenue has gone up 1,000x, and UBI is easy (rough arithmetic sketched below). If we can't coordinate to get a UBI in place, it would still only take 1 in 1,000 people to have somehow lucked into resources and say "I wish everyone had a decent standard of living", and they could set up a charitable organization that gave out free food and shelter with the resources under their command. So you won't need a job. Meaning, any work people got other people to do for them would have to pay an awful lot if it was something a worker didn't intrinsically want to do (if someone wanted a ditch dug by humans who didn't like digging ditches, they'd have to make those humans a financial offer that seemed worthwhile to people whose needs are already met - how much would you have to pay a billionaire to dig you a ditch? There's a price, but it's probably a lot). Otherwise, you can just do whatever "productive" thing you want because you want to, you enjoy the challenge, it's a growth experience for you, or whatever, and it likely pays 0, but that doesn't matter because you value it for reasons other than the pay.
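Here's the back-of-the-envelope version of that claim. The 1,000x multiplier comes from the scenario above; the baseline dollar figures and tax share are rough placeholder assumptions of mine, not data:

```python
# Back-of-the-envelope sketch of the "GDP up 1,000x makes UBI easy" claim.
# Baseline figures are rough assumptions, only the multiplier is from the scenario.

baseline_gdp_per_capita = 70_000   # assumed present-day figure, today's dollars
tax_share = 0.30                   # assumed overall tax take as a fraction of GDP
growth_multiplier = 1_000          # the hypothetical from the scenario above

future_gdp_per_capita = baseline_gdp_per_capita * growth_multiplier
future_tax_revenue_per_capita = future_gdp_per_capita * tax_share

generous_ubi = 100_000             # an arbitrarily generous per-person UBI
share_of_revenue = generous_ubi / future_tax_revenue_per_capita

print(f"Tax revenue per capita: ${future_tax_revenue_per_capita:,.0f}")
print(f"A ${generous_ubi:,} UBI would use {share_of_revenue:.2%} of it")
# -> roughly $21,000,000 of revenue per capita; the UBI is well under 1% of it.
```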
I guess it could feel like a status or dignity thing, to know that other people value the things you can do enough to keep you alive with the products of your own labour? And so you're like "nah, I don't want the UBI, I want to earn my living". In that case, keep in mind that "enough to keep you alive with the products of your own labour" will be very little, as a percentage of people's income. So you can busk on a street corner, and people can occasionally throw the equivalent of a few hundred thousand of today's dollars of purchasing power into your hat because you made a noise they liked, in the same way that I can put $5 down for a busker now because that amount isn't particularly significant to me. Then you're set for a few years at least, instead of being able to get yourself a cup of coffee as is the case now.
Or, do you want to make a significant amount of money, such that you can do things most people can't do because you have more money than them? In that case, I think you'd need to be pushing the frontier somehow - maybe investing (with AI guidance, or not) instead of spending in non-investy ways would do it. If the economy is doubling every few years, and you decide to live on a small percentage of your available funds and invest the rest, that should compound to a huge sum within a short time, sufficient for you to, I dunno, play a key role in building the first example of whatever new technology the AI has invented recently which you think is neat, and get into the history books?
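To give a feel for how fast that compounding goes, here's a quick sketch under stated assumptions (a doubling time of three years, a fifteen-year horizon, and returns that merely track the overall economy; all of those numbers are mine, not predictions):

```python
# Quick sketch of how savings compound if returns track an economy
# that doubles every few years. Doubling time and horizon are assumptions.

doubling_time_years = 3        # assumed doubling time of the economy
years = 15                     # assumed investment horizon
initial_savings = 1.0          # arbitrary units, e.g. one year of spending

growth = 2 ** (years / doubling_time_years)
print(f"After {years} years: {initial_savings * growth:.0f}x the starting sum")
# -> 2**5 = 32x; over a 30-year horizon it would be 2**10 = 1024x.
```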
Or do you just want to do something that other people value? There will be plenty of opportunities to do that. When you're not constrained by needing to do something to survive, you could, if you wanted, make it your goal to give your friends really good and thoughtful gifts - to do things for them that they really appreciate, which, yes, they could probably train an AI agent to do, but it's nice that you care enough to do it yourself; the fact that you put in the thought and effort matters. And so your relationships with them are strengthened, and they appreciate you, and you feel good about your efforts, and that's your life.
Of course, there are a lot of problems in the world that won't magically get fixed overnight, even if we create genius-level AIs and highly dexterous robots and for whatever reason that transition causes zero unexpected problems. Making it so that everybody's life, worldwide, is at least OK, and we don't cause a bunch of nonhuman animal suffering, is a heavy lift from where we are, even with AI assistance. So if your goal is to make the lives of the people around you better, it'll be a while before you genuinely struggle to find a problem worth solving because everything worthwhile has already been done. If everything goes very well, we might get there within the natural un-extended lifetimes of people alive today, but there will be work to do for at least a decade or two even in the best case that doesn't involve a total loss of human control over the future, I'd think. The only way all problems get solved in the short term and you're really stuck for something worthwhile to do involves a loss of human control over the situation, and that loss of control somehow going well instead of very badly.
When a person isn't of a sound mind, they are still expected to maintain their responsibility but they may simply be unwell. Unable to be responsible for themselves or their actions.
We have various ways of dealing with this. Contracts are not enforceable (or can be deemed unenforceable) against people who were not of sound mind when entering into them, meaning if you are contracting with someone, you have an incentive to make sure they are of sound mind at the time. There are a bunch of bars you have to clear in terms of mental capacity and non-coercion and in some cases having obtained independent legal advice, in order to be able to enter into a valid contract, depending on the context.

As well, if someone is unwell but self-aware enough to know something is wrong, they have the option of granting someone else power of attorney, if they can demonstrate enough capacity to make that decision. If someone is unwell but not aware of the extent of their incapacity, it is possible for a relative to obtain power of attorney or guardianship. If a medical professional deems someone not to have capacity but there is no one able or willing to take on decision making authority for that person, many polities have something equivalent to a Public Trustee, which is a government agency empowered to manage affairs on their behalf. If you show up in hospital and you are not well enough to communicate your preferences and/or a medical professional believes you are not of sound mind, they may make medical decisions based on what they believe to be in your best interest.
These may not be perfect solutions (far from it) but "you're drunk" or "you're a minor" are not the only times when society understands that people may not have capacity to make their own decisions, and puts in place measures to get good decisions made on behalf of those who lack capacity.
Of course (and possibly, correctly) many of these measures require clear demonstration of a fairly extreme/severe lack of capacity, before they can be used, so there's a gap for those who are adult in age but childish in outlook.
Just during lectures or work/volunteer organization meetings. I don't tend to zone out much during 1:1 or very small group conversations, and if I do, asking someone to repeat what they said only inconveniences one or a few people, who would also be inconvenienced if I couldn't participate in the conversation because I'd stopped following, so I just ask for clarification. I find zoning out happens most often when no response is required from me for an extended period of time.
I occasionally do feel a little qualmy, but whenever I have asked, the answer has always been yes, and I keep the recordings confidential, reasoning that I do have a level of permission to hear/know the information, and the main concern people will have is that it not be shared in ways they didn't anticipate.
An observation: in my experience, when talking past each other is harder to resolve, it's often because one or both parties think the other's position is morally wrong. This appears to be the case in the example in your post, and a contributing factor in some of the conversations in the comments. If you're on the "a betrayal has occurred" side, it's difficult to process "but this person's perspective is that no betrayal has occurred", and attempts to explain that perspective may come across as trying to excuse the betrayal, rather than as trying to explain a different perspective in which the betrayal doesn't exist. The betrayal, from the perspective of those who see it, is viewed as a matter of indisputable fact, not just one person's perspective which may or may not be shared.
Many differences of perspective can be resolved with a "oh, you think this and I think that, I get it, we misunderstood each other but now we don't, problem solved", but "oh, you think your betrayal is nonexistent? I get it now and am fine with that." is unlikely. Step 1 when communicating across this sort of difference is to communicate to the person who doesn't see the moral wrong, that from the other person's perspective, the issue is a moral one. Once that has been successfully communicated and the situation de-escalated, the other perspective where there was no moral issue at stake may be more likely to be communicable to the person who didn't hold it originally.
It's a weird thing about humans, how our thinking can flip to a different mode when we perceive a moral wrong to have occurred, and how, when we're in the "a morality-relevant thing just happened/someone did something wrong" mode, it is hard to take the "but other people may not see things the way I do" step. I could mumble something about this having evolutionary roots to do with coordination among groups, but I don't have a good story for why we are this way, I just know we are. And: it doesn't require a traumatic past experience with betrayal to flip into the moral mode when you see a betrayal happening, and then to react poorly and (from an outside perspective) unreasonably if other people don't see things your way. I, for one, was raised to believe that when my "conscience" is activated by a moral wrong, that "conscience" is universal and every good person would see things the same way. This is factually incorrect, but at times it feels very strongly like it should be right, particularly when I'm having a reaction to something I see as wrong.