Myron Hedderson

We can't steer the future

What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we're 100% screwed because I can't do that. But I do have some influence. A great deal of influence over my own actions (I'm resisting the temptation to go down a sidetrack about determinism, assuming you're modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on until very extremely little (but not 0) influence over humanity as a whole. I also note that you use the word "we", but I don't know who the "we" is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we collectively can coordinate. Admittedly, we're not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to "we can't steer the future" is "not yet we can't, at least not very well"?

  • it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
    • if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.

Agree, mostly. The steering I would aim for would be setting up systems wherein the locally self-interested and non-violent things people are incentivized to do have positive effects for humanity's future. In other words, setting up society such that individual and humanity-wide effects point in the same direction with respect to some notion of "goodness", rather than individual actions harming the group or group actions harming or stifling the individual. We live in a society where we can collectively decide the rules of the game, which is a way of "steering" a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough, then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). The same goes for groups stifling individuals' ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku - very restricted in what design choices are permissible, but not impossible in principle. It seems worth trying, because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good. And the right local/individual move to influence the systems you are part of towards that state, as a cognitively-limited individual who can't hold whole complex systems in their mind and accurately predict the effect of proposed changes far into the future, might be as simple as saying "in this instance, you're stifling the individual" or "in this instance, you're harming the group/long-term future" wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.

This is fun! I don't know which place I'm a citizen of, though; it just says "hello citizen"... I feel John Rawls would be pleased...

I think the root of the confusion here might be a matter of language. We haven't had copier technology, and so our language doesn't have a common-sense way of talking about different versions of ourselves. So when one asks "is this copy me?", it's easy to get confused. With versioning, it becomes clearer. I imagine that once we've had copier technology for a while, we'll come up with linguistic conventions for talking about different versions of ourselves that aren't clunky, but let me suggest a clunky convention to at least get the point across:

I, as I am currently, am Myron.1. If I were copied, I would remain Myron.1, and the copy would be Myron.1.1. If two copies were made of me at that same instant, they would be Myron.1.1 and Myron.1.2. If a copy was later made of Myron.1.2, he would be Myron.1.2.1. And so on.
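
To make the convention concrete, here's a minimal sketch in code (the `Person` class and `copy` method are hypothetical names of my own, purely to illustrate the numbering rule):

```python
# Minimal sketch of the version-naming convention described above.
# The class and method names are hypothetical, for illustration only.
class Person:
    def __init__(self, version: str):
        self.version = version   # e.g. "Myron.1"
        self.copies_made = 0     # direct copies this version has produced

    def copy(self) -> "Person":
        # The original keeps its version; each new copy appends the next index.
        self.copies_made += 1
        return Person(f"{self.version}.{self.copies_made}")

myron = Person("Myron.1")
a = myron.copy()   # Myron.1.1
b = myron.copy()   # Myron.1.2
c = b.copy()       # Myron.1.2.1
print(a.version, b.version, c.version)  # Myron.1.1 Myron.1.2 Myron.1.2.1
```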

With that convention in mind, I would answer the questions you pose up top as follows:

Rather, I assume xlr8harder cares about more substantive questions like:

  1. If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? No. Maybe similarly to how I care about a close relative.
  2. Should I anticipate experiencing what my upload experiences? No. I should anticipate experiencing a continuation of Myron.1's existence if the process is nondestructive, or the end of Myron.1's existence if it is destructive. Either way, Myron.1.1's experiences will be separate and distinct from Myron.1's.
  3. If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure? Depends. Sometimes suicide is OK, and you could value the continuation of a mind like your own even after your own mind goes away. Or not. That's a values question, not a fact question.

I'll add a fourth, because you've discussed it:

4. After the scanning and copying process, will I feel like me? Yep. But if the copying process was nondestructive, you will be able to look out and see that there is a copy of you. There will be a fact of the matter about who entered the copying machine and how the second copy was made - a point in time before which the second copy did not exist and after which it did - so one of you will be Rob.1 and the other will be Rob.1.1. It might not be easy to tell which version you are in the instant after the copy is made, but "the copy is the original" will be a statement that both you and the other version evaluate as logically false, and the same goes for "both of you are the same person". Once we have linguistic conventions around versioning, "both of you are you" will be a confusing and ambiguous statement, and people will ask you what you mean by it.

And another interesting one: 

5. After the scanning process, if it's destructive and I'm the surviving copy, should I consider the destruction of the original to be bad? I mean, yeah, a person was killed. It might not be you.currentversion, exactly, but it's where you came from, so you probably feel some kinship with that person. In the same way I would feel a loss if a brother I grew up with was killed, I'd feel a loss if a past version of me was killed. We could have gone through life together with lots of shared history in a way very few people can, and now we can't.

OK. After I posted my first answer, I thought of one thing that would be quite valuable during the turbulent transition: understanding what's going on and translating it for people who are less able to keep up, because they lack the background knowledge or the temperament. While it will be the case after a certain point that AI can give people reliable information, a segment of the population will want to hear the interpretation of a trustworthy human. Also, the cognitive flexibility to deal with a complex and rapidly changing environment, and to advise people based on their specific circumstances, will be a comparative advantage that lasts longer than most.

So: acting as a consultant to help others navigate the transition, particularly if that incorporates other expertise you have. There may be a better generic advice-giver in the world, and you're not likely to be able to compete with Zvi in terms of synthesizing and summarizing information. But if you're, for example, well enough versed in the current situation, plus you have some professional specialty, plus you have local knowledge of the laws or business conditions in your geographic area, you could be the best consultant in the world with that combination of skills.

Also, generic advice for turbulent times: learn to live on as little as possible, stay flexible and willing to move, and save up as much as you can, so that you have some capital to deploy when that could be very useful. (If interest rates go sky high because suddenly everyone wants money to build chip fabs or mine metals for robots or something, having some extra cash pre-transition could mean having plenty post-transition.) Free cash also helps if things go sideways: a well-placed wad of cash can get you out of a jam on short notice, let you quit your job and pivot, or do something else that has a short-term financial cost but that you think is good under the circumstances. Basically, make yourself more resilient, knowing turbulence is coming, and prepare to help others navigate the situation. Make friends and broaden your social network, so that you can call on them if needed and vice versa.

Answer by Myron Hedderson

I think my answer would depend on your answer to "Why do you want a job?". When AI and robotics have advanced to the point where all physical and intellectual tasks can be done better by AI/robots, we've reached a situation where things change very rapidly, and "what is a safe line of work long term?" is hard to answer, because we could see rapid changes over a period of a few years, and who knows what the end state will look like? Also, any line of work which at time X is economically valuable for humans to do will carry a lot of built-in incentive to make it automatable, so "what can humans make money at because people value the labour?" could change rapidly. For example, you suggest that sex work is one possibility, but if you have 100,000 genius-level AIs devising the best possible sex robot, pretty quickly they'd come up with something where the people who currently pay for sex would feel they're getting better value for money from the sex robot than from a human they could pay. Of course people will still want to have sex with people they like who like them back, but that isn't typically done for money.

We'll live in a world where the economy is much larger and people are much richer, so subsistence isn't a concern, provided that there are decent redistributive mechanisms of some sort in place. Say we keep the tax rate the same but GDP has gone up 1,000x - then tax revenue has also gone up 1,000x, and UBI is easy. Even if we can't coordinate to get a UBI in place, it would take only 1 in 1,000 people to have lucked into resources, say "I wish everyone had a decent standard of living", and set up a charitable organization that gave out free food and shelter with the resources under their command. So you won't need a job. Meaning: any work people got other people to do for them would have to pay an awful lot if it was something a worker didn't intrinsically want to do (if someone wanted a ditch dug by humans who didn't like digging ditches, they'd have to make those humans a financial offer that seemed worthwhile when all of their needs are already met - how much would you have to pay a billionaire to dig you a ditch? There's a price, but it's probably a lot). Otherwise, you can just do whatever "productive" thing you want because you want to, you enjoy the challenge, or it's a growth experience for you, and it likely pays 0, but that doesn't matter, because you value it for reasons other than the pay.
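
To make that arithmetic concrete, here's a toy calculation (the per-capita GDP and tax-rate numbers are illustrative assumptions, not forecasts):

```python
# Toy UBI arithmetic under assumed, illustrative numbers.
gdp_per_capita_today = 60_000   # assumed starting per-capita GDP, today's dollars
growth_multiple = 1_000         # the 1,000x growth from the paragraph above
tax_rate = 0.25                 # assumed flat effective tax rate, held constant

revenue_per_capita = gdp_per_capita_today * growth_multiple * tax_rate
print(f"Tax revenue per person per year: ${revenue_per_capita:,.0f}")
# -> Tax revenue per person per year: $15,000,000
# Even a small fraction of this funds a generous UBI.
```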

I guess it could feel like a status or dignity thing, to know that other people value the things you can do enough to keep you alive with the products of your own labour? And so you're like "nah, I don't want the UBI, I want to earn my living". In that case, keep in mind that "enough to keep you alive with the products of your own labour" will be very little, as a percentage of people's income. So you can busk on a street corner, and people can occasionally throw the equivalent of a few hundred thousand of today's dollars of purchasing power into your hat because you made a noise they liked, in the same way that I can put $5 down for a busker now because that amount of money isn't particularly significant to me. And then you're set for a few years at least, instead of being able to get yourself a cup of coffee as is the case now.

Or, do you want to make a significant amount of money, such that you can do things most people can't do because you have more money than them? In that case, I think you'd need to be pushing the frontier somehow - maybe investing (with AI guidance, or not) instead of spending in non-investy ways would do it. If the economy is doubling every few years, and you decide to live on a small percentage of your available funds and invest the rest, that should compound to a huge sum within a short time, sufficient for you to, I dunno, play a key role in building the first example of whatever new technology the AI has invented recently which you think is neat, and get into the history books?
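
As a rough illustration of that compounding (the doubling time and horizon are assumptions, not predictions):

```python
# Rough compounding sketch: if the economy (and broad investment returns)
# double every `doubling_time_years`, each invested dollar grows as below.
# Both figures are assumptions for illustration only.
doubling_time_years = 3
horizon_years = 15

annual_growth = 2 ** (1 / doubling_time_years)   # ~1.26, i.e. ~26% per year
multiple = annual_growth ** horizon_years        # 2 ** (15 / 3) = 32
print(f"Each invested dollar grows ~{multiple:.0f}x over {horizon_years} years")
```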

Or do you just want to do something that other people value? There will be plenty of opportunities to do that. When you're not constrained by a need to do something to survive, you could, if you wanted, make it your goal to give your friends really good and thoughtful gifts - to do things for them that they really appreciate. Yes, they could probably train an AI agent to do those things, but it's nice that you care enough to do them; the fact that you put in the thought and effort matters. And so your relationships with them are strengthened, and they appreciate you, and you feel good about your efforts, and that's your life.

Of course, there are a lot of problems in the world that won't magically get fixed overnight, even if we create genius-level AIs and highly dexterous robots and that transition somehow causes zero unexpected problems. Making it so that everybody's life, worldwide, is at least OK, and that we don't cause a bunch of nonhuman animal suffering, is a heavy lift from where we are, even with AI assistance. So if your goal is to make the lives of the people around you better, it'll be a while before you struggle to find a problem worth solving because everything worthwhile has already been done, I'd think. If everything goes very well, we might get there within the natural, un-extended lifetimes of people alive today, but there will be work to do for at least a decade or two even in the best case that doesn't involve a total loss of human control over the future. The only way all problems get solved in the short term, leaving you really stuck for something worthwhile to do, involves a loss of human control over the situation, and that loss of control somehow going well instead of very badly.

When a person isn't of a sound mind, they are still expected to maintain their responsibility but they may simply be unwell. Unable to be responsible for themselves or their actions. 

We have various ways of dealing with this:

  • Contracts are not enforceable (or can be deemed unenforceable) against people who were not of sound mind when entering into them, meaning that if you are contracting with someone, you have an incentive to make sure they are of sound mind at the time. Depending on the context, there are a number of bars to clear in terms of mental capacity, non-coercion, and in some cases having obtained independent legal advice, in order to enter into a valid contract.
  • If someone is unwell but self-aware enough to know something is wrong, they have the option of granting someone else power of attorney, if they can demonstrate enough capacity to make that decision.
  • If someone is unwell but not aware of the extent of their incapacity, it is possible for a relative to obtain power of attorney or guardianship.
  • If a medical professional deems someone not to have capacity but no one is able or willing to take on decision-making authority for that person, many polities have something equivalent to a Public Trustee: a government agency empowered to manage affairs on their behalf.
  • If you show up in hospital and are not well enough to communicate your preferences, and/or a medical professional believes you are not of sound mind, they may make medical decisions based on what they believe to be in your best interest.

These may not be perfect solutions (far from it) but "you're drunk" or "you're a minor" are not the only times when society understands that people may not have capacity to make their own decisions, and puts in place measures to get good decisions made on behalf of those who lack capacity.

Of course (and possibly correctly), many of these measures require clear demonstration of a fairly severe lack of capacity before they can be used, so there's a gap for those who are adult in age but childish in outlook.

Just during lectures or work/volunteer organization meetings. I don't tend to zone out much during 1:1 or very small group conversations, and if I do, asking someone to repeat what they said only inconveniences one or a few people - who would also be inconvenienced by my not being able to participate because I've stopped following - so I just ask for clarification. I find zoning out happens most often when no response is required from me for an extended period of time.

I occasionally do feel a little qualmy, but whenever I have asked, the answer has always been yes, and I keep the recordings confidential, reasoning that I do have a level of permission to hear/know the information, and that the main concern people will have is that it not be shared in ways they didn't anticipate.

My solution is to use the voice recorder app on my phone, so I can review any points I missed after the fact, and to note the timestamps where I zoned out so that I don't have to review the whole recording. If you have a wristwatch, you can note the watch time rather than the recorder time and sync them up later, which makes it not very obvious.

It would be cool if we could get more than 1% of the working population into the top 1% of earners, for sure. But we cannot. The question then becomes: how much of what a top-1% earner makes is because they are productive in an absolute sense (they generate $x in revenue for their employer/business), versus being paid to them because they are the (relative) best at what they do and so have more bargaining power?

Increasing people's productivity will likely raise earnings. Helping people get into the top 1% relative to others just means someone else, who counterfactually would have been there, is not in the top 1%. Your post conflates the two a bit, and doesn't distinguish between returns to relative position and returns to absolute productivity, since it measures a "home run" as getting into the top x% rather than as achieving a specified earnings level.

If I look past that, I agree with the ideas you present for increasing productivity and for making sure high-potential individuals achieve more of their potential. But... I am somewhat leery of having a government bureaucracy decide who is high-potential and invest only in them. Given the returns on each high-potential individual and the relatively small cost of making sure everyone has access to the things they would need to realize their potential, it might make more sense to just invest in everyone, as a strategy aimed at not missing any home runs.
