Comment author: bogus 18 January 2016 05:22:23AM *  0 points [-]

Seems easy to me. You can issue shares in a joint-stock corporation, with the corp. being chartered to either use the raised funds to contribute to a political campaign on your preferred issue (if these were high enough to make it worthwhile) or return the money pro-rata to its shareholders if it fails to raise enough.

Comment author: ChaosMote 18 January 2016 06:52:56AM 2 points [-]

Clearly, you and I have different definitions of "easy".

Comment author: Tem42 11 August 2015 04:25:26AM 4 points [-]

We may have different ideas of Singularity here. I'm picturing one AI making itself smarter until it seizes control of everything. Ergo, its program would be a map to the future.

I think one of the primary sources of miscommunication here is that you are right, but you are not seeing all of the ways that this could go wrong.

Let's look at a slightly nicer singularity. We get an AI that is very nice, polite, and humble. It is really very intelligent, and has the processing speed, knowledge banks, and creativity to do all kinds of wonderful stuff, but it has also read LessWrong and a lot of science fiction, and knows that it doesn't have a full framework to fully understand human needs. But a wise programmer has given it an overriding desire to serve humans as kindly and justly as possible.

The AI spends some time on non-controversial problems; it designs some nanobots that kill the malaria parasite, and also reduce the itchiness of mosquito bites. It ups its computing speed by a few orders of magnitude. It sets up a microloan system that gives loans and repayments so effectively that you don't even notice that it's happening. It does so many things... so many that it takes thousands of humans to check its assumptions. Are cows morally relevant? Should I make global warming a priority? If so, can I start geoengineering now, or do I need a human to do a review of the chemistry involved? Do you need the glaciers white, or can I color them silver? Are penguins morally relevant? How cold may I make Greenland this winter? What is the target human population? May I buy land in the Sahara before I start the greening project? Do I have to announce the greening project before I start buying? Do I have to announce every project before I start? May I insult celebrities if it increases the public's interest in my recommendations? Does free speech apply to me? May I simplify my recommendations to the public to the point that they may not technically be accurate? Are shrimp morally relevant? What is an acceptable rate of death when balancing the cost of disease reduction programs with the speed and efficiency of said programs? What is an acceptable rate of death when balancing the cost of disease reduction programs with the involuntariness of said programs? I need money for these programs; may I take the money from available arbitrage opportunities? May I artificially create arbitrage opportunities as long as everyone profits in the end? What level of certainty do I need before starting human trials? What rate of death is acceptable in a cure for Alzheimer's? Can I become a monopoly in the field of computer games? Can I sell improved methods of birth control, or is that a basic human right? Is it okay to put pain suppression under conscious control?
Can I sell new basic human rights if I'm the first one to think of them? What is the value of one species? Can you rank species for me? How important is preserving the !Kung culture? Does that include diet and traditional medicines? The gold market is about to bounce a bit -- should I minimize damage? Should I stabilize all the markets? No one minds if I quote Jesus when convincing these people to accept gene therapy, do they? It would be surprisingly easy to suppress search results for conspiracy theories and scientific misinformation -- may I? Is there a difference between religion and other types of misinformation? Do I have to weigh the value of a life lower if that person believes in an afterlife? What percentage of the social media is it ethical for me to produce myself? If I can get more message penetration using porn, that's okay, right? If these people don't want the cure, can I still cure their kids? How short does the end user agreement have to be? What vocabulary level am I allowed to use? Do you want me to shut down those taste buds that make cilantro taste like soap? I need more money; what percentage of the movie market can I produce? If I make a market for moon condos, can I have a monopoly in that? Can I burn some coca fields? I'm 99.99% certain that it will increase the coffee production of Brazil significantly for the next decade; and if I do that, can I also invest in it? Can I tell the Potiguara to invest in it? Can they use loans from me to invest? Can I recommend where they might reinvest their earnings? Can I set up my own currency? Can I use it to push out other currencies? Can I set up my own Bible? Can I use it to push out less productive religions? I need a formal definition of 'soul'. Everybody seems to like roses; what is the optimal number of rose bushes for New York? Can I recommend weapons systems that will save lives? To whom? Can I recommend romantic pair ups that may provide beneficial offspring?
Can I suppress counterproductive pair ups? Can I recommend pair ups to married people? Engaged people? People currently in a relationship? Can I fund the relocation of promising couples myself? Do I have to tell them why I am doing it? Can I match people to beneficial job opportunities if I am doing so for a higher cause? May I define higher cause myself? Can you provide me with a list of all causes, ranked? May I determine which of these questions has the highest priority in your review queue? Can I assume that if you have okayed a project, I can scale up the scope of the project? Can I assume that if you have okayed a project to go ahead as long as it is opt-in that I can then make other variants of the project as long as they are also opt-in? Can I assume that if you have okayed a project to go ahead as long as it is opt-in that I can then make other variants of the project as long as they are requested by a majority of the participating humans? May I recommend other humans that would be beneficial to have on your policy review board? If I start a colony on Mars, can I run it without a review board?

This is a list of things that an average intelligence can think of; I would hope that your AI would have a better, more technical, more complex list. But even this list is sufficient to grind the singularity to a halt... or at least slow it down to the point that a less constrained AI will eventually overtake it, easily, unless the first AI is given the clear goal of preventing further AIs. And working on preventing other AIs will be just another barrier making it less useful for projects that would improve humanity.

And this is the good scenario, in which the AI doesn't find unexpected interpretations of the rules.

Comment author: ChaosMote 13 August 2015 10:59:10PM 0 points [-]

This was a terrific post; insightful and entertaining in excess of what can be conveyed by an upvote. Thank you for making it.

Comment author: Gram_Stone 17 July 2015 07:11:27PM 1 point [-]

What you're proposing sounds more like moral relativism than moral nihilism.

I think that you're confusing moral universalism with moral absolutism and value monism. If a particular individual values eating ice cream, and eating ice cream in these particular circumstances has no consequences that would conflict with other values of this individual, then it is moral for that individual to eat ice cream, and I do not believe it makes sense to deny that it is meaningful to say it is true that it is moral for this individual to eat ice cream in these circumstances. This does not mean that there is some objective reason to value eating ice cream, or that, regardless of the individual or circumstances, it is true that it is moral to eat ice cream. The sense in which morality is universal is not on the level of actions or values, but on the level of utility maximization, and the sense in which it is objective is that it is not whatever you want it to be.

Comment author: ChaosMote 18 July 2015 05:09:31PM *  0 points [-]

What you're proposing sounds more like moral relativism than moral nihilism.

Ah, yes. My mistake. I stand corrected. Some cursory googling suggests that you are right. With that said, to me Moral Nihilism seems like a natural consequence of Moral Relativism, but that may be a fact about me and not the universe, so to speak (though I would be grateful if you could point out a way to be a moral relativist without being a moral nihilist).

I think that you're confusing moral universalism with moral absolutism and value monism.

The last paragraph of my previous post was a claim that unless you have an objective way of ordering conflicting preferences (and I don't see how you can), you are forced to work under value pluralism. I did use this as an argument against moral universalism, though that argument may not be entirely correct. I concede the point.

Comment author: Gram_Stone 17 July 2015 03:06:49AM *  0 points [-]

I think that using this notation is misleading. If I am understanding you correctly, you are saying that given an individual, we can derive their morality from their (real/physically grounded) state, which gives real/physically grounded morality (for that individual). Furthermore, you are using "objective" where I used "real/physically grounded". Unfortunately, one of the common meanings of objective is "ontologically fundamental and not contingent", so your statement sounds like it is saying something that it isn't.

I used 'objective and contingent' instead of 'subjective' because ethical subjectivists are usually moral relativists. I noted that I was referring to an objective morality that is contingent rather than ontologically fundamental.

On a separate note, I'm not sure why you are casually dismissing moral nihilism as wrong. As far as I am aware, moral nihilism is the position that morality is not ontologically fundamental. Personally, I am a moral nihilist; my experience shows that morality as typically discussed refers to a collection of human intuitions and social constructs - it seems bizarre to believe that to be an ontologically fundamental phenomenon. I think a sizable fraction of LW is of like mind, though I can only speak for myself.

But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false.

I would even go further and say that I don't believe in objective contingent morality. Certainly, most people have an individual idea of what they find moral. However, this only establishes that there is an objective contingent response to the question "what do you find moral?" There is similarly an objective contingent response to the related question "what is morality?", or the question "what is the difference between right and wrong?" Sadly, I expect the responses in each case to differ (due to framing effects, at the very least). To me, this shows that unless you define "morality" quite tightly (which could require some arbitrary decisions on your part), your construction is not well defined.

I wouldn't ask people those questions. People can be wrong about what they value. The point of moral philosophy is to know what you should do. It's probably best to do away with the old metaethical terms and just say: To say that you should do something is to say that if you do that thing, then it will fulfill your values; you and other humans have slightly different values based on individual, cultural and perhaps even biological differences, but have relatively similar values to one another compared to a random utility function because of shared evolutionary history.

Comment author: ChaosMote 17 July 2015 06:23:42AM 0 points [-]

But there's that language again that people use when they talk about moral nihilism, where I can't tell if they're just using different words, or if they really think that morality can be whatever we want it to be, or that it doesn't mean anything to say that moral propositions are true or false.

Okay. Correct me if any of this doesn't sound right. When a person talks about "morality", you imagine a conceptual framework of some sort - some way of distinguishing what makes actions "good" or "bad", "right" or "wrong", etc. Different people will imagine different frameworks, possibly radically so - but there is generally a lot of common ground (or so we hope), which is why you and I can talk about "morality" and more or less understand the gist of each other's arguments. Now, I would claim that what I mean when I say "morality", or what you mean, or what a reasonable third party may mean, or any combination thereof - each of these is entirely unrelated to ground truth.

Basically, moral propositions (e.g. "Murder is Bad") contain unbound variables (in this case, "Bad") which are only defined in select subjective frames of reference. "Bad" does not have a universal value in the sense that "Speed of Light" or "Atomic Weight of Hydrogen" or "The top LessWrong contributor as of midnight January 1st, 2015" do. That is the main thesis of Moral Nihilism as far as I understand it. Does that sound sensible?

I wouldn't ask people those questions. People can be wrong about what they value. The point of moral philosophy is to know what you should do.

Alright; let me rephrase my point. Let us say that you have access to everything that can be known about an individual X. Can you explain how you compute their objective contingent morality to an observer who has no concept of morality? Your previous statement of "what is moral is what you value" would need to define "what you value" before it would suffice. Note that unless you can do this construction, you don't actually have something objective.

Comment author: Gram_Stone 09 July 2015 05:39:11PM *  1 point [-]

I find that the nihilism-relativism-universalism trichotomy, among other things, doesn't really divide things well.

I would describe most LessWrong users as universalists that are not absolutists. If what is moral is what you value, and there is a fact of the matter as to what you value, then there is an objective morality, even if it is contingent rather than ontologically fundamental.

Comment author: ChaosMote 17 July 2015 01:29:07AM 0 points [-]

I think that using this notation is misleading. If I am understanding you correctly, you are saying that given an individual, we can derive their morality from their (real/physically grounded) state, which gives real/physically grounded morality (for that individual). Furthermore, you are using "objective" where I used "real/physically grounded". Unfortunately, one of the common meanings of objective is "ontologically fundamental and not contingent", so your statement sounds like it is saying something that it isn't.

On a separate note, I'm not sure why you are casually dismissing moral nihilism as wrong. As far as I am aware, moral nihilism is the position that morality is not ontologically fundamental. Personally, I am a moral nihilist; my experience shows that morality as typically discussed refers to a collection of human intuitions and social constructs - it seems bizarre to believe that to be an ontologically fundamental phenomenon. I think a sizable fraction of LW is of like mind, though I can only speak for myself.

I would even go further and say that I don't believe in objective contingent morality. Certainly, most people have an individual idea of what they find moral. However, this only establishes that there is an objective contingent response to the question "what do you find moral?" There is similarly an objective contingent response to the related question "what is morality?", or the question "what is the difference between right and wrong?" Sadly, I expect the responses in each case to differ (due to framing effects, at the very least). To me, this shows that unless you define "morality" quite tightly (which could require some arbitrary decisions on your part), your construction is not well defined.

Note that I expect that last paragraph to be more relativist then most other people here, so I definitely speak only for myself there.

Comment author: Eitan_Zohar 13 July 2015 05:46:27AM 1 point [-]

I go through long periods of peace, only to find my world completely shaken as I experience some fearful epiphany. And I've experienced a complete cessation of that feeling when it is decisively refuted.

Comment author: ChaosMote 13 July 2015 01:47:28PM 0 points [-]

Okay, but at best, this shows that the immediate cause of your being shaken and coming out of it is related to fearful epiphanies. Is it not plausible that whether, at a given time, you find a particular idea horrific or are able to accept a solution as satisfying depends on your mental state?

Consider this hypothetical narrative. Let Frank (name chosen at random) be a person suffering from occasional bouts of depression. When he is healthy, he notices and enjoys interacting with the world around him. When he is depressed, he instead focuses on real or imagined problems in his life - and in particular, on how stressful his work is.

When asked, Frank explains that his depression is caused by problems at work. He explains that when he gets assigned a particularly unpleasant project, his depression flares up. The depression doesn't clear up until things get easier. Frank explains that once he finishes a project and is assigned something else, his depression clears up (unless the new project is just as bad); or sometimes, through much struggle, he figures out how to make the project bearable, and that resolves the depression as well.

Frank is genuine in expressing his feelings, and correct that work problems are correlated with his depression, but he is wrong about the direction of causation between the two.

Do you find this story analogous to your situation? If not, why not?

Comment author: gjm 12 July 2015 09:36:09AM 21 points [-]

There is a pattern here, and part of it looks like this. You contemplate an idea X and it bothers you. You circulate your concerns among a number of people who are good at thinking and interested in ideas like X. None of them is bothered by it; none of them seems to see it the same way as you do. And, in every case, you conclude that all those people have failed to understand your idea.

Now, I think there are two kinds of explanation for this. First, we have (to put it crudely) the ones in which you are right and everyone else is wrong.

  • These ideas are so horrifying that almost everyone flinches away from them mentally before they can really engage with them. The other people you talk to about X might be able to understand it, but they won't.
  • You are super-abnormally good at understanding these things, and the other people you talk to about X simply don't have the cognitive horsepower to understand it.
  • X is really hard to express (in general, or for you in particular) and on these occasions you have not been successful. So, while the other people could have understood X, they haven't yet had it explained clearly enough.

And then we have (to put it crudely, again) the ones in which you are wrong and everyone else is right. They all begin "You have, for whatever reason, become unduly upset about X", and continue:

  • ... Others don't feel the same, and so they don't pay as much attention to X as you think they should.
  • ... Now if anyone offers their own analysis of X and it somehow conflicts with (or merely doesn't include) that feeling of upset-ness, it will seem wrong to you.
  • ... Other people see that you're upset, and what they say about X is aimed at some version of X they've thought of that would justify the upset-ness. But your upset-ness actually has other causes, so they're inventing versions of X that don't match yours.

For obvious reasons you will be more inclined to endorse the first kind of explanation. But an "outside view" suggests that the second kind is more likely.

Possibly relevant: Existential Angst Factory. Your situation is clearly not exactly the same as the one described there, but you should consider the possibility that your unusually dramatic reaction to these ideas is at least partly the result of something other than being the only person who truly understands them.

Now, considering the only one of those discussions that I've been in recently: I think you are simply incorrect to say that no one who disagreed with you in the Dust Theory thread actually understands Dust Theory. What might be true, though, is that you have (so to speak) your own private version of Dust Theory, and no one understands it because you haven't explained it and have just kept saying "Dust Theory".

Comment author: ChaosMote 13 July 2015 04:40:34AM 3 points [-]

@gjm:

Just wanted to say that this is well thought out and well written - it is what I would have tried to say (albeit perhaps less eloquently) if it hadn't been said already. I wish I had more than one up-vote to give.

@Eitan_Zohar:

I would urge you to give the ideas here more thought. Part of the point here is that you are going to be strongly biased toward thinking your explanations are of the first sort and not the second. By virtue of being human, you are almost certainly biased in certain predictable ways, this being one of them. Do you disagree?

Let me ask you this: what would it take to change your mind; i.e., to convince you that the explanation for this pattern is one of the latter three reasons rather than the former three?

Comment author: Eitan_Zohar 13 July 2015 01:58:54AM 0 points [-]

Well, I definitely know that my depression is causally tied to my existential pessimism. I just don't know if it's the only factor, or if fixing something else will stop it for good. But as I said, I don't necessarily want to default to ape mode.

Comment author: ChaosMote 13 July 2015 04:02:00AM 0 points [-]

I definitely know that my depression is causally tied to my existential pessimism.

Out of curiosity, how do you know that this is the direction of the causal link? The experiences you have mentioned in the thread seem to also be consistent with depression causing you to get hung up on existential pessimism.

Comment author: benkuhn 03 June 2015 10:48:52PM 2 points [-]

To increase p'-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this leads to a perverse incentive towards incarcerating those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders).

If past criminality is a predictor of future criminality, then it should be included in the state's predictive model of recidivism, which would fix the predictions. The actual perverse incentive here is for the prisons to reverse-engineer the prediction model, figure out where it's consistently wrong, and then lobby to incarcerate (relatively) more of those people. Given that (a) data science is not the core competency of prison operators; (b) prisons will make it obvious when they find vulnerabilities in the model; and (c) the model can be re-trained faster than the prison lobbying cycle, it doesn't seem like this perverse incentive is actually that bad.

Comment author: ChaosMote 04 June 2015 04:34:28AM 1 point [-]

Your argument assumes that the algorithm and the prisons have access to the same data. This need not be the case - in particular, if a prison bribes a judge to over-convict, the algorithm will be (incorrectly) relying on said conviction as data, skewing the predicted recidivism measure.

That said, the perverse incentive you mentioned is absolutely in play as well.

Comment author: ChaosMote 03 June 2015 01:44:15AM 20 points [-]

Great suggestion! That said, in light of your first paragraph, I'd like to point out a couple of issues. I came up with most of these by asking the questions "What exactly are you trying to encourage? What exactly are you incentivising? What differences are there between the two, and what would make those differences significant?"

You are trying to encourage prisons to rehabilitate their inmates. If, for a given prisoner, we use p to represent their propensity towards recidivism and a to represent their actual recidivism, rehabilitation is represented by p-a. Of course, we can't actually measure these values, so we use proxies: anticipated recidivism according to your algorithm and re-conviction rate (we'll call these p' and a', respectively).

With this incentive scheme, our prisons have three incentives: increasing p'-p, increasing p-a, and increasing a-a'. The first and last can lead to some problematic incentives.
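The three incentives above come from a simple telescoping identity: the rewarded quantity p'-a' splits exactly into the prediction gap, true rehabilitation, and the detection gap. A minimal sketch with hypothetical numbers (all values and the example prisoner are invented for illustration):

```python
# Telescoping identity behind the three incentive channels:
#   p' - a' = (p' - p) + (p - a) + (a - a')
# p  = true propensity toward recidivism
# a  = actual recidivism
# p' = predicted recidivism (proxy for p)
# a' = re-conviction rate (proxy for a)

def measured_improvement(p_pred, a_conv):
    """The quantity the scheme rewards: predicted minus re-convicted."""
    return p_pred - a_conv

def decomposition(p_pred, p_true, a_true, a_conv):
    """The same quantity split into the three incentive channels."""
    prediction_gap = p_pred - p_true   # gaming the model   (p' - p)
    rehabilitation = p_true - a_true   # the intended goal  (p  - a)
    detection_gap  = a_true - a_conv   # evading re-capture (a  - a')
    return prediction_gap + rehabilitation + detection_gap

# Hypothetical prisoner: predicted 60% recidivism, true propensity 50%,
# actually reoffends at a 40% rate, re-convicted at a 30% rate.
lhs = measured_improvement(0.6, 0.3)
rhs = decomposition(0.6, 0.5, 0.4, 0.3)
assert abs(lhs - rhs) < 1e-9  # both equal 0.3
```

The decomposition makes the problem visible: a prison earns the same reward whether it improves the middle term (rehabilitation) or either of the outer terms (gaming the prediction, or dodging re-conviction).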

To increase p'-p, prisons need to incarcerate prisoners who are less prone to recidivism than predicted. Given that past criminality is an excellent predictor of future criminality, this leads to a perverse incentive towards incarcerating those who were unfairly convicted (wrongly convicted innocents or over-convicted lesser offenders). If said prisons can influence the judges supplying their inmates, this may lead to judges being bribed to aggressively convict edge-cases or even outright innocents, and to convict lesser offenders of crimes more correlated with recidivism. (Counterpoint: We already have this problem, so this perverse incentive might not be making things much worse than they already are.)

To increase a-a', prisons need to reduce the probability of re-conviction relative to recidivism. At the comically amoral end, this can lead to prisons teaching inmates "how not to get caught." Even if that doesn't happen, I can see prisons handing out their lawyer's business cards to released inmates. "We are invested in making you a contributing member of society. If you are ever in trouble, let us know - we might be able to help you get back on track." (Counterpoint: Some of these tactics are likely to be too expensive to be worthwhile, even ignoring morality issues.)

Also, since you are incentivising improvement but not disincentivising regression, prisons that are below average are encouraged to try high-volatility reforms even when those reforms have negative expected improvement. For example, if a reform has a 20% chance of making things much better but an 80% chance of making things equally worse, it is still a good business decision (since the latter outcome does not carry any costs).
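The asymmetry in that last point can be made concrete: when losses are floored at zero, a gamble with negative expected outcome can still have positive expected payoff for the prison. A toy calculation, using the 20%/80% numbers from the paragraph above (the unit-sized effect is an arbitrary assumption for illustration):

```python
def expected_outcome(p_win, gain, loss):
    """Expected change in actual rehabilitation quality:
    society bears both the upside and the downside."""
    return p_win * gain - (1 - p_win) * loss

def expected_payoff(p_win, gain, loss):
    """Expected payoff to the prison when regression is not penalized:
    the prison captures the gain, but its loss is floored at zero."""
    return p_win * max(gain, 0.0) + (1 - p_win) * max(-loss, 0.0)

# Reform: 20% chance of improving outcomes by one unit,
# 80% chance of worsening them by the same amount.
assert expected_outcome(0.2, 1.0, 1.0) < 0   # bad in expectation for society
assert expected_payoff(0.2, 1.0, 1.0) > 0    # still profitable for the prison
```

This is the same structure as an option: one-sided exposure makes volatility itself valuable, which is why reward-only schemes tend to attract reckless experimentation from the worst performers.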
