
Comment author: turchin 14 October 2016 03:58:53PM 1 point

The White House also released a PDF with concrete recommendations: http://barnoldlaw.blogspot.ru/2016/10/intelligence.html

Some interesting lines:

Recommendation 13: The Federal government should prioritize basic and long-term AI research. The Nation as a whole would benefit from a steady increase in Federal and private-sector AI R&D, with a particular emphasis on basic research and long-term, high-risk research initiatives. Because basic and long-term research especially are areas where the private sector is not likely to invest, Federal investments will be important for R&D in these areas.

Recommendation 18: Schools and universities should include ethics, and related topics in security, privacy, and safety, as an integral part of curricula on AI, machine learning, computer science, and data science.

Comment author: scarcegreengrass 14 October 2016 02:11:15PM 1 point

Both of those Ito remarks referenced supposedly widespread perspectives. But personally, I have almost never encountered these perspectives before.

Comment author: hairyfigment 14 October 2016 11:09:12AM 1 point

I'm getting really sick of this claim that Eliezer says all humans would agree on some morality under extrapolation. That claim is how we get garbage like this. At no point do I recall Eliezer saying psychopaths would definitely become moral under extrapolation. He did speculate about them possibly accepting modification. But the paper linked here repeatedly talks about ways to deal with disagreements which persist under extrapolation:

In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. (emphasis added)

Coherence is not a simple question of a majority vote. Coherence will reflect the balance, concentration, and strength of individual volitions. A minor, muddled preference of 60% of humanity might be countered by a strong, unmuddled preference of 10% of humanity. The variables are quantitative, not qualitative.
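
To make the quoted weighting concrete, here's a toy calculation (my own illustration; the functional form is not from the CEV paper): weigh each volition by the fraction of humanity holding it, its strength, and its clarity.

```python
# Toy strength-weighted aggregation (illustrative only, not from the CEV
# paper): a volition's pull = fraction of humanity * strength * clarity.
def pull(fraction, strength, clarity):
    return fraction * strength * clarity

majority = pull(fraction=0.60, strength=0.2, clarity=0.3)  # minor, muddled
minority = pull(fraction=0.10, strength=0.9, clarity=0.9)  # strong, unmuddled

print(majority)  # ~0.036
print(minority)  # ~0.081 -- the 10% bloc outweighs the 60% bloc
```

On these made-up numbers the strong minority preference wins, which is exactly the quoted point: the variables are quantitative, not qualitative.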

(Naturally, Eugine Nier as "seer" downvoted all of my comments.)

The metaethics sequence does say IMNSHO that most humans' extrapolated volitions (maybe 95%) would converge on a cluster of goals which include moral ones. It furthermore suggests that this would apply to the Romans if we chose the 'right' method of extrapolation, though here my understanding gets hazier. In any case, the preferences that we would loosely call 'moral' today, and that also survive some workable extrapolation, are what I seem to mean by "morality".

One point about the ancient world: the Bhagavad Gita, produced by a warrior culture though seemingly not by the warrior caste, tells a story of the hero Arjuna refusing to fight until his friend Krishna convinces him. Arjuna doesn't change his mind simply because of arguments about duty. In the climax, Krishna assumes his true form as a god of death with infinitely many heads and jaws, saying, 'I will eat all of these people regardless of what you do. The only deed you can truly accomplish is to follow your warrior duty or dharma.' This view seems plainly environment-dependent.

Comment author: SithLord13 15 October 2016 02:50:53AM 0 points

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Can you expand on this a bit? (Full disclosure: I'm still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think they illuminate a great deal about our underlying moral values, if we assume the answers are honest and that people are actually bound by their morals (or are at least answering as though they are, which I believe to be implicit in the question).

For example, I'm also a duster, and that "would you rather" taught me a great deal about my morality. (Although, to be fair, what it taught me is certainly not what was intended: that my moral system is not strictly multiplicative, but follows a logarithmic or similarly sublinear function, under which a sufficiently small non-zero disutility can't be made significant simply by multiplying it across enormously many people.)
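
To make that concrete, here is a minimal sketch of the distinction I have in mind (the functional forms and numbers are illustrative assumptions, nothing canonical):

```python
import math

# Two ways to aggregate a tiny per-person disutility d over n people.
# Both forms are illustrative assumptions, not a standard model.

def linear_total(d, n):
    """Strictly multiplicative: total disutility scales with headcount."""
    return d * n

def sublinear_total(d, n):
    """Sublinear (here logarithmic): each extra person adds ever less."""
    return d * math.log1p(n)

d = 1e-9                      # disutility of one dust speck (made-up number)
n = 3 ** 27                   # 3^^^3 is vastly larger; this already makes the point
print(linear_total(d, n))     # ~7.6e3  -- eventually outweighs any fixed harm
print(sublinear_total(d, n))  # ~3.0e-8 -- still negligible at this scale
```

A logarithm still grows without bound, so a strict duster would really want a bounded form (one that saturates at some cap); but the sketch shows the shape of the claim: aggregation need not be linear in headcount.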

Comment author: Houshalter 15 October 2016 01:44:37AM 0 points

Why do I need to recognize Friendliness to build an FAI? I only need to know that the process used to construct it results in a Friendly AI. Trying to inspect the weights of a complex neural network (or whatever) is pointless, as I stated earlier. We haven't the slightest idea how AlphaGo's net really works, but we can trust it to beat the best Go champions.

Evolution taught humans to eliminate competition; it taught them to be aggressive and greedy -- all human values.

Evolution also taught humans to be cooperative, empathetic, and kind.

Really, your objection seems to be the whole point of CEV. A CEV wouldn't just include the values of ISIS members, but also those of their victims. And it would be extrapolated: not just their current opinions, but what their opinions would be if they knew more, their values if they had more time to think about and consider the issues. With those two conditions, the negative parts of human values are entirely eliminated.

Comment author: username2 14 October 2016 09:58:31PM 0 points

We've now drifted beyond the original topic, which is okay; I'm just pointing that out.

I think it's okay for one person to value some lives more than others, but not that much more.

I'm not quite sure what you mean by that. I'm a duster, not a torturer, which means that there are some actions I just won't take, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?

I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

If I can draw a political analogy, which may even be more than an analogy: moral decision making via utilitarian calculus, with equal weights assumed for every (sentient, human) life, is analogous to the central planning of communism -- from each according to their ability, to each according to their needs. Maximize happiness. With perfectly rational decision making and everyone sharing common goals, this should work. But of course in reality we end up with, at best, an inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people on the whole don't work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for the self-serving purposes of those on top.

Recognizing and embracing the fact that people have conflicting moral values (even if restricted only to the weights they place on others' happiness) is akin to the enlightened self-interest of capitalism. People are given the agency to seek benefits for themselves and those they care about, and societal prosperity follows. Of course, in reality all non-libertarians know that there is a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show, mathematically and historically, that restricting yourself to multi-agent games with Pareto-optimal outcomes (capitalism with good incentives) cuts you off from some outcomes you might want to craft. Central planning got us to the Moon; non-profit-maximizing thinking is getting SpaceX to Mars. It's more profitable to mitigate the symptoms of AIDS with daily antiviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, whether measured by quality of life, technological innovation, material wealth, or happiness surveys.
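
As one standard illustration of the market-failure point (a toy example with made-up payoffs, not anything from the thread): in a prisoner's dilemma, individually rational play lands both agents on an outcome worse for everyone than the cooperative one, which is exactly why incentive structures need crafting.

```python
# Classic prisoner's dilemma with illustrative payoffs:
# (row_action, col_action) -> (row_payoff, col_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """What a self-interested row player does, given the column player's move."""
    return max(["cooperate", "defect"],
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect -> equilibrium at (1, 1), not (3, 3)
```

Whatever the other player does, defecting pays more, so self-interest drives both players to (1, 1) even though (3, 3) is available; crafting incentives means changing the payoffs so the equilibrium moves.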

To finally circle back to your question: I'm not saying that it is right or wrong that the mother cares for her child to the exclusion of literally everyone else, or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I'm saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this, we can still live in a harmonious and generally good society, even though our neighbors don't exactly share our values (I value my kids; they value theirs).

I've previously been exposed to the writings and artwork of peasants who lived through the harshest years of Chairman Mao's Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears, and introspections can be to those of people who struggle with LW-style "shut up and multiply" utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless "save the world" work he feels he SHOULD be doing, or doing more of, and the basic fulfillment of Maslow's hierarchy that leaves him feeling guilty and thinking he's a bad person.

My own opinion and advice? Work your way up Maslow's hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally driven value functions in the first place.

Comment author: username2 14 October 2016 06:07:12PM 0 points

Ah, then I look forward to reading your article :)

Comment author: philh 14 October 2016 04:33:07PM 0 points

I think it's okay for one person to value some lives more than others, but not that much more. ("Okay" - not ideal in theory, maybe a good thing given other facts about reality, I wouldn't want to tear it down for multiple reasons.)

Btw, you say the mother should protect her child, but it's okay to value some lives more than others - these seem in conflict. Do you in fact think it's obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?

Comment author: Lumifer 14 October 2016 02:28:44PM 0 points

white supremacism

That's actually Sino-Judaic supremacism, you white gweilo untermenschen!

Comment author: Lumifer 14 October 2016 02:26:41PM 0 points

What are you even trying to say?

I'm saying that if you can't recognize Friendliness (and I don't think you can), trying to build an FAI is pointless, as you will not be able to answer "Is it Friendly?" even when looking at it.

I think an AI will easily be able to learn human values from observations.

So if you can't build a supervised model, you think going to unsupervised learning will solve your problems? The quote I gave you is part of human values -- humans do value triumph over their enemies. Evolution taught humans to eliminate competition; it taught them to be aggressive and greedy -- all human values. Why do you think the AI will prefer your values to those of, say, ISIS or third-world Maoist guerrillas? They're human, too.

Comment author: scarcegreengrass 14 October 2016 02:03:01PM 0 points

No, I don't. One possible explanation for the bug is that on the successful attempt I used the dropdown to post the link directly to Discussion, rather than first to Drafts.

In response to comment by MrMind on Quantum Bayesianism
Comment author: hairyfigment 14 October 2016 10:15:08AM 0 points

No, that is not the question I asked. The question I asked was what the god-damned imaginary numbers mean, if they aren't describing reality. Because they don't look like subjective probability.
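
For concreteness, here's a toy two-path calculation (my own illustration, nothing from the Sequence) of what the complex numbers buy you: amplitudes add before squaring, so they can cancel, which no assignment of subjective probabilities can mimic.

```python
import cmath

# Toy two-path interference (illustrative amplitudes, not normalized over a
# full set of outcomes): each path alone has |amp|^2 = 0.5, but the combined
# value depends on the relative complex phase between the paths.
def combined(phase):
    amp1 = 1 / 2 ** 0.5
    amp2 = cmath.exp(1j * phase) / 2 ** 0.5  # complex phase on the second path
    return abs(amp1 + amp2) ** 2             # add amplitudes, THEN square

print(combined(0.0))       # 2.0  constructive interference
print(combined(cmath.pi))  # ~0.0 destructive: the two paths cancel outright
# Classical probability could only ever give 0.5 + 0.5 = 1.0 here.
```

That phase information is what the imaginary parts carry; a real-valued subjective probability has nowhere to store it.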

Comment author: MrMind 14 October 2016 09:13:26AM 0 points

That's the basic mystery of MWI (some say the only one): why does the world operate according to subjective probability?
You'll find this question posed in a few places in the Sequences.

Comment author: MrMind 14 October 2016 09:11:09AM 0 points

As far as I know, neo-Everett is the smallest realist interpretation: Eliezer argued not only against anti-realism, but also in favor of the smallest theory that falls out of the formalism.

Comment author: MrMind 14 October 2016 08:24:02AM 0 points

Ah, as it happens, I have none of those conflicts. I asked because I'm preparing an article on utilitarianism, and I hit upon the question I posted as a good proxy for the hard problems in adopting it as a moral theory.
But I can understand that someone who believes this might have a lot of internal struggles.

Full disclosure: I'm a Duster, not a Torturer. But I'm trying to steelman Torture.