Comment author: CCC 13 October 2016 01:49:46PM 2 points [-]

"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.

Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.

AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.

Comment author: TheOtherDave 13 October 2016 04:05:12AM 2 points [-]

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.
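The "sort key" framing can be made concrete with a small sketch. This is a toy illustration only, with made-up world names and quantities; it simply shows how "X is morally superior to Y" can be read as a preference ordering over possible worlds, implemented as a sort key:

```python
# Hypothetical possible worlds, each containing some quantity of X and Y.
worlds = [
    {"name": "world_a", "x": 3, "y": 1},
    {"name": "world_b", "x": 1, "y": 5},
    {"name": "world_c", "x": 2, "y": 2},
]

def preference_key(world):
    # "X is morally superior to Y" becomes: prefer worlds with more X,
    # breaking ties by preferring worlds with less Y (all else being equal).
    return (-world["x"], world["y"])

ranked = sorted(worlds, key=preference_key)
print([w["name"] for w in ranked])  # most preferred first
```

The moral principle itself lives entirely in `preference_key`; changing the principle changes the ordering of worlds, not the worlds themselves.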

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment author: ChristianKl 10 October 2016 09:17:40AM 1 point [-]

thereby creating a clearer distinction between religious and secular.

Given that Newton cared deeply about religion, he would be a bad example. He spent a lot of time on biblical chronology.

You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently from Newton. The interest in numerical reasoning was already there.

To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.

Comment author: So8res 04 October 2016 08:41:49PM 2 points [-]

Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.

Comment author: Good_Burning_Plastic 29 September 2016 08:03:41AM 2 points [-]

Computing can't harm the environment in any way

Well...

Comment author: RobbBB 24 October 2016 02:36:31AM 1 point [-]

There's a discussion post that mentions the fundraiser here, along with other news: http://lesswrong.com/r/discussion/lw/o0d/miri_ama_plus_updates/

Comment author: mirefek 22 October 2016 12:55:14AM 1 point [-]

I see. It seemed to me that it was about the experimental method, which did not fit a mathematical statement. I understand the possibility of being mistaken. I have been mistaken many times, I am not sure about some proofs, and I know some persuasive fake proofs... Despite this, I am not very convinced that I should do such things with my probability estimates. After all, it is just an estimate. Moreover, it is a bit self-referencing when the estimate uses a more complicated formula than the statement itself. If I say that I am 1-sure that the 1 is not 1/2, it is safe, isn't it? :-D Well, it does not matter :-) I think that I got the point; "I know that I know nothing" is a well-known quote.

Comment author: Document 20 October 2016 05:06:03AM 1 point [-]

Initial reaction: "That's news?".

That said, your link seems to be dead, with no archive. Do you have it saved?

Comment author: So8res 19 October 2016 11:21:01PM 1 point [-]

Fixed, thanks.

Comment author: DanArmak 18 October 2016 07:39:39PM 1 point [-]

Thank you, your point is well taken.

Comment author: TheAncientGeek 18 October 2016 08:33:02AM *  1 point [-]

The rule as usually understood is that fewer relates to discrete quantities, fewer apples, and less to continuous quantities, less milk. It's possibly rather artificial, and noticeably lacking a counterpart in "more".

Comment author: username2 14 October 2016 12:04:43AM *  1 point [-]

Survey assumed a consequentialist utilitarian moral framework. My moral philosophy is neither, so there was no adequate answer.

Comment author: TheAncientGeek 13 October 2016 09:03:47PM 1 point [-]

If you don't use "moral" as a rubber stamp for each and every human value, you don't run into CCC's problem of labeling theft and murder as moral because some people value them. That's the upside. What's the downside?

Comment author: TheAncientGeek 13 October 2016 01:32:08PM *  1 point [-]

I see morality as fundamentally a way of dealing with conflicts between values/goals, so I can't answer questions posed in terms of "our values", because I don't know whether that means a set of identical values, a set of non-identical but non-conflicting values, or a set of conflicting values. One of the implications of that view is that some values/goals are automatically morally irrelevant, since they can be satisfied without potential conflict. Another implication is that my view approximates to "morality is society's rules", but without the dismissive implication: if a society has gone through a process of formulating rules that are effective at reducing conflict, then there is a non-vacuous sense in which that society's morality is its rules. Also, AI and alien morality are perfectly feasible, and possibly even necessary.

Comment author: DanArmak 12 October 2016 02:02:14PM *  1 point [-]

I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?


Comment author: ozziegooen 10 October 2016 10:38:57PM *  1 point [-]

Comment author: ChristianKl 08 October 2016 06:22:55PM 1 point [-]

As a general query to other readers: Is it bad form to just ignore comments like this? I'm apt to think it unwise to try to talk about this topic here if it is just going to invoke Godwin's Law.

In general, you can ignore comments when you don't think a productive discussion will follow.

LW by its nature has people who argue a wide array of positions, and in a case like this you will get some criticism. Don't let that turn you off LW or take it as a suggestion that your views are unwelcome here.

Comment author: hairyfigment 08 October 2016 05:58:56AM 1 point [-]

We're not talking about all of science. (Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand.) We're talking about whether or not anthropic reasoning tells us to expect to see people building the LHC, at a cost of $1 billion per year.

Thatcher apparently rejected the idea as presented, and rightly too if the Internet accurately reported the pitch they made to her. (In this popular account, the Higgs mechanism doesn't "explain mass," it replaces one arbitrary number with another! I still don't know the actual reasons for believing in it!) So we don't need to imagine humanity dying out, and we don't need to assume that civilization collapses after using up irreplaceable fossil fuels. (Though that one seems somewhat plausible.) I don't think we even need to assume religious tyranny crushes respect for science. Slightly less radical changes to the culture of a small fraction of the world seem sufficient to prevent the LHC expenditure for the foreseeable future. Add in uncertainty about various risks that fall short of total annihilation, and this certainty starts to look ridiculous.

Now as I said, one could make a different anthropic argument based on population in various 'worlds'. But as I also said, I don't think we know enough to get a high probability from that either.

Comment author: ChristianKl 07 October 2016 07:50:18PM 1 point [-]

In these spheres people generally understand that heuristics optimize for something. Frequently people think they optimize for some ancestral environment that's quite unlike the world we are living in at the moment. I think that's a question where a well written post would be very useful.

This is probably not a novel analogy, but the surprising thing to me is that social psychology tends to frame any "reticle adjustment" as a bias against which we must fight, without testing its performance in the contexts in which the adjustment was made.

I would think that many sociologists would say that many people who look down on Blacks are racist because they don't interact much with Blacks. If the adjustment was made during a time when the person was at an all-White school, the interesting question isn't whether the adjustment performs well within the context of the all-White school, but whether it also performs well for decisions made later outside of that homogeneous environment.

In response to comment by CCC on Humans in Funny Suits
Comment author: CynicalOptimist 07 October 2016 02:29:57AM 1 point [-]

Yup! I agree completely.

If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.
