All of ArisC's Comments + Replies

ArisC-2-3

Successful attacks would buy more time though

ArisC10

I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but would slow it down - and if you think there is a chance alignment will be solved, the more time you buy the better.

3Timothy Underwood
If you think P(doom) is 1, you probably don't believe that terrorist bombing of anything will do enough damage to be useful. That is probably one of EY's cruxes on violence.
ArisC10

I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...

2the gears to ascension
No, but intentional malice is much harder to dissuade nonviolently.
ArisC10

> Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?

They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if you say nothing, does that mean you don't believe in the enforcement of laws?)

Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
 

3niplav
Some people definitely say they believe climate change will kill all humans.
3Mitchell_Porter
OK, well, if people want to discuss sabotage and other illegal or violent methods of slowing the advance of AI, they now know to contact you. 
ArisC00

The assassination of Archduke Ferdinand certainly coerced history, and it wasn't state-backed. So did that of Julius Caesar, as would have Hitler's, had it been accomplished.

ArisC00

Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI - in which case, every day they're not working on it (because their servers have been shut down or, more horrifically, because some of their researchers have been incapacitated) is a day that progress is not being made - or you think they're not making progress anyway, so why are you worried?

3Steven Byrnes
I strongly disagree with "clearly not true" because there are indirect effects too. It is often the case that indirect effects of violence are much more impactful than direct effects, e.g. compare 9/11 with the resulting wars in Afghanistan & Iraq.
ArisC10

But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.

3Shmi
Eliezer discussed it multiple times, quite recently on Twitter and on various podcasts. Other people did, too. 
ArisC10

That's not true - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.

ArisC10

Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?

4simon
I think you are overestimating the efficacy and underestimating the side effects of such things. How much do you expect a cyber attack to slow things down? Maybe a week if it's very successful? Meanwhile it still stirs up opposition and division, and puts diplomatic efforts back years. As the gears to ascension notes, non-injurious acts of aggression share many game-theoretic properties with physical violence. I would express the key issue here as legitimacy; if you don't have legitimacy, acting unilaterally puts you in conflict with the rest of humanity and doesn't get you legitimacy, but once you do have legitimacy you don't need to act unilaterally, you can get a ritual done that causes words to be written on a piece of paper where people with badges and guns will come to shut down labs that do things forbidden by those words. Cool huh? But if someone just goes ahead and takes illegitimate unilateral action, or appears to be too willing to do so, that puts them into a conflict position where they and people associated with them won't get to do the legitimate thing.
2the gears to ascension
Everyone has been replying as though you mean physical violence; non-injurious acts of aggression don't qualify as violence unambiguously, but share many game theoretic properties. If classical liberal coordination can be achieved even temporarily it's likely to be much more effective at preventing doom.
ArisC10

If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun?

You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.

2the gears to ascension
I refer back to the first sentence of the message you're replying to. I'm not having it both ways, you're confusing different people's opinions. My view is the only thing remarkable about labs is that they get to this slightly sooner by having bigger computers; even killing everyone at every big lab wouldn't undo how much compute there is in the world, so it at most buys a year at an intense cost to rule morality and to knowledge of how to stop disaster. If you disagree with an argument someone else made, lay it out, please. I probably simply never agreed with the other person's doom model anyway.
ArisC-1-2

Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?

2the gears to ascension
Do you think this comparison is a good specific exemplar for the AI case, such that you'd suggest they should have the same answer, or do you bring it up simply to check calibration? I do agree that it's a valid calibration to check, but I'm curious whether you're claiming capabilities research is horrific to the same order of magnitude.
ArisC4-13

Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence).

ArisC10

But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.

3the gears to ascension
Well... Yeah? Would any of us care to build knowledge that improves our odds if our odds were immovably terrible?
ArisC10

OK, so then AI doomers admit it's likely they're mistaken?

(Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
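A minimal sketch of the expected-value argument being made here, as a toy model - the symbols p, q, e, V and C below are illustrative assumptions, not figures from the thread:

```latex
% Toy expected-value model (illustrative; every symbol here is an assumption,
% not a quantity given anywhere in the thread).
%   p = P(doom) with no intervention
%   q = probability the intervention succeeds
%   e = reduction in P(doom) given success (the replies argue e may be negative)
%   V = value of a surviving future, C = cost/side effects of the intervention
\[
  \mathrm{EV}(\text{do nothing}) = (1 - p)\,V,
  \qquad
  \mathrm{EV}(\text{intervene}) = (1 - p + q e)\,V - C.
\]
% If p = 1, doing nothing yields 0, so intervening wins whenever q e V > C;
% for astronomically large V, any q e > 0 dominates any finite C. The replies
% below attack precisely this premise: if violence itself raises P(doom),
% then e < 0 and the same arithmetic counts against acting.
```

On this toy model the dispute in the replies is over the sign of q e, not over the arithmetic.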

Raemon129

You're assuming "the violence might or might not stop extinction, but then there will be some side-effects (that are unrelated to extinction)". But, my concrete belief is that most acts of violence you could try to commit would probably make extinction more likely, not less, because a) they wouldn't work, and b) they destroy the trust and coordination mechanisms necessary for the world to actually deal with the problem.

To spell out a concrete example: someone tries bombing an AI lab. Maybe they succeed, maybe they don't. Either way, they didn't actually st... (read more)

3simon
I am not an extreme doomer, but part of that is that I expect that people will face things more realistically over time - something that violence, introducing partisanship and division, would set back considerably. But even for an actual doomer, the "make things better through violence" option is not an especially real option. You may have a fantasy of choosing between these options:

* doom
* heroically struggle against the doom through glorious violence

But you are actually choosing between:

* a dynamic that's likely by default to lead to doom at some indefinite time in the future by some pathway we can't predict the details of until it's too late
* make the situation even messier through violence, stirring up negative attitudes towards your cause, especially among AI researchers but also among the public, making it harder to achieve any collective solution later, sealing the fate of humanity even more thoroughly

Let me put it this way. To the extent that you have p(doom) = 1 - epsilon, where is epsilon coming from? If it's coming from "terrorist attacks successfully stop capability research" then I guess violence might make sense from that perspective but I would question your sanity. If relatively more of that epsilon is coming from things like "international agreements to stop AI capabilities" or "AI companies start taking x-risk more seriously", which I would think would be more realistic, then don't ruin the chances of that through violence.
2the gears to ascension
Even in a crowd of AI doomers, no one person speaks for AI doomers. But plenty think it likely they're mistaken somehow. I personally just think the big labs aren't disproportionately likely to be the cause of an extinction-strength AI, so violence is overdeterminedly off the table as an effective strategy, before even considering whether it's justified, legal, or understandable. The only way we solve this is by constructing the better world.
ArisC80

This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?

1Waldvogel
You asked why this sort of violence is taboo, not whether we should break that taboo or not. I'm merely answering your question ("Why is violence in this specific context taboo?"). The answer is because it's illegal. Everyone understands, either implicitly or explicitly, that the state has a monopoly on violence. Therefore all extralegal violence is taboo. This is a separate issue from whether that violence is moral, just, necessary, etc.
ArisC10

Isn't the prevention of the human race one of those exceptions?

2the gears to ascension
I think you accidentally humanity
4Shmi
You don't know enough to accurately decide whether there is a high risk of extinction. You don't know enough to accurately decide whether a specific measure you advocate would increase or decrease it. Use epistemic modesty to guide your actions. Being sure of something you cannot derive from first principles, rather than from parroting select other people's arguments, is a good sign that you are not qualified. One classic example is the environmentalist movement accelerating anthropogenic global climate change by being anti-nuclear energy. If you think you are smarter now about AI dangers than they were back then about climate, it is a red flag.
2Waldvogel
If you have perfect foresight and you know that action X is the only thing that will prevent the human race from going extinct, then maybe action X is justified. But none of those conditions apply.
ArisC21

Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)

1simon
If you hypothetically have a situation where it's 100% clear that the human race will go extinct unless a violent act is committed, and it seems likely that the violent act would prevent human extinction, then, in that hypothetical case, that would be a strong consideration in favour of committing the violent act. In reality though, this clarity is extremely unlikely, and unilateral actions are likely to have negative side effects. Moreover, even if you think you have such clarity, it's likely that you are mistaken, and the negative side effects still apply no matter how well justified you personally thought your actions were, if others don't agree.
ArisC10

Yes but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence).

So why is it that violence in this specific context is taboo?

1Waldvogel
Because it's illegal.
ArisC00

So, you would have advocated against war with Nazi Germany?

1lmaowell
I'm sorry if my point wasn't made clearly. Things are taboos because of social customs & contexts; my point wasn't meant to be normative — just to point out that the taboo isn't against violence against AI labs, it's against violence more broadly.
ArisC22

To be fair, I'm not saying it's obviously wrong; I'm saying it's not obviously true, which is what many people seem to believe!

1Anon User
And Gordon Seidoh Worley is not saying there can't be good arguments against the orthogonality thesis that would deserve upvotes, just that this one is not one of those.
ArisC10

But that's not general intelligence; general intelligence requires considering a wider range of problems holistically, and drawing connections among them. 

ArisC10

Not an explicit map; I'm raising the possibility that capability leads to malleable goals.

ArisC10

> I don't see how this relates to the Orthogonality Thesis.

It relates to it because it's an explicit component of it, no? The point being that if there is only one way for general cognition to work, perhaps that way by default involves self-reflection, which brings us to the second point...

> Do you believe that an agent which terminally values tiny molecular squiggles would "question its goals and motivations" and conclude that creating squiggles is somehow "unethical"?

Yes, that's what I'm suggesting; not saying it's definitely true; but it's not obviously wron... (read more)

ArisC00

Of course they are wrong. Because if you examine everything at the meta-level, and forget about being pragmatic, you will starve.

ArisC00

I haven't posted the question there.

ArisC00

For the love of... problem solved = the problem I asked for people to help me solve. I.e. finding metrics. If you don't want to help, fine. But as I said, being inane in an attempt to appear smart is just stupid, counterproductive and frankly annoying.

Look, someone asks for your help with something. There are two legitimate responses: a) you actually help them achieve their goal or b) you say, "sorry, not my problem". Your response is to be pedantic about the question itself. What good does that do?

0Lumifer
Nope. There are more, e.g.:
(c) You misunderstand your problem, it's actually this
(d) Your problem is not solvable because of that
(e) Solving this problem will not help you (achieve a more terminal goal)
ArisC20

> My metrics are likely to be quite different from yours

And that's fine! If everyone here gave me a list of 5-10 metrics instead of pedantic responses, I'd be able to choose a few I like, and boom, problem solved.

0Lumifer
A problem? Which problem? I don't have a problem. Are you, by any chance, upset that people didn't hop to solving your problem?
ArisC00

The job was, evaluate a presidency. What metrics would you, as an intelligent person, use to evaluate a presidency? How much simpler can I make it? I didn't ask you to read my mind or anything like that.

0Lumifer
My metrics are likely to be quite different from yours since I expect to have axes of evaluation which do not match yours. A good starting point is recalling that POTUS is not a king and his power is quite constrained. For example, he doesn't get to control the budget. Judging a POTUS on, say, unemployment, is silly because he just doesn't have levers to move it. In a similar way, attributing shifts in culture wars to POTUS isn't all that wise either.
ArisC20

It's easy to generate tons of metrics, what's hard is generating a relatively small list that does the job. If you are too lazy to contribute to the discussion, fine. But contributing just pedantic remarks is a waste of everyone's time.

2Lumifer
And since, as I've pointed out, you failed to specify the job, the task changes from hard to impossible. But I don't know if it was a waste of everyone's time. Your responses were... illuminating.
ArisC20

My parents always told me "we only compare ourselves to the best". I am only making these criticisms because rationalists self-define as, well, rational. And to me, rationality also has to do with achieving something. Pedantry, sophistry &c are unwelcome distractions.

0moridinamael
I actually agree. I think one issue is that the kind of mind that is attracted to "rationality" as a topic also tends to be highly sensitive to perceived errors, and to be fond of taking things to the meta-level. These combine to lead to threads where nobody talks about the object-level questions. I frankly don't even try to bring up object-level problems on Less Wrong.
ArisC00

> I apologize for assuming you meant something semi-reasonable by what you wrote, I will refrain from making that assumption in the future.

Okay, let's go into "talking to a 5yo mode". We have these facts:
a) the vast majority of people use "gender inequality" to refer to the fact that women are disadvantaged.
b) terms like this are defined by common usage.
c) since common usage means "women are disadvantaged", the reasonable thing to do is to assume that when a random person utters the phrase, they refer to that.
Whether women are in f... (read more)

ArisC00

I was being facetious, of course I still believe in rationality. But you know, I was reading Slate Star Codex, which basically represents the rationalist community as an amazing group of people committed to truth and honesty and the scientific approach - and though I appreciate how open these discussions are, I am a bit disappointed at how pedantic some of the comments are.

5Connor_Flexman
It seems important to be extremely clear about the criticism's target, though. I agree overanalysis is a failure mode of certain rationalists, and statistically more so for those who comment more on LW and SSC (because of the selection effect specifically for those who nitpick). But rationality itself is not the target here, merely naive misapplication of it. The best rationalists tend to cut through the pedantry and focus on the important points, empirically.
0Dagon
What metrics did the SSC commentariat propose, and was your question received better there?
ArisC20

Jesus Christ. This is beyond derailed. For what it's worth, gjm is right, people are either purposefully misrepresenting what I wrote (in which case they are pedantic and juvenile) or they didn't understand what I meant (in which case, you know, go out and interact with people outside your bubble).

And anyway - the reason I want to measure progress towards closing the gap where women have it worse is so that I can fairly evaluate feminist arguments about Trump in 4 years' time. If in 4 years' time it turns out that women earn more than men across the board, t... (read more)

ArisC10

Guys, come on. I am not setting up a formal tribunal for Trump. I want your measured opinions. Don't let's be pedantic.

2The_Jaded_One
Well I would honestly start by doing a literature review of what the relevant academic fields have already studied. If I had to guess on the spot what makes a government good, I would caution that a lot of what one sees in outcomes in the short term is determined by economics. On top of that there are broader political processes that are just going to happen. Maybe one thing I feel fairly confident about is that starting expeditionary wars of aggression has a very bad track record.
ArisC00

> Unfortunately, I cannot read minds.

But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]". I didn't say "I want suggestions on how to go about deciding the suitability of a metric".

0Lumifer
I guess I can read, kinda-sorta. How about you? I answered: and y'know, I'm a bit lazy to type it all up...
ArisC50

And I am not saying that I agree with that majority view. All I am saying is that since you know that, to sort of pretend that it's not the case is a bit strange.

ArisC60

You in particular did provide metrics, so I am not complaining! Although, to be perfectly honest, I do think your delivery is sort of passive-aggressive or disingenuous... you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged. You provide metrics to evaluate improvement in areas where men are disadvantaged - i.e. your underlying assumption/hypothesis is the opposite of everyone else's, but you don't acknowledge it.

2James_Miller
Not on LessWrong, but in general yes. But this is in part because most people assume that on almost all important metrics women are disadvantaged.
ArisC00

Regardless of what I do, I expect the program to provide a response at the end. Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".

Even worse, imagine your compsci professor asks you to write code to simulate objects falling from a skyscraper. What you are doing here, then, is telling me "aaah, but you are trying to simulate this using gravity! That is, of course, not a universal solution, so you should try relativity instead".

2TheAncientGeek
I don't think that is literally true. What is that supposed to be analogous to? Which ethics is uniquely picked out by your criteria? I don't think any are. I think there are obviously a countable infinity of consistent ethical systems.
ArisC00

Of course, you have the right to do whatever you want. But, if someone new to a group of rationalists asks a question with a clear expectation for a response, and gets philosophising as an answer, don't be surprised if people get a perhaps unflattering view of rationalists.

1moridinamael
What websites are you using where pedantry, sophistry, tangents, and oblique criticism aren't the default? Are you using the same Internet as me?
ArisC00

This is actually the correct response.

And this is what I mean when I say rationalists often seem to be missing the point. Fair enough if you want to say "here is the right way to think about it... and here are the metrics this method produces, I think".

But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.

2ChristianKl
Imagine going to a Trump forum and asking them for advice on how to get Trump impeached. Then the answer comes back: "Trump shouldn't be impeached." Did they give you the answer that you were looking for? No, they didn't. They disagree on principles. Here there's also disagreement on principles. Let's say you went to the homeopath. Afterwards you got cured. You go to a friend and ask him for metrics of the treatment you received. You suggest possible things to measure:
* Improvement in my well-being.
* Fewer sick days.
* Whether the homeopath felt warm and empathic.
* The cost of the treatment.
But you have a problem. Measuring sick days and cost is easy but you really want help with proper metrics for well-being and the homeopath being warm and empathic. That's roughly the quality of your original post and you don't want to hear that n=1 evidence is not enough to make a good judgment.
1Lumifer
Unfortunately, I cannot read minds. I said that it depends on what you want and I actually do not know what you want.
ArisC00

> because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns

Yes, I meant I couldn't find grounds for disapproval of defamation under a libertarian system.

On discrimination, your argument is very risky. For example, in a racist society, a person's race will impact how well they do at their job. Besides, on a practical level, it's very hard to determine what characteristics actually correlate with performance.

> Are you quite sure you aren't just saying this because it's something that doesn't fit

... (read more)
0gjm
Ah, OK. Then what I want to suggest is that you should probably see this as a reason to be dissatisfied with libertarianism. (Though of course it might turn out that actually there's nothing you can do to stop massive defamation campaigns that wouldn't have worse adverse consequences in practice. I doubt that, though.)

It might. I think there are two sorts of mechanism. The first is that a racist society might mess up some people's education and other opportunities, leading them to end up worse at things than if they belonged to a favoured rather than a disfavoured group. The second is that some jobs (most, in fact) involve interacting with other people, and if the others are racist then members of disfavoured groups might be less effective because others won't cooperate with them.

Both of these mean that the principle "don't discriminate on the basis of things that don't make an actual difference" isn't enough on its own to prevent all harmful-seeming discrimination, so appealing only to that principle probably justifies less anti-discrimination law than actually exists in many places. I'm OK with that; you're saying that there shouldn't be any anti-discrimination law because of its arbitrariness, and I'm pointing out that at least some has pretty good and non-arbitrary justification. I'm not trying to convince you that all the anti-discrimination measures currently in existence are good; only that some might be :-).

I agree, but my argument was that "this characteristic correlates with performance" generally isn't good grounds for discrimination in hiring etc.

You did (for which, well done!) but someone can be epistemically virtuous on one occasion but not another :-). And I did admit that maybe I was being uncharitable. But I really don't see how it's plausible that negative externalities are just Too Much for the human race's mental tools to cope with. You say the problem is that it's difficult to draw boundary lines (if I'm understanding you right); yeah, i
ArisC00

(And since this is a rationalist forum, let me just point out that...

  1. Personal opinion, everything else pertains to politics, and is kind of pointless if not;
  2. Yeah, so? Unless lesswrong.com is specifically designed for you, that's a bizarre comment;
  3. Again, very specious argument. You can apply it to literally everything ever written anywhere on the internet.
  4. Anecdotal evidence, inadmissible.)
ArisC20

I am actually looking for criteria to evaluate any president. I only wrote Trump because it's whom I had in mind, obviously. Can I edit my own article?

2whpearson
Yes. The pen icon underneath your post will allow you to do that.
ArisC00

I was exaggerating a bit - but I am sure you agree that your criteria are too few and unimportant to judge a whole presidency...

ArisC00

> I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...

All this does is weaken my argument for libertarianism, not my model for evaluating moral theories! Let's not conflate the two.

> the evils of government coercion / starving to death...

To be clear - it's not exactly the government coercion that bothers me. It's that criminalising discrimination is... just a bit random. As an employer, I can show preference for thousands of characteristics, and rationalise them (e.g. for extroverts - "I

... (read more)
0gjm
Really? Then maybe I misunderstood what you said before, because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns. That seems to me like a defect not in some particular argument but in what counts for you as grounds for moral disapproval. [Meta-note: If you want to quote already-quoted material, you can use two ">" characters.]

I understand, but I think it's less random than you may think, in two ways. (1) What picks out gender, race, age, and other things that put people in "protected classes" (as I think the terminology in some jurisdictions has it) is that they are things that have been widely used for unfair discrimination. History does produce effects that in isolation look random: you get laws saying "don't do X" but no laws saying "don't do Y" even though X and Y are about equally bad, because X is a thing that actually happened and Y isn't. It looks random but I'm not sure it's actually a problem. (2) There is, I think, a more general and less random principle underlying this: When hiring (or whatever), don't discriminate on the basis of characteristics that are not actually relevant to how well someone will do the job. If you're employing a chemistry teacher, someone with blue eyes won't on that account teach any worse; so don't refuse to employ blue-eyed people as chemistry teachers. (Artificial example because real examples might be too distracting.)

What makes this a little more difficult is that in some cases the "irrelevant" attributes may correlate with relevant ones; e.g., suppose blue-eyed people are shorter than brown-eyed people on average and you're putting together a basketball team, then you will mostly not choose blue-eyed people. But in this case you should measure their height rather than looking at their eyes, and so I think it goes for other characteristics that correlate with things that matter.

OK, but I do want to emphasize that (though I'm prepared to be convinced otherw
ArisC00

Question - how do you do this thing with the blue line indicating my quote?

For L1: well, I am not sure how to say this - if we agree there are no universal values, by definition there is no value that permits you to infringe on me, right?

On your examples...

1 ==> okay, here you have discovered a major flaw in my theory which I had just taken for granted: property rights. I just (arbitrarily!) assumed their existence, and that to infringe on my property rights is to commit violence. This will take some thinking on my behalf.

2 ==> I am genuinely ambiva... (read more)

0gjm
Greater-than sign at the start of the paragraph. (When you're composing a comment, clicking the button that says "Show help" will tell you about some of these things. It won't throw away the comment you're editing.)

I did wonder :-). For what it's worth, I think that's pretty much an indefensible position, but I know it's popular in libertarian circles and maybe there are ways to defend it that haven't occurred to me. I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...

That was what I expected. But if you do that, there are possible scenarios where people literally starve to death because of it. Of course nothing forces you to care more about that than you do about the evils of government coercion, but I want it to be clear what the tradeoffs actually are here. (And I suggest that starving to death is as clear a loss of liberty as any.)

OK, but see where we've now ended up. An action involving no direct violence is being classified as "violence" because, over a period of years, it is statistically likely to cause physical harm. But this same description covers an enormous number of other things that I bet you don't want to class as violence or infringement of liberty. One example: If a factory emits a lot of pollution, it injures the health of people around it; some of them will die.

Yup, that's a really tough problem, and its toughness is one reason why many people (including me) are inclined to think that in fact there aren't any objectively right values. Some believers in objectively right values hold that they can be found in revelations from a god or gods. Some believe that they can be found by careful consideration of what it could mean for humans to flourish. Etc. Personally, I'm pessimistic about the prospects of all these approaches. Including, I'm afraid, yours :-).
ArisC00

First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.

Second, this whole thing pertains to the second criterion. My point is that rejecting this criterion, for whatever reason, is saying that you are willing to admit arbitrary principles - but these are by definition subjective, random, not grounded in anything. So you are then saying that it's oka... (read more)

0TheAncientGeek
It starts "Every question of major concern" so,straight off, it allows facts of minor concern. But concern to whom? Postmodernists do not, I contend, deny the existence of basic physical facts as regard them as rather uninteresting. When Derrida is sitting in a rive gauche cafe stirring his coffee, he does not dispute the existence of the coffee, the cafe or the spoon, but he is not going to write a book about them either. Postmodernists are, I think, more interested in questions of wide societal and political concern. (perhaps you are, if your comment "everything else pertains to politics, and is kind of pointless if not;" i anything to go by). And those complex questions have evaluative components (in the sense of the fact/value divide). Which is compatible with the existence of factual components as well, whcih is another wayin which I am not denying the existence of facts. But what I am proposing is a kind of on drop rule by which a question that is partly evaluative cannot be solved on a straightforward factual basis. For instance, there are facts to the efect that a fetus that is so many weeks old is capable of independent existence, but they don't tetll you whether abortion is right or wrong by themselves.
ArisC20

OK, that's not a well-thought-out response. So if Trump launches a nuclear war, or tanks the economy, or deports all Muslims &c, that's fine as long as he meets these 3 criteria?!

I am trying to list criteria by which to evaluate any president. I am not trying to set up Trump to fail - else I could just have written "appoint a liberal Justice".

2WalterL
Certainly, I'd agree that Trump would be a failure (nay, THE failure) if the world ends in nuclear fire. It sort of seems like at that point we don't need itemized lists though? I don't buy that presidents can affect the economy in a big enough way to tank it. At least, none that I've seen have done so. Grading a prez on what the econ does feels like grading him on the weather. I wouldn't have a problem with deporting all illegal immigrants. All Muslims, on the other hand, would involve deporting a lot of Americans. I'm not sure where you'd deport them to. But, sure, I'll certainly agree that deporting American citizens would be grounds for failure.
ArisC00

OK, serious response: if you don't want to admit the existence of facts, then the whole conversation is pointless - morality comes down to personal preference. That's fine as a conclusion - but then I don't want to see anyone who holds it calling other people immoral.

0TheAncientGeek
I didn't say anything amounting to "there are no facts"... and furthermore I wasn't even citing my own views, but those of postmodernists... and furthermore wasn't attributing wholesale rejection of facts to them either. You seem to have rounded off my comment to "yay pomo". Please read more carefully in the future.