I don't know! I've certainly seen people say P(doom) is 1, or extremely close. And anyway, bombing an AI lab wouldn't stop progress, but would slow it down - and if you think there is a chance alignment will be solved, the more time you buy the better.
I am bringing it up for calibration. As to whether it's the same magnitude of horrific: in some ways, it's higher magnitude, no? Even Nazis weren't going to cause human extinction - of course, the difference is that the Nazis were intentionally doing horrific things, whereas AI researchers, if they cause doom, will do it by accident; but is that a good excuse? You wouldn't easily forgive a drunk driver who runs over a child...
Do you ask the same question of opponents of climate change? Opponents of open borders? Opponents of abortion? Opponents of gun violence?
They're not the same. None of these are extinction events; if preventing the extinction of the human race doesn't legitimise violence, what does? (And if you say nothing, does that mean you don't believe in the enforcement of laws?)
Basically, I can't see a coherent argument against violence that's not predicated either on a God, or on humanity's quest for 'truth' or ideal ethics; and the latter is obviously cut short if humans go extinct, so it wouldn't ban violence to prevent this outcome.
The assassination of Archduke Ferdinand certainly coerced history, and it wasn't state-backed. So did that of Julius Caesar, as would have Hitler's, had it been accomplished.
Well, it's clearly not true that violence would not prevent progress. Either you believe AI labs are making progress towards AGI - in which case, every day they're not working on it (because their servers have been shut down, or more horrifically, because some of their researchers have been incapacitated) is a day that progress is not being made - or you think they're not making progress anyway, so why are you worried?
But AI doomers do think there is a high risk of extinction. I am not saying a call to violence is right: I am saying that not discussing it seems inconsistent with their worldview.
That's not true - we don't make decisions based on perfect knowledge. If you believe the probability of doom is 1, or even not 1 but incredibly high, then any actions that prevent it or slow it down are worth pursuing - it's a matter of expected value.
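To make the expected-value reasoning concrete, here is a minimal sketch. The payoff and cost figures are purely hypothetical placeholders I'm introducing for illustration (nothing in the discussion supplies actual numbers); the point is only that if P(doom) is taken to be near 1, even a tiny probability of success dominates the calculation:

```python
# Illustrative expected-value sketch. The constants are arbitrary
# stand-ins, not real estimates of anything.
VALUE_OF_SURVIVAL = 1e15  # hypothetical value of "humanity survives"
COST_OF_ACTION = 1e9      # hypothetical harm caused by the action itself

def expected_value(p_doom, p_action_works):
    # EV of acting = P(doom) * P(action averts doom) * value of survival,
    # minus the cost the action imposes regardless of outcome.
    return p_doom * p_action_works * VALUE_OF_SURVIVAL - COST_OF_ACTION

# With P(doom) = 1, even a 1-in-1000 chance of success leaves the EV
# hugely positive under these (made-up) numbers:
print(expected_value(1.0, 0.001))
```

Of course, the whole sketch inherits its conclusion from the assumed inputs, which is exactly the point being argued over in this thread.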
Except that violence doesn't have to stop the AI labs, it just has to slow them down: if you think that international agreements yada yada have a chance of success, and given this takes time, then things like cyber attacks that disrupt AI research can help, no?
If it's true AI labs aren't likely to be the cause of extinction, why is everyone upset at the arms race they've begun?
You can't have it both ways: either the progress these labs are making is scary - in which case anything that disrupts them (and hence slows them down even if it doesn't stop them) is good - or they're on the wrong track, in which case we're all fine.
Is all non-government-sanctioned violence horrific? Would you say that objectors and resistance fighters against Nazi regimes were horrific?
Here's my objection to this: unless ethics are founded on belief in a deity, they must stem from humanity. So an action that can wipe out humanity makes any discussion of ethics moot; the point is, if you don't sanction violence to prevent human extinction, when do you ever sanction it? (And I don't think it's stretching the definition to suggest that law requires violence.)
But when you say extinction will be more likely, you must believe that the probability of extinction is not 1.
OK, so then AI doomers admit it's likely they're mistaken?
(Re side effects, no matter how negative they are, they're better than the alternative; and it doesn't even have to be likely that violence would work: if doomers really believe P(doom) is 1, then any action with a non-zero probability of success is worth pursuing.)
You're assuming "the violence might or might not stop extinction, but then there will be some side-effects (that are unrelated to extinction)". But, my concrete belief is that most acts of violence you could try to commit would probably make extinction more likely, not less, because a) they wouldn't work, and b) they destroy the trust and coordination mechanisms necessary for the world to actually deal with the problem.
To spell out a concrete example: someone tries bombing an AI lab. Maybe they succeed, maybe they don't. Either way, they didn't actually st...
This is a pedantic comment. So the idea is you should obey the law even when the law is unjust?
Isn't preventing the extinction of the human race one of those exceptions?
Er, yes. AI risk worriers think AI will cause human extinction. Unless they believe in God, surely all morality stems from humanity, so the extinction of the species must be the ultimate harm - and preventing it surely justifies violence (if it doesn't, then what does?)
Yes but what I'm saying is that this isn't true - few people are absolute pacifists. So violence in general isn't taboo - I doubt most people object to things like laws (which ultimately rely on the threat of violence).
So why is it that violence in this specific context is taboo?
So, you would have advocated against war with Nazi Germany?
To be fair, I'm not saying it's obviously wrong; I'm saying it's not obviously true, which is what many people seem to believe!
But that's not general intelligence; general intelligence requires considering a wider range of problems holistically, and drawing connections among them.
Not an explicit map; I'm raising the possibility that capability leads to malleable goals.
I don't see how this relates to the Orthogonality Thesis.
It relates to it because it's an explicit component of it, no? The point being that if there is only one way for general cognition to work, perhaps that way by default involves self-reflection, which brings us to the second point...
Do you believe that an agent which terminally values tiny molecular squiggles would "question its goals and motivations" and conclude that creating squiggles is somehow "unethical"?
Yes, that's what I'm suggesting; not saying it's definitely true; but it's not obviously wron...
Of course they are wrong. Because if you examine everything at the meta-level, and forget about being pragmatic, you will starve.
I haven't posted the question there.
For the love of... problem solved = the problem I asked for people to help me solve, i.e. finding metrics. If you don't want to help, fine. But as I said, being inane in an attempt to appear smart is just stupid, counterproductive and frankly annoying.
Look, someone asks for your help with something. There are two legitimate responses: a) you actually help them achieve their goal or b) you say, "sorry, not my problem". Your response is to be pedantic about the question itself. What good does that do?
My metrics are likely to be quite different from yours.
And that's fine! If everyone here gave me a list of 5-10 metrics instead of pedantic responses, I'd be able to choose a few I like, and boom, problem solved.
The job was: evaluate a presidency. What metrics would you, as an intelligent person, use to evaluate a presidency? How much simpler can I make it? I didn't ask you to read my mind or anything like that.
It's easy to generate tons of metrics, what's hard is generating a relatively small list that does the job. If you are too lazy to contribute to the discussion, fine. But contributing just pedantic remarks is a waste of everyone's time.
My parents always told me "we only compare ourselves to the best". I am only making these criticisms because rationalists self-define as, well, rational. And to me, rationality also has to do with achieving something. Pedantry, sophistry &c are unwelcome distractions.
I apologize for assuming you meant something semi-reasonable by what you wrote, I will refrain from making that assumption in the future.
Okay, let's go into "talking to a 5yo mode". We have these facts: a) the vast majority of people use "gender inequality" to refer to the fact that women are disadvantaged. b) terms like this are defined by common usage. c) since common usage means "women are disadvantaged", the reasonable thing to do is to assume that when a random person utters the phrase, they are referring to that. Whether women are in f...
I was being facetious, of course I still believe in rationality. But you know, I was reading Slate Star Codex, which basically represents the rationalist community as an amazing group of people committed to truth and honesty and the scientific approach - and though I appreciate how open these discussions are, I am a bit disappointed at how pedantic some of the comments are.
Jesus Christ. This is beyond derailed. For what it's worth, gjm is right, people are either purposefully misrepresenting what I wrote (in which case they are pedantic and juvenile) or they didn't understand what I meant (in which case, you know, go out and interact with people outside your bubble).
And anyway - the reason I want to measure progress towards closing the gap where women have it worse is so that I can fairly evaluate feminist arguments about Trump in 4 years time. If in 4 years time it turns out that women earn more than men across the board, t...
Guys, come on. I am not setting up a formal tribunal for Trump. I want your measured opinions. Don't let's be pedantic.
Unfortunately, I cannot read minds.
But you can read, right? Because I wrote "I'd like to ask for suggestions on proxies for evaluating [...]". I didn't say "I want suggestions on how to go about deciding the suitability of a metric".
And I am not saying that I agree with that majority view. All I am saying is that since you know that, to sort of pretend that it's not the case is a bit strange.
You in particular did provide metrics, so I am not complaining! Although, to be perfectly honest, I do think your delivery is sort of passive aggressive or disingenuous... you know that nearly everyone, when discussing gender inequality, uses the term to mean that women are disadvantaged. You provide metrics to evaluate improvement in areas where men are disadvantaged - i.e. your underlying assumption/hypothesis is the opposite of everyone else's, but you don't acknowledge it.
Regardless of what I do, I expect the program to provide a response at the end. Like I said in response to another comment - if you want to "debug" my thinking process, absolutely fair enough; but provide the result. What you are doing, to carry on your analogy, is to say "hmm there may be a bug there. But I won't tell you what the program will give as an output even if you fix it".
Even worse, imagine your compsci professor asks you to write code to simulate objects falling from a skyscraper. What you are doing here, then, is telling me "aaah, but you are trying to simulate this using Newtonian gravity! That is, of course, not a universal solution, so you should try relativity instead".
Of course, you have the right to do whatever you want. But, if someone new to a group of rationalists asks a question with a clear expectation for a response, and gets philosophising as an answer, don't be surprised if people get a perhaps unflattering view of rationalists.
This is actually the correct response.
And this is what I mean when I say rationalists often seem to be missing the point. Fair enough if you want to say "here is the right way to think about it... and here are the metrics this method produces, I think".
But if all I get is "hmmm it might be like this or it might be like that - here are some potential flaws in our logic" and no metrics are given... that doesn't do any good.
Done! Thanks.
because I thought you were saying that you can't find any grounds for moral disapproval of massive defamation campaigns
Yes, I meant I couldn't find grounds for disapproval of defamation under a libertarian system.
On discrimination, your argument is very risky. For example, in a racist society, a person's race will impact how well they do at their job. Besides, on a practical level, it's very hard to determine what characteristics actually correlate with performance.
...Are you quite sure you aren't just saying this because it's something that doesn't fit
(And since this is a rationalist forum, let me just point out that...
I am actually looking for criteria to evaluate any president. I only wrote Trump because it's whom I had in mind, obviously. Can I edit my own article?
I was exaggerating a bit - but I am sure you agree that your criteria are too few and unimportant to judge a whole presidency...
I will gently suggest that you should maybe see this as a deficiency in the ethical framework you're working in...
All this does is weaken my argument for libertarianism, not my model for evaluating moral theories! Let's not conflate the two.
...the evils of government coercion / starving to death... To be clear - it's not exactly the government coercion that bothers me. It's that criminalising discrimination is... just a bit random. As an employer, I can show preference for thousands of characteristics, and rationalise them (e.g. for extroverts - "I
Question - how do you do this thing with the blue line indicating my quote?
For L1: well, I am not sure how to say this - if we agree there are no universal values, by definition there is no value that permits you to infringe on me, right?
On your examples...
1 ==> okay, here you have discovered a major flaw in my theory which I had just taken for granted: property rights. I just (arbitrarily!) assumed their existence, and that to infringe on my property rights is to commit violence. This will take some thinking on my behalf.
2 ==> I am genuinely ambiva...
First, you wrote "Every question of major concern contains some element of evaluation, and therefore cannot be settled as a matter of objective fact" - if this does not mean to say "there are no facts", I am not sure what it is trying to say.
Second, this whole thing pertains to the second criterion. My point is that rejecting this criterion, for whatever reason, is saying that you are willing to admit arbitrary principles - but these are by definition subjective, random, not grounded in anything. So you are then saying that it's oka...
OK that's not a well thought out response. So if Trump launches a nuclear war, or tanks the economy, or deports all Muslims &c, that's fine as long as he meets these 3 criteria?!
I am trying to list criteria by which to evaluate any president. I am not trying to set up Trump to fail - else I could just have "appoint a liberal Justice".
OK, serious response: if you don't want to admit the existence of facts, then the whole conversation is pointless - morality comes down to personal preference. That's fine as a conclusion - but then I don't want to see anyone who holds it calling other people immoral.
Successful attacks would buy more time though