It's pretty standard to respond to the suicides of Y victims by rallying to reduce Y.
Making a commitment not to notice when something drives a person to suicide seems like it would probably be a monumental mistake.
I don't think so - I think Eliezer's just being sloppy here. "God did a miracle" is supposed to be an example of something that sounds simple in plain English but is actually complex:
...One observes that the length of an English sentence is not a good way to measure "complexity". [...] An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, "Maybe a really powerful agent was angry and threw a lightning bolt." The human brain is the most complex artifact in the known universe. [...]
Will this "Arbital 2.0" be an entirely unrelated microblogging platform, or are you simply re-branding Arbital 1.0 to focus on the microblogging features?
Off the top of my head: Fermat's Last Theorem, whether slavery is licit in the United States of America, and the origin of species.
It's almost like having a third sex. In fact the winged males look far more like females than they look like wingless males.
That sounds like exactly the kind of situation Eliezer claims as the exception - the adaptation is present in the entire population, but only expressed in a subset based on the environmental conditions during development, because there's a specific advantage to polymorphism.
There's the whole phenomenon of frequency-dependent selection. Most people are familiar with this from blood types and sickle-cell anaemia.
Those are single...
Psy-Kosh: Hrm. I'd think "avoid destroying the world" itself to be an ethical injunction too.
The problem is that this is phrased as an injunction over positive consequences. Deontology does better when it's closer to the action level and negative rather than positive.
Imagine trying to give this injunction to an AI. Then it would have to do anything that it thought would prevent the destruction of the world, without other considerations. Doesn't sound like a good idea.
No more so, I think, than "don't murder", "don't st...
Well, that and the differences in the setting/magic (there's no Free Transfiguration in canon, for instance, and the Mirror is different - there are fewer Mysterious Ancient Artefacts generally - and Horcruxes run on different mechanics ... stuff like that.)
And Voldemort is just inherently smarter than everyone else, too, for no in-story reason I can discern; he just is, it's part of the conceit. (Although maybe that was Albus' fault too, somehow?)
To be fair, we don't know when he wrote the note.
I don't like the idea of it happening. But if it does, I can certainly disclaim responsibility since it is by definition impossible that I can affect that situation if it exists.
Actually, with our expanding universe you can get starships far enough away that the light from them will never reach you.
But I see we agree on this.
...That appears to me to be an insoluble problem. Once intelligence (not a particular person but the quality itself) can be impersonated in quantity, how can any person or group know he/they are behaving fairly? They can't. This is a
Actually, they mention every so often that the Cold War turned hot in the Star Trek 'verse and society collapsed. They're descended from the civilization that rebuilt.
I'm no expert, but even Kurzweil - who, from past performance, is usually correct but over-optimistic by maybe five, ten years - doesn't expect us to beat the Turing Test until (checks) 2030, with full-on singularity hitting in 2045.
2020 is in five years. The kind of progress that would seem to imply - from where we are now to full-on human-level AI in just five years - seems incredible.
We randomly rolled the ethics of the AI, rolled random events with dice, and the AI offered various solutions to those problems... You lost points if you failed to deal with the problems, and lost lots of points if you freed the AI and they happened to have goals you disagreed with, like the annihilation of everything.
many now believe that strong AI may be achieved sometime in the 2020s
Yikes, but that's early. That's a lot sooner than I would have said, even as a reasonable lower bound.
Yikes, you're right. I had noticed something odd, but forgot to look into it. Dangit.
I'm pretty sure this is somebody going to the trouble of downvoting every comment of mine, which has happened before.
It's against the rules, so I'll ask a mod to look into it; but obviously, if someone cares this much about something I'm doing or something I'm wrong about, please, PM me. I can't interpret you through meaningless downvotes, but I'll probably stop whatever is bothering you if I know what it is.
I can give you a little more data - this has happened before, which is why I'm in the negatives. Which I guess makes it more likely to happen again, if I'm that annoying :/
It turned out to be a different person from the famous case; they were reasonable and explained their (accurate) complaint via PM. Probably not the same person this time, but if it happened once ...
Yup, definitely. Interested amateur here.
There's also the problem of people taking things meant to be metaphorical as literal, simply because, well, it's right there, right?
For example (just ran into this today):
Early in the morning, as Jesus was on his way back to the city, he was hungry. Seeing a fig tree by the road, he went up to it but found nothing on it except leaves. Then he said to it, “May you never bear fruit again!” Immediately the tree withered. Matthew 21:18-22 NIV
This is pretty clearly an illustration. "Like this tree, you'd better actually give results, not just give the ...
I don't see how this is a good example: if anything this is one where the fundamentalists are actually reading the text closer to what a naive reading means, without any stretched attempts to claim a metaphorical intent that is hard to see in the text. The problem of trying to read the Genesis text in a way that is consistent with the evidence is something that smart people have been trying for a very long time now, so that leads to a lot of very well done apologetics to choose from, but that doesn't mean it is actually what the text intended.
Well, I'm ...
both groups are convinced that this applies to the other group.
Oh, it does apply, generally. That's mindkilling for you.
USian fundamentalist-evangelical Christianity, however, is ... exceptionally bad at reading their supposedly all-important sacred text. And, indeed, facts in general. We're talking about the movement that came up with, and is still pushing, "creationism", here.
I'm Irish, and we seem to have pretty much no equivalent movement in Europe; our conservative Christians follow a different, traditionalist-Catholic model. The i...
I don't know nearly as many Muslims as I do Christians, but I generally get the impression that liberal Muslims don't have unusually strong reactions to atheism and other religions? Whereas they are, if anything, more threatened by Muslim terrorists - because of the general name-blackening, in addition to the normal fear response to your tribe being attacked.
Has this not been your experience?
You have noticed, he says, that the new German society also has a lot of normal, "full-strength" Nazis around. The "reformed" Nazis occasionally denounce these people, and accuse them of misinterpreting Hitler's words, but they don't seem nearly as offended by the "full-strength" Nazis as they are by the idea of people who reject Nazism completely.
This part of the metaphor doesn't work.
Religious people generally condemn heretics even more strongly than nonbelievers. Liberal Christians, specifically, are generally more oppos...
The obvious next question would be to ask if you're OK with your family being tortured under the various circumstances this would suggest you would be.
I've lost the context to understand this question.
How would you react to the idea of people being tortured over the cosmological horizon, outside your past or future light-cone? Or transferred to another, undetectable universe and tortured?
I mean, it's unverifiable, but strikes me as important and not at all meaningless. (But apparently I had misinterpreted you in any case.)
...The usual version of this
Really? I honestly found it pretty unfunny.
Another thing I find interesting is that such an argument would never be set up using the example of Piss Christ or a desecrated Talmud.
Interestingly, I have seen (less well-written) versions of this argument used for anti-Christian blasphemy, including "Piss Christ".
I live in Ireland, which is known for its strong Catholic values. So ... yup, this seems to fit with your theory.
To hold that speech is interchangeable with violence is to hold that a bullet can be the appropriate answer to an argument.
I wouldn't consider a picture of Muhammad to be an "argument", would you?
What if they claimed to experience benefits from the implants? For example, they might cure certain neurological conditions.
Would you then expect them to remove the implants or be jolted?
This analysis seems to be assuming that Muslims will deconvert if only they're shown a sufficient number of pictures of Muhammad.
Got another potential b) here.
That's a good point. Humans are disturbingly good at motivated reasoning and compartmentalization on occasion.
To better approximate a perfectly-rational Bayesian reasoner (with your values).
Which, presumably, would be able to model the universe correctly complete with large numbers.
That's the theory, anyway. Y'know, the same way you'd switch in a Monty Hall problem even if you don't understand it intuitively.
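(To spell out the Monty Hall point: you can just simulate it and watch switching win, whether or not it feels intuitive. A minimal sketch - the door-numbering and trial count are arbitrary choices of mine, not from the original problem statement:)

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall: returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

random.seed(0)
n = 100_000
switch_wins = sum(monty_hall_trial(True) for _ in range(n)) / n
stay_wins = sum(monty_hall_trial(False) for _ in range(n)) / n
print(switch_wins, stay_wins)  # roughly 2/3 vs 1/3
```

Switching wins about two-thirds of the time; you don't need the intuition to trust the arithmetic.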
I think this is the OP's point - there is no (human) mind capable of caring, because human brains aren't capable of modelling numbers that large properly. If you can't contain a mind, you can't use your usual "imaginary person" modules to shift your brain into that "gear".
So - until you find a better way! - you have to sort of act as if your brain was screaming that loudly even when your brain doesn't have a voice that loud.
I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner.
This strikes me as backward reasoning - if your moral intuitions about large numbers of animals dying are broken, isn't it much more likely that you made a mistake about vegetarianism?
(Also, three dollars isn't that high a value to place on something. I can definitely believe you get more than $3 worth of utility from eating a chicken. Heck, the chicken probably cost a good bit more than $3.)
Thank you!
So ... I suspect someone might be doing that mass-downvote thing again. (To me, at least.)
Where do I go to inform a moderator so they can check?
Hey, I've listened to a lot of ideas labelled "dangerous", some of which were labelled "extremely dangerous". Haven't gone crazy yet.
I'd definitely like to discuss it with you privately, if only to compare your idea to what I already know.
I'm saying that if Sleeping Beauty's goal is to better understand the world, by performing a Bayesian update on evidence, then I think this is a form of "payoff" that gives Thirder results.
From If a tree falls on Sleeping Beauty...:
...Each interview consists of one question, “What is your credence now for the proposition that our coin landed heads?”, and the answer given will be scored according to a logarithmic scoring rule, with the aggregate result corresponding to the number of utilons (converted to dollars, let’s say) she will be penalized a
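(Under that per-awakening logarithmic scoring setup, you can check numerically which credence maximizes expected score. A minimal sketch, assuming a fair coin and the standard one-awakening-on-heads, two-awakenings-on-tails schedule; the grid search is my own illustration, not part of the quoted problem:)

```python
import math

def expected_score(p: float) -> float:
    # Fair coin: heads -> one awakening scored log(p);
    # tails -> two awakenings, each scored log(1 - p).
    return 0.5 * math.log(p) + 0.5 * 2 * math.log(1 - p)

# Grid search over credences in (0, 1) for the best expected score.
best = max((p / 1000 for p in range(1, 1000)), key=expected_score)
print(best)  # close to 1/3
```

The expected score peaks at a credence of 1/3, not 1/2 - which is the sense in which this payoff structure gives Thirder results.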
I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)
OK, well by analogy, what's the "payoff structure" for nuclear anthropics?
Obviousl...
So do regular playthroughs, though; it's a video game. The first paragraph still remarks on "how different optimal play can be from normal play."
The trouble is, anthropic evidence works. I wish it didn't, because I wish the nuclear arms race hadn't come so close to killing us (and may well have killed others), and was instead prevented by some sort of hard-to-observe cooperation.
But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor's Child, a modified Sleeping Beauty that I could go outside and play a version of right now if I wished.
The winning solution, that gives the right answer, is to use "anthropic" evidence.
If this confuses you, then I (seriously) suggest yo...
If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true.
We have math for calculating these things, based on the probability different options are true.
For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know for anthropic reasons.
But, in fact, we can reason about this uncertainty - ...
Ah, interesting! I didn't know that. Props to Limbaugh et al.
(Nationalizing airport security seems orthogonal to the TSA search issue, though.)
Oh, a failed Friendly AI might well do that. But it would probably realize that life would develop elsewhere, and take steps to prevent that.
I'm curious, how do you know they were sociopaths? You seem to imply your evidence was that they were unfaithful and generally skeevy individuals besides, but was there anything else?
(Actually, does anyone know how we know that sociopaths are better at manipulating people? I've absorbed this belief somehow, but I don't recall seeing any studies or anything.)
Firstly, I just want to second the point that this is way too interesting for, what, a fifth-level recursion?
Secondly:
One recipe for being a player is to go after lower-status (less-attractive) people, fulfill their romantic needs with a mix of planned romance, lies and bravado, have lots of sex, and then give face-saving excuses when abandoning them.
Is this ... a winning strategy? In any real sense?
I mean, yes, it's easier to sleep with unattractive people. But you don't want to sleep with unattractive people. That is what "attractiveness" r...
You can't fit billions of people in the UK.
You can, actually. It's called "the British Empire".
It was widely considered a bad idea the last time it was tried, but it is possible. The United Kingdom is not defined by its current set of borders or locations.
If Monroe was a hero, then Monroe's personality really doesn't fit with some of Quirrel's actions.
Also, the Defence Professor lied-with-truth about having stolen Quirrel's body outright "using incredibly Dark magic" when questioned on the real Quirrel's whereabouts.
... hmm. You know, depending on how separate the personalities are, it's possible the original ("zombie") Quirrel was simply stressed out of his mind from Voldemort essentially holding him prisoner in his own body.
When does the opposition to the Left ever respond with a little tit for tat? In the US, there are all sorts of people mouthing off big words about fighting government tyranny, while meekly standing by while their children are sexually assaulted by the TSA purportedly looking for nuclear weapons in their underwear.
Ah, I'm no expert in US politics, but I thought that was a Right-supported program? With what little of the Overton Window that covers "this is an absurd overreaction" lying on the metaphorical left-hand side?
if wizards were public about their abilities, a higher proportion of wizards (even low-powered wizards) in the muggle population would be identified and trained
There's no such thing as a "low-powered wizard", and all wizards in Britain are automatically detected magically (at birth?)
It is implied that in HPMOR there are - presumably third-world? - countries where they "receive no letters of any kind". So potentially a complete breakdown of the masquerade might allow the least sane Muggle governments to track down and kidnap wizarding...
Any suicide in general, and this one in particular, definitely has multiple causes. I'm really sorry if I gave the opposite impression.
But I think it's reasonable and potentially important to respond to a suicide by looking into those causes and trying to reduce them.
To be more object-level:
- Kathy was obviously mentally ill, and her particular brand of mental illness seems to have been well-known. I don't know what efforts were made to help her with that (I do get the impression some were made), but I've seen people claim her case was a
I think we do disagree on whether it's a good idea to widely spread as a message "HEY SUICIDAL PEOPLE HAVE YOU REALIZED THAT IF YOU KILL YOURSELF EVERYONE WILL SAY NICE THINGS ABOUT YOU AND WORK ON SOLVING PROBLEMS YOU CARE ABOUT LET'S MAKE SURE TO HIGHLIGHT THIS EXTENSIVELY".
I think we agree on this and we only miscommunicated with each other. Aumann points for both of us, I guess.