Thanks! Now, if only someone linked that Egan story :)
Are there any users of the spaced repetition software Mnemosyne that could help me with a technical issue? I just got the software for my Mac, and I've read in multiple places that you can import plain text files as a card deck. But on my version of Mnemosyne, I see no button saying "import files," and in fact no way at all to add more than one flashcard at a time.
My text editor is Word, and while I can save my vocabulary as a .txt file with Unicode encoding, I don't see any way to export it to Mnemosyne from there. Just to test if I understood the download/import concept at all, I tried downloading one of the free flashcard decks on the site, chose Mnemosyne as the application to open it with, and just got an error message. What am I missing here? Do I need to download a plug-in for importing to work?
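In case a concrete example helps, here's the kind of file I'm trying to import (if I've understood the documentation correctly, the importer wants one card per line, with a tab character between the question and the answer):

    bonjour	hello
    chat	cat

(The vocabulary is just an illustration, and the tabs may not survive copy-paste, but that's the idea.)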
We know stuff that our ancestors didn't know; we have capabilities that they didn't have.
I'm more than willing to agree that our ancestors were factually confused, but I think it's important to distinguish between moral and factual confusion. Consider the following quote from C.S. Lewis:
I have met people who exaggerate the differences [between the morality of different cultures], because they have not distinguished between differences of morality and differences of belief about facts. For example, one man said to me, "Three hundred years ago people in England were putting witches to death. Was that what you call the Rule of Human Nature or Right Conduct?" But surely the reason we do not execute witches is that we do not believe there are such things. If we did--if we really thought that there were people going about who had sold themselves to the devil and received supernatural powers from him in return and were using these powers to kill their neighbors or drive them mad or bring bad weather--surely we would all agree that if anyone deserved the death penalty, then these filthy quislings did. There is no difference of moral principle here: the difference is simply about matter of fact. It may be a great advance in knowledge not to believe in witches: there is no moral advance in not executing them when you do not think they are there. You would not call a man humane for ceasing to set mousetraps if he did so because he believed there were no mice in the house.
I think our ancestors were primarily factually, rather than morally, confused. I don't see strong reasons to believe that humans over time have made moral, as opposed to factual, progress, and I think attempts to convince me and people like me that we should care about animals should rest primarily on factual, rather than moral, arguments (e.g. claims that smarter animals like pigs are more psychologically similar to humans than I think they are).
If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures
Cool. Then we're in agreement about the practical consequences (humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead), which is fine with me.
If I write a computer program with a variable called isSuffering that I set to true, is it suffering?
(I have no idea how consciousness works, so in general, I can't answer these sorts of questions, but) in this case I feel extremely confident saying No, because the variable names in the source code of present-day computer programs can't affect what the program is actually doing.
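To make that concrete with a toy example of my own (a hypothetical program, not anyone's actual code): renaming a variable doesn't change what the program computes, so whatever could make a computation suffer would have to live in its behavior, not in its labels.

    # Two programs that differ only in a variable name are the same
    # program as far as execution is concerned.
    def program_a():
        is_suffering = True
        return is_suffering

    def program_b():
        x = True  # same computation, different label
        return x

    # Identical observable behavior: the name is for human readers only.
    assert program_a() == program_b()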
humans, right now, who are spending time and effort to fight animal suffering should be spending their time and effort to fight human suffering instead
That doesn't follow if it turns out that preventing animal suffering is sufficiently cheap.
Why would the suffering of one species be more important than the suffering of another?
Because one of those species is mine?
I'm not sure I can offer a good argument against "human suffering is more important", because it strikes me as so completely arbitrary and unjustified that I'm not sure what the arguments for it would be.
Historically, most humans have viewed a much smaller set of (living, mortal) organisms as being the set of (living, mortal) organisms whose suffering matters, e.g. human members of their own tribe. How would you classify these humans? Would you say that their morality is arbitrary and unjustified? If so, I wonder why they're so similar. If I were to imagine a collection of arbitrary moralities, I'd expect it to look much more diverse than this. Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now? If so, have you read gwern's The Narrowing Circle (which is the reason for the living and mortal qualifiers above)?
There is something in human nature that cares about things similar to itself. Even if we're currently infected with memes suggesting that this something should be rejected insofar as it distinguishes between different humans (and I think we should be honest with ourselves about the extent to which this is a contingent fact about current moral fashions rather than a deep moral truth), trying to reject it as much as we can is forgetting that we're rebelling within nature.
I care about humans because I think that in principle I'm capable of having a meaningful interaction with any human: in principle, I could talk to them, laugh with them, cry with them, sing with them, dance with them... I can't do any of these things with, say, a fish. When I ask my brain in what category it places fish, it responds "natural resources." And natural resources should be conserved, of course (for the sake of future humans), but I don't assign them moral value.
Would you also say that they were all morally confused and that we have made a great deal of moral progress from most of history until now?
Yes! We know stuff that our ancestors didn't know; we have capabilities that they didn't have. If pain and suffering are bad when implemented in my skull, then they also have to be bad when implemented elsewhere. Yes, given bounded resources, I'm going to protect me and my friends and other humans before worrying about other creatures, but that's not because nonhumans don't matter, but because in this horribly, monstrously unfair universe, we are forced to make tradeoffs. We do what we must, but that doesn't make it okay.
I take it the name is a coincidence.
nazgulnarsil: "What is bad about this scenario? the genie himself [sic] said it will only be a few decades before women and men can be reunited if they choose. what's a few decades?"
That's the most horrifying part of all, though--they won't so choose! By the time the women and men reïnvent enough technology to build interplanetary spacecraft, they'll be so happy that they won't want to get back together again. It's tempting to think that the humans can just choose to be unhappy until they build the requisite technology for reünification--but you probably can't sulk for twenty years straight, even if you want to, even if everything you currently care about depends on it. We might wish that some of our values were so deeply held that no circumstances could possibly make us change them, but in the face of an environment superintelligently optimized to change our values, it probably just isn't so. The space of possible environments is so large compared to the narrow set of outcomes that we would genuinely call a win that even the people on the freak planets (see de Blanc's comment above) will probably be made happy in some way that their pre-Singularity selves would find horrifying. Scary, scary, scary. I'm donating twenty dollars to SIAI right now.
Hey, Z. M., you know the things people in your native subculture have been saying about most of human speech being about signaling and politics rather than conveying information? You probably won't understand what I'm talking about for another four years and one month, and perhaps you'd be wise not to listen to this sort of thing coming from anyone but me, but ... the parent is actually a nice case study.
I certainly agree that the world of "Failed Utopia 4-2" is not an optimal future, but as other commenters have pointed out, well ... it is better than what we have now. Eternal happiness in exchange for splitting up the species, never seeing your other-sex friends and family again? Certainly not a Pareto improvement amongst humane values, but a hell of a Kaldor-Hicks improvement. So why didn't you notice? Why am I speaking of this in such a detached manner, whereas you make a (not very plausible, by the way---you might want to work on that) effort to appear as horrified as possible?
Because politics. You and I, we're androgyny fans: we want to see a world without strict gender roles and with less male/female conflict, and we think it's sad that so much of humanoid mindspace goes unexplored because of the whole sexual dimorphism thing, and all of this seems like something worth protecting, so whenever you read something that your brain construes as "sexist," your brain makes sure to get offended and outraged. Why does that happen? I don't know: high IQ, high Openness boy somehow picks up a paraphilia, falls hard for the late-twentieth-century propaganda about human equality and nondiscrimination, learns about transhumanism, feminism, evolutionary psychology, and rationality in that order? But look. However it happened, there are probably better strategies for protecting whatever-it-is we should protect than feigning shock. Especially in this venue, where people should know better.
When I imagine a Friendly AI, I imagine a hands-off benefactor who permits people to do anything they wish that won't result in harm to others.
Yeah, I like personal freedom, too, but you have to realize that this is massively, massively underspecified. What exactly constitutes "harm", and what specific mechanisms are in place to prevent it? Presumably a punch in the face is "harm"; what about an unexpected pat on the back? What about all other possible forms of physical contact that you don't know how to consider in advance? If loud verbal abuse is harm, what about polite criticism? What about all other possible ways of affecting someone via sound waves that you don't know how to consider in advance? &c., ad infinitum.
Does anybody envisage a Friendly AI which doesn't correspond more or less directly with their own political beliefs?
I'm starting to think this entire idea of "having political beliefs" is crazy. There are all sorts of possible forms of human social organization, which result in various outcomes for the humans involved; how am I supposed to know which one is best for people? From what I know about economics, I can point out some reasons to believe that market-like systems have some useful properties, but that doesn't mean I should run around shouting "Yay Libertarianism Forever!" because then what happens when someone implements some form of libertarianism, and it turns out to be terrible?
Implicit in Szabo's argument is that you may be doing the equivalent of picking up pennies on railroad tracks.
I like that metaphor, but, you know, decision under uncertainty: we're on the railroad tracks already, and I'm going to pick up as much free money as I think I can get away with, because I no longer trust the schoolteachers and cops who taught me to sit still and wait for the train.
So, akrasia is no longer a significant problem or obstacle in your life?
No, sorry, that's not what I meant. It's more like---previously, I must have been implicitly thinking of "rationality" as being about verbal intellectual discourse, like the sort of thing we do here. Whereas now it's as if I'm finally starting to glimpse this idea of probability and decision theory as constraints on coherent behavior, with speaking and writing merely being particular types of human behavior that happen to be particularly salient to us, even though the real world is made out of simpler parts that we don't usually think about.
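Here's a toy example of the kind of constraint I mean (my own illustration, with made-up numbers): if your betting prices violate the probability axioms, you can be made to lose money no matter what happens, however eloquently you talk about rationality.

    # Incoherent credences: P(rain) + P(no rain) = 1.2 > 1.
    price_rain, price_no_rain = 0.6, 0.6
    # You treat a ticket paying 1 if rain (resp. no rain) as fair at
    # its stated price, so you'll buy both tickets.
    cost = price_rain + price_no_rain  # you pay 1.2 up front
    payoff = 1.0                       # exactly one ticket pays out
    print(cost - payoff)               # 0.2: a guaranteed loss either way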
the tools of epistemic rationality, as they're taught in the Sequences, can improve your health, your career, your love life, the causes you care about, your psychological well-being, and so on.
I'm skeptical. The Less Wrong canon is great for training a particular set of widely applicable abstract thinking skills, but that's not the same thing as domain-general awesomeness. See Yvain's 2009 post "Extreme Rationality: It's Not That Great." The sort of people who are receptive to this material aren't primarily being held back by insufficient rationality: the problem is akrasia, the lack of motivation to carry out our gloriously rational plans.
One might argue that it is by means of rationality that we will discover and implement effective anti-akrasia techniques. Yes, I hope so, too. But I haven't gotten it to work yet.
Never mind; I was doing it wrong.
