All of scientism's Comments + Replies

scientism110

Right, but it's that sort of transition from the descriptive to the prescriptive that I'm highlighting. In liberal philosophy the issue is much more subtle, but there has been a constant interchange between the descriptive and the prescriptive. So if you look at society as sovereign individuals engaged in contractual relationships with one another, that's essentially descriptive. It was intended to be descriptive. But then your model for why individuals give up some of their rights to have a state doesn't look right, and the answer to that isn't to change ... (read more)

-1[anonymous]
IMHO the issue is that this kind of individualism in Western society was created, for wealthy white males, really long ago. Roughly the late 18th century. So anyone without an explicit interest in history, esp. from the angle of questioning the whole modern epoch, will see this individualism as old, established, traditional stuff, i.e. pretty much conservative stuff. In the West, pretty much every step of progressivism, leftism or liberalism since then has largely been about expanding it to other people: poor white males, non-whites, women etc. So you have the problem here that once one group of individuals got it, it is hard to defend why others should not. The issue is with the first group having it in the first place, but that is a really old story, and so old that it looks downright conservative.
1Capla
Can I leave society if I don't like it? Can I free myself from its constraints and take advantage of it as an outsider? If not, why not?
scientism210

If you care about culture, (traditional) values and intact families, then democracy is empirically very bad (far from being "the worst form of government, except for all the others," it would place among the very worst). The question is then how you come to care about these things. For me it proceeded negatively: from a critical reading of political philosophy, I came to believe that the foundations of liberalism are incoherent; that what liberalism sees as constraints on individual freedom are nothing of the sort. That many of the norms, value... (read more)

2[anonymous]
This is all fine, but let's move one level higher. What is the reason that almost everybody who reasoned like this was religious? Why does it seem like this kind of thinking is fairly impossible to defend without some reference to religion, e.g. https://bonald.wordpress.com/the-conservative-vision-of-authority/ ? (I am assuming we all agree here that ideas that cannot be defended on secular grounds are not worth defending.)

Basically it sounds a lot like the conflict between human volition and actual happiness or good lives. A lot of modern liberalism reduces to "if you get what you personally want, you will be happy / OK". So it is all about moving people from inborn roles to roles they want and choose. And a lot of religious thought is all about trying to convince people to reduce or give up their self-centered volition, desires, and viewpoints, basically to convince them to find happiness through means other than following their own wills. I am aware of this because I practiced a lot of Buddhism, which uniquely focuses on how the ego, the will, volition, vanity, is the source of suffering itself. Much of Christianity sounds like a half-assed version of a Buddhist ego-reduction therapy: when people get down on their knees and pray "your will be done" it essentially means "NOT my will be done; I will train my brain to accept that the world does not revolve around me." The core idea in Buddhism, Christianity etc. is that there is true happiness to be found in surrendering your will. THIS is the psychological basis from which we can understand the difference between traditional and modern societies. This is why reactionaries are mostly religious.

The question is, just why can't we justify this non-egocentric psychology on a scientific basis? Why do we need religion for this? Why can't we figure it out naturalistically? And if we cannot figure it out naturalistically, scientifically, isn't it likely that it is at some level wrong?
0TheAncientGeek
......even by the individuals affected? If they tell you that they hate being forced into a particular role, are you going to tell them that their feelings don't matter, because you can prove logically that their role is non-voluntary, and that they can't rebel against their identity? It's worth noting both that the oppressed were often denied a voice, in the sense of leaving a written record, as part of their oppression, and that there is plenty of evidence of dissent, in the form of popular revolt.

It's also worth noting the difference between far-left and classically liberal versions of this argument. (A perennial problem with Moldbug is the way he conflates progressivism qua the leftmost 10% of the spectrum with progressivism qua the leftmost 90%.) The classical liberal does not regard traditional societies as morally wrong so much as instrumentally wrong: unsuited to economic and technological progress. Where you have a traditional, hierarchical society, the rulers of that society are under a set of incentives to defend their relative position, which is to say they are not incentivised to promote innovation. On the contrary, technological developments can even sunset them, as the English aristocracy was disrupted by the Industrial Revolution. Liberal democracies, by contrast, are so good at reaping the benefits of progress that they are able to attract queues of would-be immigrants from more traditional societies.
0Ixiel
Wow. This makes a lot of sense. I had previously thought of the term neoreactionary as just an insult, as similar to any view as "asshole" is to any anatomy. Now I think it is at least in the top half of similar ideologies. Thank you for the answer and thanks to the original poster for the question.
8Nornagest
It's not just society. It's more like he looked at Marx's (flawed, yes) model, thought "that's cool and all, but I want to feed the Tsar his yarbles now", and hit it with a wrench until it gave him some half-assed philosophical justification for starting a revolution (and later for running a totalitarian state, though not as totalitarian as Stalin would make it). See, orthodox Marxism isn't really a blueprint for revolution. Insofar as it's even a call to revolution, it's saying -- to the industrial workers of the entire world, and that's important -- that revolution is inevitable, it's going to happen anyway, the only thing holding it back from happening is self-delusion. Instead, it's better understood as a future history: it purports to lay out the historical forces that drive large-scale social changes and to predict what the next one's going to look like. Now, there are a number of ways you could challenge that in light of the real history that's happened in the century and a half since Marx wrote. But Lenin had bigger problems than that. By Marx's lights, Russia in 1917 wasn't ready for a communist revolution: it was at the time the least industrialized major European country (relative to its population), with most of the economy still running on a semi-feudal agrarian system. Its serfs had been emancipated less than a century before. Worse, the rest of the world looked like it wasn't going to be getting on the revolution train anytime soon. This ran completely counter to Marx's future history, but Lenin, in essence, said "fuck it, we'll do it anyway".
scientism450

[Please read the OP before voting. Special voting rules apply.]

Superintelligence is an incoherent concept. Intelligence explosion isn't possible.

D_Malik110

How smart does a mind have to be to qualify as a "superintelligence"? It's pretty clear that intelligence can go a lot higher than current levels.

What do you predict would happen if we uploaded Von Neumann's brain onto an extremely fast, planet-sized supercomputer? What do you predict would happen if we selectively bred humans for intelligence for a couple million years? "Impractical" would be understandable, but I don't see how you can believe superintelligence is "incoherent".

As for "Intelligence explosion isn't possible", that's a lot more reasonable, e.g. see the entire AI foom debate.

scientism100

All three projects - liberalism, socialism and progressivism - are related by common commitments that have their origins in Enlightenment political philosophy. Because progressives believe in systemic oppression, they have to alleviate systemic oppression in order to achieve liberty: we won't be truly free until we're free from racism, sexism, etc. They're still committed to value pluralism. All three projects faced the (paradoxical) issue of having to attain state power in order to enforce their vision. Liberal democracy was often created on the back of v... (read more)

1Matthew_Opitz
Yes, the common thread is the Enlightenment. See my response to Lumifer regarding who the "progressives" I am talking about are. They are not necessarily the people in America who flock to the Democratic Party. I don't think neoreactionaries are just complaining about Democrats when they go after "progressivism." They have a far broader target in mind—the Enlightenment, I guess, more or less.
-6Lumifer

It's an apt description of liberalism, of which progressivism is a species, which is defined by an open pluralism regarding what counts as the good. Progressives add a belief in systemic oppression - i.e., oppression by cultural norms and values, which they try to alleviate, but the goal is the same as classical liberalism: liberty from perceived oppression. Regardless, if you conceive of society as a power-structure, whether you take the classical liberal belief that we're oppressed by state and church, the socialist belief that we're oppressed by class s... (read more)

7Lumifer
That's classical liberalism and I don't count contemporary progressives, at least in the US, as belonging to it. The contemporary progressives have very... fixed ideas about what counts as good and are quite intolerant of people who dare to think otherwise. Not to mention that they have a love affair with state power.
4Viliam_Bur
Hawking: Tyson: Seems to me they are saying that inferential distance between philosophers and modern physics is too big, so unless philosophers start seriously studying physics, they will no longer be able to contribute meaningfully using their intuition only. (Of course they can still provide some Deep Wisdom, or post-facto explanations.)
3advancedatheist
If philosophy had actually converged onto an agreed upon view of the world, today's physicists might regard it differently.

Yes, when I gave up consequentialism for virtue ethics it was both a huge source of personal insight and led to insights into politics, economics, management, etc. I'm of the belief that the central problem in modern society is that we inherited a bad moral philosophy and applied it to politics, management, the economy, personal relationships, etc.

5mwengler
So you gave up consequentialism because virtue ethics had better consequences?
3blacktrance
I don't think that "modern society" has anything coherent enough to be called a moral philosophy. Scattered moral intuitions, perhaps, but not a philosophy. Also, virtue ethics and consequentialism are orthogonal. I'm a virtue ethicist and a consequentialist.
-3Shmi
It's not either/or and no, you haven't, not completely.

I'm not sure why you're dismissing building character as an explanation. Something builds character if it helps a person develop virtues such as patience, perseverance, humility, temperance, etc. Committing to a difficult activity can obviously do this, perhaps more so if it is not instrumental. There's also the sense in which an activity can be a test of character, so that completing it reveals (to oneself and others) virtues (or room for improvement). I find "direct hedonic value" far more suspicious, since most "rewarding" activities... (read more)

5Viliam_Bur
Perhaps because the character could be built just as well doing something else, equally difficult but more important? It's not just a dilemma between doing a character-building thing and slacking off. It's also why you chose this specific character-building thing instead of another. It's not just a choice between playing piano and watching TV, but also between playing piano and learning programming or building a startup. Character building answers only the first part of the question.

It depends what you mean by transformative. Perhaps there aren't many innovations left that would change the lives of ordinary people, but there are plenty that would change the scale of our civilisation: space industry, robotics, fusion, etc.

scientism220

Here's an interesting contrast: When I first moved from a small town to a big city I was fascinated by the fact that people cannot perform the simple task of walking down the street. Their attention is constantly being drawn to other things, they apparently have no awareness of or concern for other people, etc. They're constantly stopping dead in front of you, even though they're certainly aware they're on a busy street. They talk on their phones, text, play games, they even walk along reading novels. If they meet someone they know, they'll stop and have a... (read more)

scientism170

I try to view problems as opportunities. If it's raining outside, that's training in the rain. Snowing? Awesome, snow running! Too hot? High-temperature training. Too cold? Low-temperature training. I'm too tired? Fatigue training. I also try to look at things from what I call a "meditative" point of view. So let's say I'm out running my regular route but it's cold, windy, raining, etc, and I feel miserable. I try to remember how I felt running the same route on a beautiful day and bring my mind back to that state. Or if I'm fatigued, I try to rem... (read more)

1maia
There's also the option of trying to find a method of exercise that avoids as many of those willpower-draining hassles as possible. If you can, that is.
1DataPacRat
I remember once playing around with 'choosing to be happy', some years ago, but had completely forgotten about the trick until now. Certainly worth a shot.

Mere stipulation secures very little though. Consider the following scenario: I start wearing a medallion around my neck and stipulate that, so long as this medallion survives intact, I am to be considered alive, regardless of what befalls me. This is essentially equivalent to what you'd be doing in stipulating survival in the uploading scenario. You'd secure 'survival', perhaps, but the would-be uploader has a lot more work to do. You need also to stipulate that when the upload says "On my 6th birthday..." he's referring to your 6th birthday, e... (read more)

0torekp
No, and that is the point. There are serious drawbacks of the usual notions of welfare, at least in the high-tech future we are discussing, and they need serious correcting. Although, as I mentioned earlier, coining new words for the new concepts would probably facilitate communication better, especially when revisionaries and conservatives converse. So maybe "Yi" could be the pronoun for "miy" branching future, in which Yi go to Mars as well as staying home, to be merged later. There is no contradiction, either: my welfare is what I thought I cared about in a certain constellation of cares, but now Yi realize that was a mistake. Misconceptions of what we truly desire or like are, of course, par for the course for human beings; and so are corrections of those conceptions.

I agree that uploading is copying-then-death. I think you're basically correct with your thought experiment, but your worries about vagueness are unfounded. The appropriate question is what counts as death? Consider the following two scenarios: 1. A copy of you is stored on a supercomputer and you're then obliterated in a furnace. 2. A procedure is being performed on your brain, you're awake the entire time, and you remain coherent throughout. In scenario 1 we have a paradigmatic example of death: obliteration in a furnace. In scenario 2 we have a paradigm... (read more)

2Jonathan Paulson
You are dodging the question by appealing to the dictionary. The dictionary will not prove for you that identity is tied to your body, which is the issue at hand (not "whether your body dies as the result of copying-then-death", which, as you point out, is trivial).
2torekp
All true, but it just strengthens the case for what you call "stipulating a new meaning for the words 'survival', 'death', etc". Or perhaps, making up new words to replace those. Contemplating cases like these makes me realize that I have stopped caring about 'death' in its old exact meaning. In some scenarios "this will kill you" becomes a mere technicality.

Move something eye-catching into an odd place where you'll see it shortly after waking up in the morning. Whenever you see it say to yourself, "I put that there."

I'm not sure about introspectionism, but I'm sure you could find theories that have produced bad outcomes and had mainstream acceptance, particularly in medicine. I suppose the alternative is to remain noncommittal.

0irrational
That's very useful, actually. I think I have a tendency to just accept the latest medical theory/practice as being the best guess that the most qualified people made with the current state of evidence. Which may be really suboptimal if they don't have a lot of evidence for it, and perhaps it should be independently examined if it concerns you personally. I am not sure what degree of belief to assign such things, though, because I have no experience with them. Do you, or anyone, have an idea of how trustworthy such things generally are, in the modern age? Are there statistics about how often mainstream approaches are later proven to be harmful (and how often merely suboptimal)?

Look at something like psychology. If you'd deferred to the leading authorities over the past 100 years, you would have been an introspectionist, then a behaviourist, then a cognitive scientist and now you'd probably be a cognitive neuroscientist. Note that these paradigms primarily differ on what they think counts as evidence, rather than quality or quantity of evidence. They all performed experiments. They share many of the same experimental methods. They all had numerous results they could point to and a neat story about how the same method could be car... (read more)

0irrational
I think you are right, but is it so bad? If I were living at the time of the introspectionists, was there a better alternative for me? I suspect that unless I personally worked out some other theory (unlikely), I'd have to either take that one or something equally bad. Maybe it's slightly different around the boundaries of these paradigm shifts, where I could possibly adopt the new ideas before the mainstream did, but most of the time it wouldn't happen. I am far from being confident that I'd do a better job personally than the general consensus, even if that tends to be very conservative.

"I'm in the habit of talking about my original's experiences as though they're mine, because I experience them as though they were" appears to be a form of delusion to me. If somebody went around pretending to be Napoleon (answering to the name Napoleon, talking about having done the things Napoleon did, etc) and answered all questions as if they were Napoleon but, when challenged, reassured you that of course they're not Napoleon, they just have the habit of talking as if they are Napoleon because they experience life as Napoleon would, would yo... (read more)

2TheOtherDave
It has nothing to do with being content. If someone believes they are Napoleon, I consider them deluded, whether they are content or not. Conversely, if they don't believe they are Napoleon, I don't consider them deluded, whether they are content or not. In the example you give, I would probably suspect the person of lying to me. More generally: before I call something a delusion, I require that someone actually believe it's true. At this moment, you and I both know that I wrote this comment... we both have knowledge of what I did. In addition to that, I can remember writing it, and you can't. I can have the experience of that memory; you can't. The experience of memory isn't the same thing as the knowledge of what I did.

If the duplicate says "I did X on my nth birthday" it's not true since it didn't even exist. If I claim that I met Shakespeare you can say, "But you weren't even born!" So what does the duplicate say when I point out that it didn't exist at that time? "I did but in a different body" (or "I was a different body")? That implies that something has been transferred. Or does it say, "A different body did, not me"? But then it has no relationship with that body at all. Or perhaps it says, "The Original did X... (read more)

0TheOtherDave
Correct. When talking to you, or someone who shares your attitude, my duplicate probably says something like "You're right, of course. I'm in the habit of talking about my original's experiences as though they're mine, because I experience them as though they were, and both I and my original are perfectly happy talking that way and will probably keep doing so. But technically speaking you're quite correct... I didn't actually do X on my 9th birthday, nor did I have a 9th birthday to do anything on in the first place. Thanks for pointing that out." Which is closest to your last option, I suppose. Incidentally, my duplicate likely does this in roughly the same tone of voice that an adoptive child might say analogous things when someone corrects their reference to "my parents" by claiming that no, their parents didn't do any of that, their adoptive parents did. If you were to infer a certain hostility from that tone, you would not be incorrect. It's not difficult for me to call this a memory at all... it's the original's memory, which has been copied to and is being experienced by the duplicate. But if you'd rather come up with some special word for that to avoid confusion with a memory experienced by the same body that formed it in the first place, that's OK with me too. (I choose not to refer to it as "knowledge of what the original did", both because that's unwieldy and because it ignores the experiential nature of memory, which I value.) Sufficient similarity to the original. Which is what we typically mean when we say that X is a duplicate of Y.

The preferences aren't symmetrical. Discovering that you're a duplicate involves discovering that you've been deceived or that you're delusional, whereas dying is dying. From the point of view of the duplicate, what you're saying amounts to borderline solipsism; you don't care if any of your beliefs, memories, etc, match up with reality. You think being deluded is acceptable as long as the delusion is sufficiently complete. From your point of view, you don't care about your survival, as long as somebody is deluded into thinking they're you.

0TheOtherDave
There's no delusion or deception involved in any of the examples I gave. In each example the duplicate knows it's the duplicate, the original knows it's the original; at no time does the duplicate believe it's the original. The original knows it's going to die. The duplicate does not believe that its memories reflect events that occurred to its body; it knows perfectly well that those events occurred to a different body. Everyone in each of those examples knows everything relevant. No, this isn't true. There are lots of scenarios in which I would greatly prefer my survival to someone being deluded into thinking that they're me after my death. And, as I said above, the scenarios I describe don't involve anyone being deluded about anything; the duplicate knows perfectly well that it's the duplicate and not the original.

Well, I would say the question of whether the ball had the "same" bounciness when you filled it back up with air would either mean just that it bounces the same way (i.e., has the same amount of air in it) or is meaningless. The same goes for your faculties. I don't think the question of whether you're the same person when you wake up as when you went to sleep - absent your being abducted and replaced with a doppelgänger - is meaningful. What would "sameness" or "difference" here mean? That seems to me to be another case of concei... (read more)

2TheOtherDave
I'm content to say that, though I'd also be content to say that sufficient loss of faculties (e.g., due to a stroke while I slept) can destroy my identity, making me no longer the same person. Ultimately I consider this a question about words, not about things. Well, physical change is constant in living systems, so the whole notion of "without physical change" is somewhat bewildering. But I'm not assuming the absence of any particular physical change. Sure, that's fine. I don't insist otherwise. I just don't think the condition you refer to as "being the same person" is a condition that matters. I simply don't care whether they're the same person or not, as long as various other conditions obtain. Same-person-ness provides no differential value on its own, over and above the sum of the value of the various attributes that it implies. I don't see any reason to concern myself with it, and I think the degree to which you concern yourself with it here is unjustified, and the idea that there's some objective sense in which it's valuable is just goofy. Again: so what? Why should I care? I don't claim that your understanding of sameness is false, nor do I claim it's meaningless, I just claim it's valueless. OK, he's not the same person. So what? What makes sameness important? To turn it around: suppose I am informed right now that I'm not the same person who did X on Dave's 9th birthday, that person died in 2012 and I'm a duplicate with all the same memories, personality, etc. I didn't actually marry my husband, I didn't _actually_ buy my house, I'm not actually my dog's owner, I wasn't actually hired to do my job. This is certainly startling, and I'd greet such a claim with skepticism, but ultimately: why in the world should I care? What difference does it make? Prefer to what? So, as above, I'm informed that I'm actually a duplicate of Dave. Do I prefer this state of affairs to the one where Dave didn't die in 2012 and I was never created? No, not especially... I

It's the loss of faculties that constitutes the loss of identity, but faculties aren't transferable. For example, a ball might lose its bounciness if it is deflated and regain it if it is reinflated, but there's no such thing as transferring bounciness from one ball to another or one ball having the bounciness of another. The various faculties that constitute my identity can be lost and sometimes regained but cannot be transferred or stored. They have no separate existence.

1TheOtherDave
Ah, gotcha. Yeah, here again, I just can't imagine why I ought to care. I mean, I agree that the attributes can't be "stored" if I understand what you mean by that. When I remove the air from a ball, there is no more bounciness; when I add air to a ball, there is bounciness again; in between, there is no bounciness. If I do that carefully enough, the bounciness now is in-principle indistinguishable from the bounciness then, but that's really all I can say. Sure. That said, while I can imagine caring whether my ball bounces or not, and I can imagine caring whether my ball bounces in particular ways, if my ball bounces exactly the way it did five minutes ago I can't imagine caring whether what it has now is the same bounciness, or merely in-principle indistinguishable bounciness. To me, this seems like an obvious case of having distinctions between words that simply don't map to distinctions between states of the world, and getting too caught up in the words. By contrast, I can imagine caring whether I have the same faculties that constitute my identity as the guy who went to bed in my room last night, or merely in-principle indistinguishable faculties, in much the same way that I can imagine caring about whether my immortal soul goes to Heaven or Hell after I die. But it pretty much requires that I not think about the question carefully, because otherwise I conclude pretty quickly that I have no grounds whatsoever for caring, any more than I do about the ball. So, yeah... I'd still much rather be survived by something that has memories, personality, and other identity-constituting faculties which are in-principle indistinguishable from my own, but doesn't share any of my cells (all of which are now tied up in my rapidly-cooling corpse), than by something that shares all of my cells but loses a significant chunk of those faculties. Which I suppose gets us back to the same question of incompatible values we had the other day. That is, you think the above is clear

I wouldn't say that a brain transplant is nothing at all like a heart transplant. I don't take the brain to have any special properties. However, this is one of those situations where identity can become vague. These things lie on a continuum. The brain is tied up with everything we do, all the ways in which we express our identity, so it's more related to identity than the heart. People with severe brain damage can suffer a loss of identity (i.e., severe memory loss, severe personality change, permanent vegetative state, etc). You can be rough and ready w... (read more)

2TheOtherDave
Though, now that I think about it... Wait up. On your account, why should we call those things (memory loss, personality change, loss of cognitive ability) "loss of identity"? If something that has my memories, personality, and cognitive abilities doesn't have my identity, then it seems to follow that something lacking those things doesn't lack my identity. It seems that on your account those things are no more "loss of identity" than losing an arm or a kidney.
1TheOtherDave
Lots of things survive. They just don't happen to be part of the original body. Yes, I think given your understanding of those words, that's entirely correct. My life with that "description" deleted is not worth very much to me; the continued development of that "description" is worth a lot more. Right back atcha.

I don't see how using more detailed measurements makes it any less a cultural practice. There isn't a limit you can pass where doing something according to a standard suddenly becomes a physical relationship. Regardless, consider that you could create as many copies to that standard as you wished, so you now have a one-to-many relationship of "identity" according to your scenario. Such a type-token relationship is typical of norm-based standards (such as mediums of representation) because they are norm-based standards (that is, because you can make as many according to the standard as you wish).

1Cyan
I'm not saying it's not a cultural practice. I'm saying that the brute fact of the extreme degree of similarity (and resulting reproducibility of functionality) of the pre- and post-process brains seems like a much more relevant fact. I don't know why I should care that the process is a cultural artifact if the pre- and post-process brains are so similar that for all possible inputs, they produce the same outputs. That I can get more brains out than I put in is a feature, not a bug, even though it makes the concept of a singular identity obsolete.

That's essentially correct. Preservation of your brain is preservation of your brain, whereas preservation of a representation of your brain (X) is not preservation of your brain or any aspect of you. The existence of a representation of you (regardless of detail) has no relationship to your survival whatsoever. Some people want to be remembered after they're dead, so I suppose having a likeness of yourself created could be a way to achieve that (albeit an ethically questionable one if it involved creating a living being).

6TheOtherDave
OK, I think I understand your position. So, suppose I develop a life-threatening heart condition, and have the following conversation with my cardiologist: Her: We've developed this marvelous new artificial heart, and I recommend installing it in place of your damaged organic heart. Me: Oh, is it easier to repair my heart outside of my body? Her: No, no... we wouldn't repair your heart, we'd replace it. Me: But what would happen to my heart? Her: Um... well, we typically incinerate it. Me: But that's awful! It's my heart. You're proposing destroying my heart!!! Her: I don't think you quite understand. The artificial heart can pump blood through your body just as well as your original heart... better, actually, given your condition. Me: Sure, I understand that, but that's mere function. I believe you can replicate the functions of my heart, but if you don't preserve my heart, what's the value of that? I infer that on your account, I'm being completely absurd in this example, since the artificial heart can facilitate my survival just as well as (or better than) my original one, because really all I ought to value here is the functions. As long as my blood is pumping, etc., I should be content. (Yes? Or have I misrepresented your view of heart replacement?) I also infer that you would further say that this example is nothing at all like a superficially similar example where it's my brain that's injured and my doctor is proposing replacing it with an artificial brain that merely replicates the functions of my brain (representation, information storage, computation and so forth). In that case, I infer, you would not consider my response absurd at all, since it really is the brain (and not merely its functions) that matters. Am I correct? If so, I conclude that I just have different values than you do. I don't care about my brain, except insofar as it's the only substrate I know of capable of implementing my X. If my survival requires the preservation of my brain, then

The brain constructed in your likeness is only normatively related to your brain. That's the point I'm making. The step where you make a description of the brain is done according to a practice of representation. There is no causal relationship between the initial brain and the created brain. (Or, rather, any causal relationship is massively disperse through human society and history.) It's a human being, or perhaps a computer programmed by human beings, in a cultural context with certain practices of representation, that creates the brain according to a s... (read more)

0Cyan
My response to this is very simple, but it's necessary to know beforehand that the brain's operation is robust to many low-level variations, e.g., thermal noise that triggers occasional random action potentials at a low rate. Suppose our standard is that we get it right when the reconstructed brain is more like the original brain just before cryonic preservation than a brain after a good night's sleep is like that same brain before sleeping -- within the subset of brain features that are not robust to variation. Further suppose that that standard is achieved through a process that involves a representation of the structure of the brain. Albeit that the representation is indeed a "cultural practice", the brute fact of the extreme degree of similarity of the pre- and post-process brains would seem much more relevant to the question of preservation of any aspect of the brain worthy of being called "identity". ETA: Thinking about this a bit more, I see that the notion of "similarity" in the above argument is also vulnerable to the charge of being a mere cultural practice. So let me clarify that the kind of similarity I have in mind basically maps to reproducibility of the input-output relation of a low-level functional unit, up to, say, the magnitude of thermal noise. Reproducibility in this sense has empirical content; it is not merely culturally constructed.

In the example being discussed we have a body. I can't think of a clearer example of death than one where you can point to the corpse or remains. You couldn't assert that you died 25 minutes ago - since death is the termination of your existence and so logically precludes asserting anything (nothing could count as evidence for you doing anything after death, although your corpse might do things) - but if somebody else asserted that you died 25 minutes ago then they could presumably point to your remains, or explain what happened to them. If you continued t... (read more)

0TheOtherDave
OK, I think I understand what you mean by "clear" now. Thanks.

I take it that my death and the being's ab initio creation are both facts. These aren't theoretical claims. The claim that I am "really" a description of my brain (that I am information, pattern, etc) is as nonsensical as the claim that I am really my own portrait, and so couldn't amount to a theory. In fact, the situation is analogous to someone taking a photo of my corpse and creating a being based on its likeness. The accuracy of the resulting being's behaviour, its ability to fool others, and its own confused state doesn't make any difference... (read more)

1Cyan
In the hypothetical, your brain has stopped functioning. Whether this is sufficient to affirm that you died is precisely the question at issue. Personally, it doesn't matter to me if my brain's current structure is the product of biological mechanisms operating continuously by physical law or is the product of, say, a 3D printer and a cryonically-created template -- also operating by physical law. Both brains are causally related to my past self in enough detail to make the resulting brain me in every way that matters to me. Curious that she used the transmission+reconstruction module while committing "suicide", innit? She didn't have to -- it was a deliberate choice.
0TheOtherDave
I don't know what the word "clear" in that sentence actually means. If you're simply asserting that what has occurred in this example is your death, then no, it isn't clear, any more than if I assert that I actually died 25 minutes ago, that's clear evidence that Internet commenting after death is possible. I'm not saying you're necessarily wrong... I mean, sure, it's possible that you're correct, and in your hypothetical scenario you actually are dead, despite the continued existence of something that acts like you and believes itself to be you. It's also possible that in my hypothetical scenario I'm correct and I really did die 25 minutes ago, despite the continued existence of something that acts like me and believes itself to be me. I'm just saying it isn't clear... in other words, that it's also possible that one or both of us is confused/mistaken about what it means for us to die and/or remain alive.

It would have false memories, etc, and having my false memories, it would presumably know that these are false memories and that it has no right to assume my identity, contact my friends and family, court my spouse, etc, simply because it (falsely) thinks itself to have some connection with me (to have had my past experiences). It might still contact them anyway, given that I imagine its emotional state would be fragile; it would surely be a very difficult situation to be in. A situation that would probably horrify everybody involved.

I suppose, to put myse... (read more)

2Cyan
!!... I hope you mean explicit memory but not implicit memory -- otherwise there wouldn't be much of a being left afterwards... For a certain usage of "tricking" this is true, but that usage is akin to the way optical illusions trick one's visual system rather than denoting a falsehood deliberately embedded in one's explicit knowledge. I would point out that the source of all the hypothetical suffering in this situation would be the being's (and your) theory of identity rather than the fact of anyone's identity (or lack thereof). If this isn't obvious, just posit that the scenario is conceivable but hasn't actually happened, and some bastard deceives you into thinking it has -- or even just casts doubt on the issue in either case. Of course that doesn't mean the theory is false -- but I do want to say that from my perspective it appears that the emotional distress would come from reifying a naïve notion of personal identity. Even the word "identity", with its connotations of singleness, stops being a good one in the hypothetical. Have you seen John Weldon's animated short To Be? You might enjoy it. If you watch it, I have a question for you: would you exculpate the singer of the last song?

I was referring to cryonics scenarios where the brain is being scanned because you cannot be revived and a new entity is being created based on the scan, so I was assuming that your brain is no longer viable rather than that the scan is destructive.

The resulting being, if possible, would be a being that is confused about its identity. It would be a cruel joke played on those who know me and, possibly, on the being itself (depending on the type of being it is). I am not my likeness.

Consider that, if you had this technology, you could presumably create a being ... (read more)

3Cyan
Fair enough. I'm positing that the being has been informed about how it was created; it knows that it is not the being it remembers, um, being. So it has the knowledge to say of itself, if it were so inclined, "I am a being purposefully constructed ab initio with all of the memories and cognitive capacities of scientism, RIP." Would it be so inclined? If so, what would it do next? (Let us posit that it's a reconstructed embodied human being.) For example, would it call up your friends and introduce itself? Court your former spouse (if you have one), fully acknowledging that it is not the original you? Ask to adopt your children (if you have any)?
scientism-30

The problem with the computationalist view is that it confuses the representation with what is represented. No account of the structure of the brain is the brain. A detailed map of the neurons isn't any better than a child's crude drawing of a brain in this respect. The problem isn't the level of detail, it's that it makes no sense to claim a representation is the thing represented. Of course, the source of this confusion is the equally confused idea that the brain itself is a sort of computer and contains representations, information, etc. The confusions ... (read more)

2Shmi
Suppose a small chunk of your brain is replaced with its functional equivalent, is the resulting chimera less "you"? If so, how can one tell?
4TheOtherDave
Just to make sure I'm following... your assertion is that my brain is not itself a sort of computer, does not contain representations, and does not contain information, my brain is some other kind of a thing, and so no amount of representations and information and computation can actually be my brain. They might resemble my brain in certain ways, they might even be used in order to delude some other brain into thinking of itself as me, but they are not my brain. And the idea that they might be is not even wrong, it's just a confusion. The information, the representations, the belief-in-continuity, all that stuff, they are something else altogether, they aren't my brain. OK. Let's suppose all this is true, just for the sake of comity. Let's call that something else X. On your account, should I prefer the preservation of my brain to the preservation of X, if forced to choose? If so, why?
0passive_fist
The point of cryonics is that it could lead to revival. Obviously. That's not what Mitchell_Porter's post was about, though.
0Cyan
You seem to think that creating a description of the structure of a brain is necessarily a destructive process. I don't know of any reason to assume that. If a non-destructive scan exists and is carried out, then there's no "death", howsoever defined. Right? But anyway, let's grant your implicit assumption of a destructive scan, and suppose that this process has actually occurred to your brain, and "something that functions like [your] brain" has been created. Who is the resulting being? Who do they think they are? What do they do next? Do they do the sorts of things you would do? Love the people you love? I grant that you do not consider this hypothetical being you -- after all, you are hypothetically dead. But surely there is no one else better qualified to answer these questions, so it's you that I ask.

I'm not quite sure what you're saying. I don't think there's a way to identify whether a goal is meaningless at a more fundamental level of description. Obviously Bob would be prone to say things like "today I did x in pursuit of my goal of time travel" but there's no way of telling that it's meaningless at any other level than that of meaning, i.e., with respect to language. Other than that, it seems to me that he'd be doing pretty much the same things, physically speaking, as someone pursuing a meaningful goal. He might even do useful things, like make breakthroughs in theoretical physics, despite being wholly confused about what he's doing.

You're right that a meaningless goal cannot be pursued, but nor can you be said to even attempt to pursue it - i.e., the pursuit of a meaningless goal is itself a meaningless activity. Bob can't put any effort into his goal of time travel, he can only confusedly do things he mistakenly thinks of as "pursuing the goal of time travel", because pursuing the goal of time travel isn't a possible activity. What Bob has learned is that he wasn't pursuing the goal of time travel to begin with. He was altogether wrong about having a terminal value of travelling back in time and riding a dinosaur because there's no such thing.

3linkhyrule5
That seems obviously wrong to me. There's nothing at all preventing me from designing an invisible-pink-unicorn maximizer, even if invisible pink unicorns are impossible. For that matter, if we allow counterfactuals, an invisible-pink-unicorn maximizer still looks like an intelligence designed to maximize unicorns - in the counterfactual universe where unicorns exist, the intelligence takes actions that tend to maximize unicorns.
2bokov
Thank you, I think you articulated better than anybody so far what I mean by a goal turning out to be meaningless. Do you believe that a goal must persist down to the most fundamental reductionist level in order to really be a goal? If not, can/should methods be employed in the pursuit of a goal such that the methods exist at a lower level than the goal itself?

I'd be willing to give this a shot, but his thesis, as stated, seems very slippery (I haven't read the book):

"Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe."

This needs to be reworded but appears to be straightforwardly true and uncontroversial: morality is connected to well-being and suffering.

"Conscious minds and their states are natural phenomena, fully constrained by the laws of Nature (whatever these tur... (read more)

-1jmmcd
You can't go from an is to an ought. Nevertheless, some people go from the "well-being and suffering" idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it "obvious" begs the whole question.
2buybuydandavis
So if we couldn't suffer, we wouldn't have any values? I don't think so.
0[anonymous]
He skips the qualifier in his FAQ:

I think the told/meant distinction is confused. You're conflating different uses of "meant." When somebody misunderstands us, we say "I meant...", but it doesn't follow that when they do understand us we didn't mean what we told them! The "I meant..." is because they didn't get the meaning the first time. I can't do what I'm told without knowing what you meant; in fact, doing what I'm told always implies knowing what you meant. If I tried to follow your command, but didn't know what you meant by your command, I wouldn't be doi... (read more)

2private_messaging
Well, one of my points is that there's no actual distinction. People make a distinction though, because firstly there's cognitive effort on both the saying and listening side to make communication clear, and there's a distinction between things that one side is responsible for and things that other side is responsible for. Secondarily, it is often selfish-optimal to misunderstand commands to some extent which can be attributed to an alternate understanding. Particularly prominent in lawyering.
scientism-40

There are only two options here. Either the universe is made of atoms and void and a non-material Cartesian subject who experiences the appearance of something else, or the universe is filled with trees, cars, stars, colours, meaningful expressions and signs, shapes, spatial arrangements, morally good and bad people and actions, smiles, pained expressions, etc., all of which, under the appropriate conditions, are directly perceived without mediation. Naturalism and skeptical reductionism are wholly incompatible: if it were just atoms and void there would be nothing to be fooled into thinking otherwise.

scientism-30

I think it helps to look at statements of personal narrative and whether they're meaningful and hence whether they can be true or false. So, for example, change is part of our personal narrative; we mature, we grow old, we suffer injuries, undergo illness, etc. Any philosophical conception of personal identity that leads to conclusions that make change problematic should be taken as a reductio ad absurdum of that conception and not a demonstration of the falsity of our common sense concepts (that is, it shows that the philosopher went wrong in attempting t... (read more)

I think they're all examples of compliance - i.e., in each example he gets them to go along with something that isn't true. The creepy clown is the most obvious. He has put her in a confusing situation and then makes her confusion look like agreement. He also appears to be mirroring and then provoking her body language. He manages to get her to not walk away and to say he's right, but most of the time she appears to be completely baffled. With the pet name, I suspect the main part of the trick is making the man wait an extremely long time and making him sy... (read more)

Wittgenstein advanced philosophy to the point where it could have become an applied discipline, having solved many philosophical problems once and for all, but philosophers balked at the idea of an ultimate resolution to philosophical problems.

I think the view that automation is now destroying jobs, the view that the economy always re-allocates the workforce appropriately and the views defended in this anti-FAQ all rest on a faulty generalisation. The industrial revolution and the early phases of computerisation produced jobs for specific reasons. Factories required workers and computers required data entry. It wasn't a consequence of a general law of economics, it was a fortuitous consequence of the technology. We are now seeing the end of those specific reasons, but not because of a general tr... (read more)

I'd consider interactive graphics. Nothing else has such instant feedback; it's very obvious if something is working or not and you can easily figure out what's wrong. Using JavaScript and Canvas in a web browser you can get up and running with 2D interactive graphics very quickly and you just have to hit refresh to see the changes you make (I don't know what Python offers in this area). I think it's a great way to learn various programming abstractions too. By working at a low level, you're not forced to use abstractions, but you can see why they're usef... (read more)
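The refresh-to-see-changes loop described here can be sketched in a few lines. This is a minimal illustration of my own, not anyone's canonical example; it assumes an HTML page containing a `<canvas id="c" width="400" height="300">` element (the id and dimensions are arbitrary choices):

```javascript
// Pure update logic: bounce a square horizontally between the canvas edges.
// Keeping this separate from the drawing makes it easy to tweak and re-test.
function step(state) {
  let { x, vx } = state;
  x += vx;
  if (x < 0 || x > 380) vx = -vx; // 380 = canvas width minus square width
  return { x: Math.max(0, Math.min(380, x)), vx };
}

// Browser-only part: draw each frame and loop via requestAnimationFrame.
// Edit a constant, hit refresh, and the change is immediately visible.
if (typeof document !== "undefined") {
  const ctx = document.getElementById("c").getContext("2d");
  let state = { x: 0, vx: 4 };
  (function frame() {
    ctx.clearRect(0, 0, 400, 300);
    ctx.fillRect(state.x, 140, 20, 20);
    state = step(state);
    requestAnimationFrame(frame);
  })();
}
```

The point about abstractions shows up naturally here: nothing forces you to split `step` out of the drawing loop, but once the sketch grows you see why you'd want to.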

scientism120

I think this is just a limitation of comic book superheroes. They desire public recognition. In other traditions with analogous figures, particularly religion, being reviled is just another burden to be taken on by the hero. (Although this sometimes happens in comic books too. See the recent Batman movies.) I especially like the Tibetan Buddhist concept of "crazy wisdom." Tibetan folk heroes spend a lot of time shocking people out of their complacency and generally acting like supervillains. But it's all in the name of universal compassion. (Google "Drukpa Kunley" for a particularly entertaining example.)

  • I can compare the colour of a surface to the colour of a standardised colour chip, which is as objective as, say, measuring something using a ruler. Colours may not participate in any phenomena found in the physical scientist's laboratory, but they do participate in the behaviour of organisms found in the psychologist's laboratory. So I fail to see a problem here.

  • Indirect realism requires two mechanisms for veridical and non-veridical perception, the same as direct realism: one for when an object is seen and one for when it isn't. Direct realism is more

... (read more)
0A1987dM
Not quite. Colour is a three-dimensional subspace of the infinite-dimensional space of possible light spectra, but which subspace it is depends on the spectral sensitivities of your cone cells. OTOH I do think that the cone cells of the supermajority of all humans use the exact same molecules as photoreceptors, but I'm not quite sure of that.

There's no such thing as my red or different reds that are individuated by perceiver. Different types of sensory organ allow us to see different aspects of the world. I'm blind to some aspects other animals can perceive and other animals are blind to some aspect I can perceive, and the same goes for various perceptual deficiencies.

1whowhowho
Ten problems with direct realism:
* If perceived qualities exist in external objects, they have never been detected by science, as opposed to the 650nm reflectance characteristic, so it is a form of dualism (or rather pluralism: see below). It requires non-physical properties.
* If perceived qualities exist in external objects, they need external objects to exist in. If some perceived qualities (dreams, after-images) do not exist in external objects, then a sui generis method of projecting them into the world is assumed, which is required for non-veridical perception only. (An example is Everett Hall's exemplification.) Indirect realism requires only one mechanism for veridical and non-veridical perception (the difference between the two being down to the specific circumstances).
* One of the motivations for direct realism is the idea that to perceive truly is to perceive things as they are. However, other kinds of truth don't require such a criterion at all. A true sentence is generally not at all like the situation it refers to. Truth in most contexts is the following of a set of normative rules which can themselves be quite arbitrary (as in rules linking symbols to their referents). Thus direct realism posits a sui generis kind of veridicality applying to perception only, along with several sui generis mechanisms to support it.
* Another motivation for direct realism is a linguistic analysis: the argument goes that since sensory terms must have external criteria, then what is sensed and the way it is sensed are entirely external as a matter of metaphysics. However, a criterion is not, strictly speaking, a meaning. Smoke is a criterion of fire, but fire does not mean smoke — definitionally or linguistically. The questions "where are perceived qualities" and "how are sensation-words defined" just aren't the same. It's quite possible that we use reference to properties of external objects to "triangulate" what are actually inner sensations, since we don't have publi

It's very useful feedback. I have 82% positive. Going through my old comments, I found that a lot of comments I've made that I thought would be controversial actually had 100%. The comments that had a low percentage tended to be the ones where I hadn't expressed myself well. Given that I have a lot of unorthodox views, I found this reassuring.

I think there are potential examples of "suppressed" innovation due to our ideology. Our political ideology is based on a particular view of individual psychology and sociology. I mentioned the view of the state as an antagonistic actor and the idea that society doesn't transcend the individual. Both of these assumptions are absent from other traditions (pre-Englightenment West, Confucian, etc) and both appear to set the bounds of how we reason about people and society. I would add to this the idea that morality is problematic in that it doesn't ... (read more)

It depends how you define weirdness, I think. What I'm claiming, by use of examples, is that we have a very specific out-group/in-group separation. What we usually label "weird" is harmless in-group stuff. We might even use it to signal our tolerance/freedom/etc. What is actually weird to us, we tend not to define explicitly at all, but to separate by exclusion and by favouring in-group stuff without argument. Sometimes we consider it offensive. The examples in the original article are not great, I think, since our society is tolerant of people w... (read more)

0Emile
I agree that it depends of what is meant by "weirdness", and that if by that you mean out-group behavior then yes we are intolerant of it. However, Caplan's argument was that signaling conformity discouraged innovation, so the important question becomes how many potential innovations get discouraged - how many fall under 'harmless in-group stuff", and how many fall under "actually weird out-group stuff". You could conceivably have an out-group/in-group separation such that the "out-group" is a restricted set of characteristics, and the in-group is anything else (it's defined by what it's not, which isn't that rare), in which case most innovations wouldn't be hindered. Or more generally, in-group borders can vary in how restrictive they are; some groups (catholic housewives) can be such that any innovation is likely to fall outside the group, and others (hippies, geeks), while still having borders, may be broad enough to allow a lot more potential innovations to fall in.
scientism190

Most intolerance doesn't announce itself. It usually dresses itself up as something positive.

The cynic in me would say the so-called tolerant people within our society aren't actually tolerant, rather they've adopted a potpourri of non-traditional behaviours in order to signal their faux tolerance, and then act with intolerance to so-called traditionalists (who are racist, homophobic, misogynist, authoritarian, etc). It all depends on how you value the liberal project. Personally I think it rests on shaky foundations, so I have some sympathy for this cynic... (read more)

2fubarobfusco
I think it would be interesting to analyze in more detail what you mean by "intolerance". We might distinguish the sort of "intolerance" that is expressed by mass violence against its target, and the sort that is expressed by simply not taking the target's claims very seriously.
4Emile
Your examples are mostly intolerance of specific things, and I agree that there's plenty of that going on; and I also agree that a good deal of our public praise of "tolerance" is probably not completely true. My question was however whether we really disliked weirdness itself, not specific weird behaviors. If we irrationally dislike 20% of weird behaviors, then we may not be living up to our ideals of tolerance, but it's also unlikely that that intolerance is slowing our progress down much. (edit) To take an example from another branch of this thread, a video game startup may claim to not have any dress code like those boring stuffy banks, but anybody wearing a suit will be sneered at anyway. SO, they are still intolerant, despite their claims to the contrary, BUT, a norm of accepting anything but a suit allows for a lot more variance than a norm of only accepting suits, so in practice you'll still get the benefits of tolerance (in terms of finding comfortable clothes).
scientism120

Buddhism merely states that there's a psychological continuum in which there is nothing unchanging. The "self" that's precluded is just an unchanging one. (That said, in the Abhidharma there are unchanging elements from which this psychological continuum is constituted.) The Mahayana doctrine of emptiness (which isn't common to all Buddhism, just the schools that are now found in the Himalayas and East Asia) essentially states that everything is without inherent existence; things only exist as conditioned phenomena in relation to other things, no... (read more)

scientism-20

It's extremely important to realise what Luke is doing here, even if you agree with it. Cognitive science is a sub-discipline of psychology established to reflect a particular philosophical position. Cognitive neuroscience is a sub-discipline of neuroscience established to reflect a particular philosophical position. In both cases the philosophical position, within that sub-discipline, is assumed rather than defended. What Luke is doing is: (1) denying the legitimacy of other parts of behavioural and neural science, thus misrepresenting the diversity of sc... (read more)

-6Peterdjones