All of torekp's Comments + Replies

torekp20

Given the disagreement over what "causality" is, I suspect that different CDTs might have different tolerances for adding precommitment without spoiling the point of CDT.  For an example of a definition of causality that has interesting implications for decision theory, see Douglas Kutach, Causation and its Basis in Fundamental Physics.  There's a nice review here.  Defining "causation" Kutach's way would allow both making and keeping precommitments to count as causing good results.  It would also at least partly collapse the divergence between CDT and EDT.  Maybe completely - I haven't thought that through yet.

torekp146

Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person.  Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.

torekp41

I have a closely related objection/clarification.  I agree with the main thrust of Rob's post, but this part:

Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you"...

Rather, I assume xlr8harder cares about more substantive questions like:  (1) If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? (2) Should I anticipate experiencing what my upload experiences? (3) If the scannin

... (read more)
torekp42

I'm not at all convinced by the claim that <valence is a roughly linear function over included concepts>, if I may paraphrase.  After laying out a counterexample, you seem to be constructing a separate family of concepts that better fits a linear model.  But (a) this is post-hoc and potentially ad-hoc, and (b) you've given us little reason to expect that there will always be such a family of concepts.  It would help if you could outline how a privileged set of concepts arises for a given person, that will explain their valences.

Also, y... (read more)

4Steven Byrnes
Thanks for your comment! I explicitly did not explain or justify the §2.4.1 (and especially §2.4.1.1) thing, so you have every right to be skeptical of it.  :)  If it’s wrong (for the sake of argument), well, the rest of the series doesn’t rely on it much. The exception is a discussion of brainstorming coming up in the next post. I think if the §2.4.1 thing is wrong, then that brainstorming-related text would need to be edited, but I would bet that there’s a weaker version of §2.4.1 like maybe “it can be approximately linear under certain circumstances…”, where that weaker version is still adequate to explain the things I want to explain.

If that’s not good enough to satisfy you, well, my plan for now is to wait until somebody else publishes a correct model with all the gory details of “concepts” in the cortex. After that happens, I will be happy to freely chat about that topic. It’s bound to happen sooner or later! I don’t want to help that process along, because I would prefer “later” over “sooner”. But most neuro-AI researchers aren’t asking me for my opinion.

I talked about wanting vs liking in §1.5.2. I have a little theory with some more details about wanting-vs-liking, but it involves a lot of background that I didn’t want to get into, and nothing else I care about right now seems to depend on that, so I have declared that out-of-scope for this series, beyond the brief intuitive discussion in §1.5.2. UPDATE: I wound up writing an appendix with much more details on wanting-vs-liking.

I strongly agree that innate drives are diverse and complex, and well worth understanding in great detail. That is a major research interest of mine. It’s true that this particular series mostly treats them as an undifferentiated category—but as a general policy, I think it’s good and healthy to talk about narrow topics, which inevitably entails declaring many important and interesting things to be out-of-scope.  :)
torekp20

When dealing with theology, you need to be careful about invoking common sense. According to https://www.thegospelcoalition.org/themelios/article/tensions-in-calvins-idea-of-predestination/ , Calvin held that God's destiny for a human being is decided eternally, not within time, and prior to that person's prayer, hard work, etc.

The money (or heaven) is already in the box. Omega (or God) cannot change the outcome.

What makes this kind of reasoning work in the real (natural) world is the growth of entropy involved in putting money in boxes, deciding to d... (read more)

torekp20

I view your final point as crucial. I would put an additional twist on it, though. During the approach to AGI, if takeoff is even a little bit slow, the effective goals of the system can change. For example, most corporations arguably don't pursue profit exclusively even though they may be officially bound to. They favor executives, board members, and key employees in ways both subtle and obvious. But explicitly programming those goals into an SGD algorithm is probably too blatant to get away with.

torekp20

In addition to your cases that fail to be explained by the four modes, I submit that Leonard Cohen's song itself also fails to fit.  Roughly speaking, one thread of meaning in these verses is that "(approximately) everybody knows the dice are loaded, but they don't raise a fuss because they know if they do, they'll be subjected to an even more unfavorable game."  And likewise for the lost war.  A second thread of meaning is that, as pjeby pointed out, people want to be at peace with unpleasant things they can't personally change.  It's ... (read more)

torekp20

Like Paradiddle, I worry about the methodology, but my worry is different.  It's not just the conclusions that are suspect in my view:  it's the data.  In particular, this --

Some people seemed to have multiple views on what consciousness is, in which cases I talked to them longer until they became fairly committed to one main idea.

-- is a serious problem.  You are basically forcing your subjects to treat a cluster in thingspace as if it must be definable by a single property or process.  Or perhaps they perceive you as urging them ... (read more)

torekp20

this [that there is no ground truth as to what you experience] is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists.

I beg to differ.  The thrust of Dennett's statement is easily interpreted as the truth of a description being partially constituted by the subject's acceptance of the description.  E.g., in one of the snippets/bits you cite, "I seem to see a pink ring."  If the subject said "I seem to see a reddish oval", perhaps that would have been true.  But compare:

My freely... (read more)

torekp4-2

Fair point about the experience itself vs its description.  But note that all the controversy is about the descriptions.  "Qualia" is a descriptor, "sensation" is a descriptor, etc.  Even "illusionists" about qualia don't deny that people experience things.

4Rafael Harth
Alright, so I changed the paragraph into this: I think a lot of Camp #2 people want to introduce new metaphysics, which is why I don't want to take out the last sentence.

I don't think this is true. E.g., Dennett has these bits in Consciousness Explained: 1, 2, 3, 4. Of course, the issue is still tricky, and you're definitely not the only one who thinks it's just a matter of description, not existence. Almost everyone agrees that something exists, but Camp #2 people tend to want something to exist over and above the reports of that thing, and Dennett seems to deny this. And (as I mentioned in some other comment) part of the point of this post is that you empirically cannot nail down exactly what this thing is in a way that makes sense to everyone. But I think it's reasonable to say that Dennett doesn't think people experience things.

Also, Dennett in particular says that there is no ground truth as to what you experience, and this is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists. Like, I think Camp #2 people will generally hold that, even if errors can come in during the reports of experience, there is still always a precise fact of the matter as to what is being experienced. And depending on their metaphysics, it would be possible to figure out what exactly that is with the right neurotech.

And another reason why I don't think it's true is because then I think illusionism wouldn't matter for ethics, but as I mentioned in the post, there are some illusionists who think their position implies moral nihilism. (There are also people who differentiate illusionism and eliminativism based on this point, but I'm guessing you didn't mean to do that.)
torekp51

There are many features you get right about the stubbornness of the problem/discussion.  Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters.  But now I'm going to complain about what I see as your missteps.

Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.

I think we need to be careful not to mush together metaphysics and epistemics... (read more)

4TAG
It's nonetheless the best reason. The number of times you should add new ontological categories isn't zero, ever -- even if you shouldn't also add a category every time you are confused. Physicists were not wrong to add the nuclear forces to gravity and electromagnetism. Unfortunately, there is no simple algorithm to tell you when you should add categories.

Do they? Camp #1 is generally left with denialism about qualia (including illusionism), or promissory physicalism, neither of which is hugely attractive. Regarding promissory physicalism, it's a subjective judgement, not a proof, that we will have a full reductive explanation of consciousness one day, so it is quite cheeky to call the other camp "wrong" because they have a subjective judgement that we won't.

No, it's about the implications. People are quite explicit that they don't want to believe in qualia because they don't want to have to believe in epiphenomenalism, zombies, non-physical properties, etc. Of course, rejecting evidence because it doesn't fit a theory is the opposite of rationality.

Well, materialist -- it doesn't require immaterial substances or non-physical properties, but it also denies that all facts are physical facts, contra strong physicalism. I don't see DANM as a radical third option to the two camps, I see it as the lightweight or minimalist position in camp #2.
4Rafael Harth
Agreed; too tired right now but will think about how to rewrite this part. I don't think I said that. I think I said that Camp #2 claims one cannot be wrong about the experience itself. I agree (and I don't think the post claims otherwise) that errors can come in during the step from the experience to the task of finding a verbalization of the experience. You chose an example where that step is particularly risky, hence it permits a larger error. Note that for Camp #2, you can draw a pretty sharp line between conscious and unconscious modules in your brain, and finding the right verbalization is mostly an unconscious process.
torekp20

The belief in irreducibility is much more of a sine qua non of qualiaphobia,

Can you explain that?  It seems that plenty of qualiaphiles believe they are irreducible, epistemically if not metaphysically.  (But not all:  at least some qualiaphiles think qualia are emergent metaphysically.  So, I can't explain what you wrote by supposing you had a simple typo.)

torekp30

I think you can avoid the reddit user's criticism if you go for an intermediate risk-averse policy. On that policy, there being at least one world without catastrophe is highly important, but additional good worlds also count more heavily than a standard utilitarian would say, up until good worlds approach about half (1/e?) of the total weight under the Born rule.

However, the setup seems to assume that there is little enough competition that "we" can choose a QRNG approach without being left behind. You touch on related issues when discussing costs, but this merits separate consideration.
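To make "intermediate" concrete, here is a toy sketch (my own illustration, not something from the post or the reddit thread): let p be the Born measure of non-catastrophic branches, and score outcomes with a concave function of p that sits between the standard linear expected-utility weighting and a pure "at least one OK world" step function. The specific functional form and the scale constant below are illustrative assumptions only.

```python
# Toy comparison of three ways to value the Born measure p of good branches.
# The "intermediate" form is an illustrative assumption: concave in p, so the
# first sliver of measure counts a lot, and the marginal value of extra good
# worlds drops below the linear rate once p passes about 1/e.
import math

def linear(p):
    return p                        # standard utilitarian: value proportional to measure

def at_least_one_good(p):
    return 1.0 if p > 0 else 0.0    # only "some world is OK" matters

def intermediate(p, scale=math.e):
    # Marginal value is scale * exp(-scale * p): greater than 1 (super-linear)
    # for p < 1/e, and less than 1 afterwards, when scale = e.
    return 1.0 - math.exp(-scale * p)

for p in (0.0, 0.01, 0.1, 1 / math.e, 0.5, 1.0):
    print(f"p={p:.3f}  linear={linear(p):.3f}  "
          f"step={at_least_one_good(p):.0f}  intermediate={intermediate(p):.3f}")
```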

1GödelPilled
My understanding is that GPT style transformer architecture already incorporates random seeds at various points. In which case, adding this functionality to the random seeds wouldn't cause any significant "cost" in terms of competing with other implementations.
torekp20

"People on the autistic spectrum may also have the experience of understanding other people better than neurotypicals do."

I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person's position. The AI will need an explicit model. If in addition the AI has superior cognitive abilities to the humans it's working with - or expects to become superior - it's not clear why simulation would be a good approach for it. Yes that works f... (read more)

2Kaj_Sotala
Maybe. On the other hand, AIs have recently been getting quite good at things that we previously thought to require human-like intuition, like playing Go, understanding language, and making beautiful art. It feels like a natural continuation of these trends would be for it to develop a superhuman ability for intuitive social modeling as well.
torekp20

Update:  John Collins says that "Causal Decision Theory" is a misnomer because (some?) classical formulations make subjunctive conditionals, not causality as such, central.  Cited by the Wolfgang Schwarz paper mentioned by wdmcaskill in the Introduction.

torekp10

I have a terminological question about Causal Decision Theory.

Most often, this [causal probability function] is interpreted in counterfactual terms (so P(S_A) represents something like the probability of S coming about were I to choose A) but it needn’t be.

Now it seems to me that causation is understood to be antisymmetric, i.e. we can have at most one of "A causes B" and "B causes A".  In contrast, counterfactuals are not antisymmetric, and "if I chose A then my simulation would also do so" and "If my simulation chose A then I would also do so" ... (read more)

torekp20

I love #38

A time-traveller from 2030 appears and tells you your plan failed. Which part of your plan do you think is the one ...?

And I try to use it on arguments and explanations.

torekp50

Right, you're interested in syntactic measures of information, more than physical ones.  My bad.

torekp30

the initial conditions of the universe are simpler than the initial conditions of Earth.

This seems to violate a conservation of information principle in quantum mechanics.

6Mark Xu
perhaps would have been better worded as "the simplest way to specify the initial conditions of Earth is to specify the initial conditions of the universe, the laws of physics, and the location of Earth."
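A rough description-length gloss of that rewording (my own paraphrase, writing K(·) for the length of the shortest specification):

$$K(\text{Earth's initial conditions}) \le K(\text{universe's initial conditions}) + K(\text{laws of physics}) + K(\text{Earth's location}) + O(1)$$

i.e., Earth's initial conditions are no harder to specify than the universe's plus the laws and a locating term, which is compatible with the universe's initial conditions themselves being the simpler object.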
torekp20

On #4, which I agree is important, there seems to be some explanation left implicit or left out.

#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem.

But middle managers who are good at producing actual results will therefore want to decrease mazedom, in order that their competence be recognized.  Is it, then, that incompetent people will be disproportionately attracted to - and capable of crowding others out from - middle management?  That they will be attracted is a no-brainer, ... (read more)

torekp20

When I read

To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! [...] But it turns out, he wants to be one level up!

I thought, thank goodness, Graziano (and steve2152) gets it. But in the moral implications section, you immediately start talking about attention schemas rather than simply attention. Attention schemas aren't necessary for consciousness or sentience; they're necessary for meta-consciousness. ... (read more)

torekp40
how to quote

Paste text into your comment and then select/highlight it. Formatting options will appear, including a quote button.

2Motasaurus
Thank you :)
2Bucky
Is there a way to do this on mobile devices?
torekp30
People often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can't perfectly know the hardware it is running on.

How could Emmy, an embedded agent, know its source code perfectly, or even be certain that it is a computing device under the Church-Turing definition? Such certainty would seem dogmatic. Without such certainty, the choice of 10 rather than 5 cannot be firmly classified as an error. (The classification as an error seemed to play an important role in your discussion.) So Emmy has a motivation to keep looking and find that U(10)=10.

torekp30

Thanks for making point 2. Moral oughts need not motivate sociopaths, who sometimes admit (when there is no cost of doing so) that they've done wrong and just don't give a damn. The "is-ought" gap is better relabeled the "thought-motivation" gap. "Ought"s are thoughts; motives are something else.

torekp40

Technicalities: Under Possible Precisifications, 1 and 5 are not obviously different. I can interpret them differently, but I think you should clarify them. 2 is to 3 as 4 is to 1, so I suggest listing them in that order, and maybe adding an option that is to 3 as 5 is to 1.

Substance: I think you're passing over a bigger target for criticism, the notion of "outcomes". In general, agents can and do have preferences over decision processes themselves, as contrasted with the standard "outcomes" of most literature like winning or lo... (read more)

torekp70

If there were no Real Moral System That You Actually Use, wouldn't you have a "meh, OK" reaction to either Pronatal Total Utilitarianism or Antinatalist Utilitarianism - perhaps whichever you happened to think of first? How would this error signal - disgust with those conclusions - be generated?

gjm100

Suppose you have immediate instinctive reactions of approval and disapproval -- let's call these pre-moral judgements -- but that your actual moral judgements are formed by some (possibly somewhat unarticulated) process of reflection on these judgements. E.g., maybe your pre-moral judgements about killing various kinds of animal are strongly affected by how cute and/or human-looking the animals are, but after giving the matter much thought you decide that you should treat those as irrelevant.

In that case, you might have a strong reaction to either of ... (read more)

torekp00

Shouldn't a particular method of inductive reasoning be specified in order to give the question substance?

torekp00

Great post and great comment. Against your definition of "belief" I would offer the movie The Skeleton Key. But this doesn't detract from your main points, I think.

torekp00

I think there are some pretty straightforward ways to change your true preferences. For example, if I want to become a person who values music more than I currently do, I can practice a musical instrument until I'm really good at it.

torekp00

I don't say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C ... causes T, where T is the talk; the talk can still be about A. It just can't be about Z, where Z is something which never appears in any chain leading to T.

I just now added the caveat "basic" because you have a good point about free will. (I assume you mean contracausal "free will". I think ca... (read more)

torekp00

The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You've called A "color perception A", but I aim to dispute that.)

Suppose we come across the Greenforest people, who live near newly discovered species including the greater geckos. Greenforesters use the word "gumie" always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greate... (read more)

0halcyon
I'm not sure that analogy can be extended to our cognitive processes, since we know for a fact that: 1. We talk about many things, such as free will, whose existence is controversial at best, and 2. Most of the processes causally leading to verbal expression are preconscious. There is no physical cause preventing us from talking about perceptions that our verbal mechanisms don't have direct causal access to for reasons that are similar to the reasons that we talk about free will. Why must A cause C for C to be able to accurately refer to A? Correlation through indirect causation could be good enough for everyday purposes. I mean, you may think the coincidence is too perfect that we usually happen to experience whatever it is we talk about, but is it true that we can always talk about whatever we experience? (This is an informal argument at best, but I'm hoping it will contradict one of your preconceptions.)
torekp10

Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.

Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?
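A minimal sketch of that Pearl-style reading, using a toy three-variable model of my own (Past -> BrainState -> Action), not anything from Carroll or the thread: intervening with do(Action) cuts the arrows into Action, so beliefs about the macroscopic past are left untouched, whereas merely conditioning on the action would shift them.

```python
# Toy structural model: Past -> BrainState -> Action.
# do(Action = a) replaces the structural equation for Action and leaves the
# upstream variables exactly as they were; conditioning on Action = a does not.
import random

def sample_world(do_action=None):
    past = random.gauss(0, 1)             # macroscopic past fact (e.g., how hot the stove was)
    brain = past + random.gauss(0, 0.1)   # brain state tracks the past
    if do_action is None:
        action = 1 if brain > 0 else 0    # withdraw hand iff the brain registers heat
    else:
        action = do_action                # intervention: ignore the brain entirely
    return past, brain, action

def mean_past_under(do_action, n=100_000):
    return sum(sample_world(do_action)[0] for _ in range(n)) / n

# Under do(Action=0) ("keep hand there"), the past's distribution is unchanged:
print(mean_past_under(None), mean_past_under(0))          # both approximately 0

# By contrast, conditioning on observing Action = 0 shifts beliefs about the past:
observed = [p for p, b, a in (sample_world() for _ in range(100_000)) if a == 0]
print(sum(observed) / len(observed))                       # clearly negative
```

On this toy reading, nothing about the macroscopic past has to vary under the intervention itself; only conditioning on the action would change what we infer about the past.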

torekp00

Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we ... (read more)

2cousin_it
It doesn't seem to be universally true. For example, a thermostat's action is correlated with past temperature. People are similar to thermostats in some ways, for example upon touching a hot stove you'll quickly withdraw your hand. But we also differ from thermostats in other ways, because small amounts of noise in the brain (or complicated sensitive computations) can lead to large differences in actions. Maybe Carroll is talking about that?
torekp00

We not only stop at red lights, we make statements like S1: "subjectively, red is closer to violet than it is to green." We have cognitive access both to "objective" phenomena like the family of wavelengths coming from the traffic light, and also to "subjective" phenomena of certain low-level sensory processing outputs. The epiphenomenalist has a theory on the latter. Your steelman is well taken, given this clarification.

By the way, the fact that there is a large equivalence class of wavelength combinations that will be per... (read more)

0halcyon
I don't see how you can achieve a reductionist ontology without positing a hierarchy of qualities. In order to propose a scientific reduction, we need at least two classes, one of which is reducible to the other. Perhaps "physical" and "perceived" qualities would be more specific than "primary" and "secondary" qualities.

Regarding your question, if the "1->2 and 1->3" theory is accurate, then I suppose when we say that "red is more like violet than green", certain wavelength ranges R are causing the human cognitive architecture to undertake some brain activity B that drives both the perception of color similarity A as well as behavior which accords with perception C. So it follows that "But, by definition of epiphenomenalism, it's not A that causes people to say the above sentences S1 and S2, but rather some other brain activity, call it B." is true, but "But now by our theory of reference, subjective-red is B, rather than A." is false. The problem comes from an inaccurate theory of reference which conflates the subset of brain activities that are a color perception A with the entirety of brain activities, which includes preconscious processes B that cause A as well as the behavior C of expressing sentences S1 and S2.

Regarding S2, I think there is an equivocation between different definitions of the word "subjective". This becomes clear when you consider that the light rays entering your eyes are objectively red. We should expect any correctly functioning human biological apparatus to report the object as appearing red in that situation. If subjective experiences are perceptions resulting from your internal mechanisms alone, then the item in question is objectively red. If the meaning of "subjective experience" is extended to include all misreportings of external states of affairs, then the item in question is subjectively red. This dilemma can be resolved by introducing more terms to disambiguate among the various possible meanings of the words we are using. So
torekp00

The point is literally semantic. "Experience" refers to (to put it crudely) the things that generally cause us to say "experience", because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). "Horse" means horse because horses typically occasion the use of "horse". If there were a language in which cows typically occasioned the word "horse", in that language "horse" would mean cow.

0halcyon
I don't think epiphenomenalists are using words like "experience" in accordance with your definition. I'm no expert on epiphenomenalism, but they seem to be using subjective experience to refer to perception. Perception is distinct from external causes because we directly perceive only secondary qualities like colors and flavors rather than primary qualities like wavelengths and chemical compositions. EY's point is that we behave as if we have seen the color red. So we have: 1. physical qualities, 2. perceived qualities, and 3. actions that accord with perception.

To steelman epiphenomenalism, instead of 1 -> 2 -> 3, are other causal diagrams not possible, such as 1 -> 2 and 1 -> 3, mediated by the human cognitive architecture? (Or maybe even 1 -> 3 -> 2 in some cases, where we perceive something on the basis of having acted in certain ways.)

However, the main problem with your explanation is that even if we account for the representation of secondary qualities in the brain, that still doesn't explain how any kind of direct perception of anything at all is possible. This seems kind of important to the transhumanist project, since it would decide whether uploaded humans perceive anything or whether they are nothing but the output of numerical calculations. Perhaps this question is meaningless, but that's not demonstrated simply by pointing out that, one way or another, our actions sometimes accord with perception, right?
torekp00

I agree that non-universal-optimizers are not necessarily safe. There's a reason I wrote "many" not "all" canonical arguments. In addition to gaming the system, there's also the time honored technique of rewriting the rules. I'm concerned about possible feedback loops. Evolution brought about the values we know and love in a very specific environment. If that context changes while evolution accelerates, I foresee a problem.

torekp00

I think the "non universal optimizer" point is crucial; that really does seem to be a weakness in many of the canonical arguments. And as you point out elsewhere, humans don't seem to be universal optimizers either. What is needed from my epistemic vantage point is either a good argument that the best AGI architectures (best for accomplishing the multi-decadal economic goals of AI builders) will turn out to be close approximations to such optimizers, or else some good evidence of the promise and pitfalls of more likely architectures.

Needless to say, that there are bad arguments for X does not constitute evidence against X.

0Vaniver
Do you think there's "human risk," in the sense that giving a human power might lead to bad outcomes? If so, then why wouldn't the same apply to AIs that aren't universal optimizers? It seems to me that one could argue that humans have various negative drives, that we could just not program into the AI, but I think this misses several important points. For example, one negative behavior humans do is 'game the system,' where they ignore the spirit of regulations while following their letter, or use unintended techniques to get high scores. But it seems difficult to build a system that can do any better than its training data without having it fall prey to 'gaming the system.' One needs to not just convey the goal in terms of rewards, but the full concept around what's desired and what's not desired.
torekp10

This is the right answer, but I'd like to add emphasis on the self-referential nature of the evaluation of humans in the OP. That is, it uses human values to assess humanity, and comes up with a positive verdict. Not terribly surprising, nor terribly useful in predicting the value, in human terms, of an AI. What the analogy predicts is that evaluated by AI values, AI will probably be a wonderful thing. I don't find that very reassuring.

torekp00

Well, if you narrow "metaphysics" down to "a priori First Philosophy", as the example suggests -- then I'm much less enthusiastic about "metaphysics". But if it's just (as I conceive it) continuous with science, just an account of what the world contains and how it works, we need a healthy dose of that just to get off the ground in epistemology.

torekp30

The post persuasively displays some of the value of hermeneutics for philosophy and knowledge in general. Where I part ways is with the declaration that epistemology precedes metaphysics. We know far more about the world than we do about our senses. Our minds are largely outward-directed by default. What you know far exceeds what you know that you know, and what you know how you know is smaller still. The prospects for reversing cart and horse are dim to nonexistent.

0TheAncientGeek
Well, so long as we can be sure we know anything without doing epistemology....
2Gordon Seidoh Worley
This reads to me like you're confusing the differences between epistemology and experience and metaphysics and reality. The formers are studies of the latters. I agree that reality exists first and then experience is something that happens inside reality: this is specifically the existentialist view of reality that stuff exists before it has meaning and is contrasted with the essentialist view that meaning causes stuff to exist. The point that epistemology precedes metaphysics is that, because you exist inside reality and know it only through experience inside of it, understanding how you know must come before understanding what you know. To be concrete, I know that 1 + 1 = 2, but I learned this information by experiencing that combining one thing and another thing gave me two things. There seems little to no evidence to support the opposite view, that I had timeless access to the knowledge of the true proposition that 1 + 1 = 2 and then was able to experience putting one thing and another together to get two things because I knew it to be true. That we are perhaps better at metaphysics than epistemology seems beside the point that knowledge comes to us through experience.
0MrMind
... or even far exceeds what you feel that you know. This is the most important objection of all.
torekp00

Mostly it's no-duh, but the article seems to set up a false contrast between justification in ethics, and life practice. But large swaths of everyday ethical conversation are justificatory. This is a key feature that the philosopher needs to respect.

torekp00

Nice move with the lyrical section titles.

torekp50

There's a lot of room in between fully integrated consciousness and fully split consciousness. The article seems to take a pretty simplistic approach to describing the findings.

torekp00

Here's another case of non-identity, which deserves more attention: having a child. This one's not even hypothetical. There is always a chance to conceive a child with some horrible birth defect that results in suffering followed by death, a life worse than nothing. But there is a far greater chance of having a child with a very good life. The latter chance morally outweighs the former.

torekp00

Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. It was healthy in the former.

torekp00

The linked paper is only about current practices, their benefits and harms. You're right though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.

torekp00

I got started by Sharvy's "It Ain't the Meat, It's the Motion," but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.

torekp00

I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.

0TheAncientGeek
If you mean this sort of thing http://www.kurzweilai.net/slate-this-is-your-brain-on-neural-implants, then he is barely arguing the point at all... this is miles below philosophy-grade thinking. He doesn't even set out a theory of selfhood, just appeals to intuitions. Absent Qualia is much better, although still not anything that should be called a proof.
torekp00

You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.

Mentioning something is not a prerequisite for having it.

0TheAncientGeek
That reads like a non sequitur to me. We don't know what the relationship between algorithms and experience is. It's possible for a description that doesn't explicitly mention X to nonetheless add up to X, but only possible... you seem to be treating it as a necessity.
torekp00

I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.

I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.

0TheAncientGeek
You only get your guarantee if experiences are the only thing that can cause thoughts about experiences. However, you don't get that by noting that in humans thoughts are usually caused by experiences. Moreover, in a WBE or AI, there is always a causal account of thoughts that doesn't mention experiences, namely the account in terms of information processing.