Suppose someone draws a "personal identity" line to exclude this future sunrise-witnessing person. Then if you claim that, by not anticipating, they are degrading the accuracy of the sunrise-witness's beliefs, they might reply that you are begging the question.
I have a closely related objection/clarification. I agree with the main thrust of Rob's post, but this part:
...Presumably the question xlr8harder cares about here isn't the semantic question of how linguistic communities use the word "you"...
Rather, I assume xlr8harder cares about more substantive questions like: (1) If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self? (2) Should I anticipate experiencing what my upload experiences? (3) If the scannin
I'm not at all convinced by the claim that <valence is a roughly linear function over included concepts>, if I may paraphrase. After laying out a counterexample, you seem to be constructing a separate family of concepts that better fits a linear model. But (a) this is post-hoc and potentially ad-hoc, and (b) you've given us little reason to expect that there will always be such a family of concepts. It would help if you could outline how a privileged set of concepts - one that explains their valences - arises for a given person.
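To make the paraphrase concrete, the model I'm attributing to you (my formalization - correct me if it misreads the post) is roughly

V(x) = w_1·c_1(x) + w_2·c_2(x) + ... + w_n·c_n(x),

where c_i(x) says whether concept i is included in situation x and each weight w_i is fixed. Restated in these terms, my counterexample worry is that the effective weights seem to shift depending on which other concepts are present - the interaction terms aren't negligible - and the reply looks like swapping in a new set of c_i chosen after the fact to restore linearity.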
Also, y...
When dealing with theology, you need to be careful about invoking common sense. According to https://www.thegospelcoalition.org/themelios/article/tensions-in-calvins-idea-of-predestination/ , Calvin held that God's destiny for a human being is decided eternally (not within time), prior to that person's prayer, hard work, etc.
The money (or heaven) is already in the box. Omega (or God) cannot change the outcome.
What makes this kind of reasoning work in the real (natural) world is the growth of entropy involved in putting money in boxes, deciding to d...
I view your final point as crucial. I would put an additional twist on it, though. During the approach to AGI, if takeoff is even a little bit slow, the effective goals of the system can change. For example, most corporations arguably don't pursue profit exclusively even though they may be officially bound to. They favor executives, board members, and key employees in ways both subtle and obvious. But explicitly programming those goals into an SGD algorithm is probably too blatant to get away with.
In addition to your cases that fail to be explained by the four modes, I submit that Leonard Cohen's song itself also fails to fit. Roughly speaking, one thread of meaning in these verses is that "(approximately) everybody knows the dice are loaded, but they don't raise a fuss because they know if they do, they'll be subjected to an even more unfavorable game." And likewise for the lost war. A second thread of meaning is that, as pjeby pointed out, people want to be at peace with unpleasant things they can't personally change. It's ...
Like Paradiddle, I worry about the methodology, but my worry is different. It's not just the conclusions that are suspect in my view: it's the data. In particular, this --
Some people seemed to have multiple views on what consciousness is, in which cases I talked to them longer until they became fairly committed to one main idea.
-- is a serious problem. You are basically forcing your subjects to treat a cluster in thingspace as if it must be definable by a single property or process. Or perhaps they perceive you as urging them ...
this [that there is no ground truth as to what you experience] is arguably a pretty well-defined property that's in contradiction with the idea that the experience itself exists.
I beg to differ. The thrust of Dennett's statement is easily interpreted as the truth of a description being partially constituted by the subject's acceptance of the description. E.g., in one of the snippets/bits you cite, "I seem to see a pink ring." If the subject said "I seem to see a reddish oval", perhaps that would have been true. But compare:
My freely...
Fair point about the experience itself vs its description. But note that all the controversy is about the descriptions. "Qualia" is a descriptor, "sensation" is a descriptor, etc. Even "illusionists" about qualia don't deny that people experience things.
You get a lot right about the stubbornness of the problem/discussion. Certainly, modulo the choice to stop the count at two camps, you've highlighted some crucial facts about these clusters. But now I'm going to complain about what I see as your missteps.
Moreover, even if consciousness is compatible with the laws of physics, ... [camp #2 holds] it's still metaphysically tricky, i.e., it poses a conceptual mystery relative to our current understanding.
I think we need to be careful not to mush together metaphysics and epistemics...
The belief in irreducibility is much more of a sine qua non of qualiaphobia,
Can you explain that? It seems that plenty of qualiaphiles believe qualia are irreducible, epistemically if not metaphysically. (But not all: at least some qualiaphiles think qualia are metaphysically emergent. So I can't explain what you wrote by supposing you had a simple typo.)
I think you can avoid the reddit user's criticism if you go for an intermediate risk-averse policy. On that policy, there being at least one world without catastrophe is highly important, but additional worlds also count more heavily than a standard utilitarian would say, up until the good worlds approach about half (1/e?) of the total weight under the Born rule.
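To be concrete about "intermediate" (my own sketch, not something from your post): let p be the Born-rule weight of the non-catastrophic worlds, and score a policy by a concave function of p such as

V(p) = 1 - e^(-λp),

rather than the standard utilitarian's linear V(p) = p. The first sliver of good measure then matters enormously, further measure still counts at a diminishing rate, and with a suitable λ the extra weight relative to the linear rule is mostly used up partway along (for instance, with λ = e the marginal value falls back to the utilitarian's rate exactly at p = 1/e).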
However, the setup seems to assume that there is little enough competition that "we" can choose a QRNG approach without being left behind. You touch on related issues when discussing costs, but this merits separate consideration.
"People on the autistic spectrum may also have the experience of understanding other people better than neurotypicals do."
I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person's position. The AI will need an explicit model. If in addition the AI has superior cognitive abilities to the humans it's working with - or expects to become superior - it's not clear why simulation would be a good approach for it. Yes that works f...
Update: John Collins says that "Causal Decision Theory" is a misnomer because (some?) classical formulations make subjunctive conditionals, not causality as such, central. This point is cited in the Wolfgang Schwarz paper mentioned by wdmcaskill in the Introduction.
I have a terminological question about Causal Decision Theory.
Most often, this [causal probability function] is interpreted in counterfactual terms (so P(S∖A) represents something like the probability of S coming about were I to choose A) but it needn't be.
Now it seems to me that causation is understood to be antisymmetric, i.e. we can have at most one of "A causes B" and "B causes A". In contrast, counterfactuals are not antisymmetric, and "if I chose A then my simulation would also do so" and "If my simulation chose A then I would also do so" ...
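In symbols (my notation, with ≻ for "causes" and □→ for the counterfactual conditional): the property I mean is that A ≻ B implies not-(B ≻ A), whereas nothing in the logic of counterfactuals rules out having both (I choose A) □→ (my simulation chooses A) and (my simulation chooses A) □→ (I choose A). So reading the causal probability function counterfactually seems to let in dependencies that a strictly causal reading would exclude.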
I love #38
A time-traveller from 2030 appears and tells you your plan failed. Which part of your plan do you think is the one ...?
And I try to use it on arguments and explanations.
Right, you're interested in syntactic measures of information, more than physical ones. My bad.
the initial conditions of the universe are simpler than the initial conditions of Earth.
This seems to violate a conservation of information principle in quantum mechanics.
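For reference, the principle I have in mind (my gloss) is just unitarity: a closed quantum system evolves as ρ → UρU†, which leaves the von Neumann entropy S(ρ) = -Tr(ρ ln ρ) unchanged, so fine-grained information is neither created nor destroyed over time.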
On #4, which I agree is important, there seems to be some explanation left implicit or left out.
#4: Middle management performance is inherently difficult to assess. Maze behaviors systematically compound this problem.
But middle managers who are good at producing actual results will therefore want to decrease mazedom, in order that their competence be recognized. Is it, then, that incompetent people will be disproportionately attracted to - and capable of crowding others out from - middle management? That they will be attracted is a no-brainer, ...
When I read
To be clear, if GNW is "consciousness" (as Dehaene describes it), then the attention schema is "how we think about consciousness". So this seems to be at the wrong level! [...] But it turns out, he wants to be one level up!
I thought, thank goodness, Graziano (and steve2152) gets it. But in the moral implications section, you immediately start talking about attention schemas rather than simply attention. Attention schemas aren't necessary for consciousness or sentience; they're necessary for meta-consciousness. ...
how to quote
Paste text into your comment and then select/highlight it. Formatting options will appear, including a quote button.
People often try to solve the problem of counterfactuals by suggesting that there will always be some uncertainty. An AI may know its source code perfectly, but it can't perfectly know the hardware it is running on.
How could Emmy, an embedded agent, know its source code perfectly, or even be certain that it is a computing device under the Church-Turing definition? Such certainty would seem dogmatic. Without such certainty, the choice of 10 rather than 5 cannot be firmly classified as an error. (The classification as an error seemed to play an important role in your discussion.) So Emmy has a motivation to keep looking and find that U(10)=10.
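For readers who haven't seen it, the toy case I'm referring to (the 5-and-10 problem, as I understand it from this sequence): Emmy must output 5 or 10, with U(5)=5 and U(10)=10. Reasoning from her own source code, she can arrive at a spurious proof of "if I output 10, then U=0", which makes outputting 10 look like the error and leads her to take the 5.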
Thanks for making point 2. Moral oughts need not motivate sociopaths, who sometimes admit (when there is no cost to doing so) that they've done wrong and just don't give a damn. The "is-ought" gap is better relabeled the "thought-motivation" gap. "Ought"s are thoughts; motives are something else.
Technicalities: Under Possible Precisifications, 1 and 5 are not obviously different. I can interpret them differently, but I think you should clarify them. 2 is to 3 as 4 is to 1, so I suggest listing them in that order, and maybe adding an option that is to 3 as 5 is to 1.
Substance: I think you're passing over a bigger target for criticism, the notion of "outcomes". In general, agents can and do have preferences over decision processes themselves, as contrasted with the standard "outcomes" of most of the literature, like winning or lo...
If there were no Real Moral System That You Actually Use, wouldn't you have a "meh, OK" reaction to either Pronatal Total Utilitarianism or Antinatalist Utilitarianism - perhaps whichever you happened to think of first? How would this error signal - disgust with those conclusions - be generated?
Suppose you have immediate instinctive reactions of approval and disapproval -- let's call these pre-moral judgements -- but that your actual moral judgements are formed by some (possibly somewhat unarticulated) process of reflection on these judgements. E.g., maybe your pre-moral judgements about killing various kinds of animal are strongly affected by how cute and/or human-looking the animals are, but after giving the matter much thought you decide that you should treat those as irrelevant.
In that case, you might have a strong reaction to either of ...
Shouldn't a particular method of inductive reasoning be specified in order to give the question substance?
Great post and great comment. Against your definition of "belief" I would offer the movie The Skeleton Key. But this doesn't detract from your main points, I think.
I think there are some pretty straightforward ways to change your true preferences. For example, if I want to become a person who values music more than I currently do, I can practice a musical instrument until I'm really good at it.
I don't say that we can talk about every experience, only that if we do talk about it, then the basic words/concepts we use are about things that influence our talk. Also, the causal chain can be as indirect as you like: A causes B causes C ... causes T, where T is the talk; the talk can still be about A. It just can't be about Z, where Z is something which never appears in any chain leading to T.
I just now added the caveat "basic" because you have a good point about free will. (I assume you mean contracausal "free will". I think ca...
The core problem remains that, if some event A plays no causal role in any verbal behavior, it is impossible to see how any word or phrase could refer to A. (You've called A "color perception A", but I aim to dispute that.)
Suppose we come across the Greenforest people, who live near newly discovered species including the greater geckos. Greenforesters use the word "gumie" always and only when they are very near greater geckos. Since greater geckos are extremely well camouflaged, they can only be seen at short range. Also, all greate...
Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.
Let's take a Pearl-style perspective on it. Given do(keep hand there), and keeping other present macroscopic facts fixed, what varies in the macroscopic past?
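In symbols (my gloss on Pearl, writing Past for the macroscopic past variables and Keep for the action): intervening gives P(Past | do(Keep)) = P(Past), because the intervention cuts the arrows into the action node, whereas merely conditioning gives P(Past | Keep) ≠ P(Past) in general, since observing the action is evidence about its causes. So under the intervention, nothing in the macroscopic past needs to vary.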
Sean Carroll writes in The Big Picture, p. 380:
The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.
I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we ...
We not only stop at red lights; we also make statements like S1: "subjectively, red is closer to violet than it is to green." We have cognitive access both to "objective" phenomena like the family of wavelengths coming from the traffic light, and also to "subjective" phenomena such as certain low-level sensory-processing outputs. The epiphenomenalist has a theory about the latter. Your steelman is well taken, given this clarification.
By the way, the fact that there is a large equivalence class of wavelength combinations that will be per...
The point is literally semantic. "Experience" refers to (to put it crudely) the things that generally cause us to say "experience", because almost all words derive their reference from the things that cause their utterances (inscriptions, etc.). "Horse" means horse because horses typically occasion the use of "horse". If there were a language in which cows typically occasioned the word "horse", in that language "horse" would mean cow.
I agree that non-universal-optimizers are not necessarily safe. There's a reason I wrote "many", not "all", canonical arguments. In addition to gaming the system, there's also the time-honored technique of rewriting the rules. I'm concerned about possible feedback loops. Evolution brought about the values we know and love in a very specific environment. If that context changes while evolution accelerates, I foresee a problem.
I think the "non universal optimizer" point is crucial; that really does seem to be a weakness in many of the canonical arguments. And as you point out elsewhere, humans don't seem to be universal optimizers either. What is needed from my epistemic vantage point is either a good argument that the best AGI architectures (best for accomplishing the multi-decadal economic goals of AI builders) will turn out to be close approximations to such optimizers, or else some good evidence of the promise and pitfalls of more likely architectures.
Needless to say, that there are bad arguments for X does not constitute evidence against X.
This is the right answer, but I'd like to add emphasis on the self-referential nature of the evaluation of humans in the OP. That is, it uses human values to assess humanity, and comes up with a positive verdict. Not terribly surprising, nor terribly useful in predicting the value, in human terms, of an AI. What the analogy predicts is that evaluated by AI values, AI will probably be a wonderful thing. I don't find that very reassuring.
Well, if you narrow "metaphysics" down to "a priori First Philosophy", as the example suggests, then I'm much less enthusiastic about it. But if metaphysics is just (as I conceive it) continuous with science - an account of what the world contains and how it works - then we need a healthy dose of it just to get off the ground in epistemology.
The post persuasively displays some of the value of hermeneutics for philosophy and knowledge in general. Where I part ways is with the declaration that epistemology precedes metaphysics. We know far more about the world than we do about our senses. Our minds are largely outward-directed by default. What you know far exceeds what you know that you know, and what you know how you know is smaller still. The prospects for reversing cart and horse are dim to nonexistent.
Mostly it's no-duh, but the article seems to set up a false contrast between justification in ethics and life practice. Large swaths of everyday ethical conversation are justificatory; this is a key feature that the philosopher needs to respect.
Nice move with the lyrical section titles.
There's a lot of room in between fully integrated consciousness and fully split consciousness. The article seems to take a pretty simplistic approach to describing the findings.
Here's another case of non-identity, which deserves more attention: having a child. This one's not even hypothetical. There is always a chance to conceive a child with some horrible birth defect that results in suffering followed by death, a life worse than nothing. But there is a far greater chance of having a child with a very good life. The latter chance morally outweighs the former.
Well, unless you're an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. Rumination was healthy in the former.
The linked paper is only about current practices, their benefits and harms. You're right, though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.
I got started by Sharvy's "It Ain't the Meat, It's the Motion", but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.
I'm convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn't be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers's Absent Qualia argument, I'm eager to hear it.
You seem to be inventing a guarantee that I don't need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.
Mentioning something is not a prerequisite for having it.
I'm not equating thoughts and experiences. I'm relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.
I'm not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I'd probably object and on others I wouldn't.
Given the disagreement over what "causality" is, I suspect that different CDTs might have different tolerances for adding precommitment without spoiling the point of CDT. For an example of a definition of causality that has interesting implications for decision theory, see Douglas Kutach, Causation and its Basis in Fundamental Physics. There's a nice review here. Defining "causation" Kutach's way would allow both making and keeping precommitments to count as causing good results. It would also at least partly collapse the divergence between CDT and EDT. Maybe completely - I haven't thought that through yet.