Quoting https://en.wikipedia.org/wiki/Kleene%27s_T_predicate:

The ternary relation T₁(e,i,x) takes three natural numbers as arguments. The triples of numbers (e,i,x) that belong to the relation (the ones for which T₁(e,i,x) is true) are defined to be exactly the triples in which x encodes a computation history of the computable function with index e when run with input i, and the program halts as the last step of this computation history.

In other words: If someone gives you an encoding of a program, an encoding of its input and a trace of its run, you can check with a primitive recursive function whether you have been lied to.
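A minimal sketch of that idea in Python, with a toy transition table standing in for a real encoding of programs (the names check_trace and HALT and the whole encoding are my own illustration, not Kleene's actual T):

```python
# Toy model: a "program" is a dict mapping state -> next state,
# and a trace is the list of visited states, ending in HALT.

HALT = "halt"

def check_trace(program, start, trace):
    """Verify, by a single loop bounded by the trace length (hence
    primitive-recursive in spirit), that `trace` is an honest halting
    run of `program` on input `start`."""
    if not trace or trace[0] != start:
        return False
    for prev, cur in zip(trace, trace[1:]):
        if program.get(prev) != cur:
            return False            # a forged step: we have been lied to
    return trace[-1] == HALT        # the run must actually halt

prog = {"s0": "s1", "s1": HALT}
print(check_trace(prog, "s0", ["s0", "s1", HALT]))  # True
print(check_trace(prog, "s0", ["s0", HALT]))        # False (step skipped)
```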

A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed not to be primitive recursive.

And indeed, Kleene's normal form theorem requires one application of the μ-operator, which introduces unbounded search.
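To make the contrast concrete, here is a small Python sketch (all names are mine; the T and U references are schematic): the μ-operator is just an unbounded while-loop, while Ackermann is computable by general recursion despite outgrowing every primitive recursive function:

```python
def mu(predicate):
    """Unbounded search: return the least n with predicate(n) true.
    Diverges if no such n exists -- exactly the kind of loop that
    primitive recursion (bounded loops only) cannot express."""
    n = 0
    while not predicate(n):
        n += 1
    return n

def ackermann(m, n):
    """Computable, but grows too fast to be primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Kleene normal form, schematically: f(i) = U(mu(lambda x: T(e, i, x))),
# where T and U are primitive recursive and the single mu-application
# is the only unbounded part.

print(mu(lambda n: n * n >= 10))  # 4
print(ackermann(2, 3))            # 9
```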

I don't buy your first argument against time travel. Even under the model of the universe as a static mathematical object connected by wave-function consistency constraints, there is still a consistent interpretation of the intuitive notion of "time travel":

The "passage" of time is the continuous measurement of the environment by a subsystem (which incidentally believes itself to be an 'observer') and the resulting entanglement with farther away parts of the system as "time goes on" (i.e. further towards positive time). Then time-travel is a measurement of a "past" state or described differently (but the same thing) an entanglement between a subsystem (the location in the past the traveler visited) and its surroundings, which does not respect the common constraint that entanglement propagates at speed of light (because the traveler came from some future location (and its past light-cone) which is -- "surprisingly" -- entangled with the past). While violating common understanding of space-time, it is not logically impossible in this understanding of the universe.

This kind of time travel allows interaction with the past (interactions are not different from observations anyway).

Am I overlooking something here?

Here is my attempt to convince you of 1 as well (in your numbering):

I disagree with your claim: "From a preference utilitarian perspective, only a self-conscious being can have preferences for the future, therefore you can only violate the preferences of a self-conscious being by killing it."

On the contrary, every agent which follows an optimization goal exhibits some preference (even if it does not understand it itself), namely that its optimization goal shall be reached. The ability to understand one's own optimization goal is not necessary for a preference to be morally relevant; otherwise babies and even unconscious people would have no moral weight. (And even awake people don't understand all their optimization goals.)

This leaves the problem of how to weight the various agents. A solution which gives equal weight "per agent" has ugly consequences (we should all immediately take immunosuppressants to save our bacteria) and is ill-defined, because many systems allow multiple ways to count "agents" (does each cell get equal weight? each organ? each human? each family? each company? each species? each gene allele?).

A decent solution seems to be to take the computing power (alternatively: the ability to reach its optimization goals) of the system exhibiting optimizing behavior as its "weight" (if only for game-theoretic reasons; it certainly makes sense to strongly value the preferences of extremely powerful optimizers). Unfortunately, there is no clear scale of "computing power" one can calculate with. Extrapolating from intuition gives a trivial weight to bacteria's goals and a weight near our own to the goals of other humans. In the concrete context of killing animals for meat, it should be observed that animals are generally rather capable of reaching their goals in the wild (e.g. getting food, spawning offspring); better than human children, I'd say.
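Schematically, the proposal amounts to a weighted sum of preference satisfaction; all names and numbers below are invented purely for illustration:

```python
# Toy aggregation: each optimizer's preference satisfaction is weighted
# by its (made-up) "computing power"; bacteria end up contributing
# almost nothing, humans dominate.

agents = [
    {"name": "bacterium", "power": 1e-9, "satisfaction": 0.9},
    {"name": "chicken",   "power": 1e-3, "satisfaction": 0.2},
    {"name": "human",     "power": 1.0,  "satisfaction": 0.7},
]

total_utility = sum(a["power"] * a["satisfaction"] for a in agents)
print(total_utility)  # ~0.7002: dominated by the most powerful optimizer
```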

I, for one, like my moral assumptions and cached thoughts challenged regularly. This works well with repugnant conclusions. Hence I upvoted this post (to -21).

I find two interesting questions here:

  1. How to reconcile opposing interests in subgroups of a population of entities whose interests we would like to include in our utility function. An obvious answer is facilitating trade between all interested parties to increase utility. But how do we react to subgroups whose utility function values trade itself negatively?

  2. Given that mate selection is a huge driver of evolution, I wonder if there is actually a non-cultural, i.e. genetic, component to the aversion (which I feel) against providing everyone with sexual encounters / the ability to create genetic offspring / raise children. And I'd also be interested in hearing where other people feel the "immoral" line lies...

Interestingly, it appears (at least in my local cultural circle) that being attended to by human caretakers when incapacitated by age is supposed to be a basic right. Hence there must be some other reason, beyond the problem of rights that have to be fulfilled by other persons, why the particular example assumed to underlie the parable is reprehensible to many people.

To disagree with this statement is to say that a scanned living brain, cloned, remade and started will contain the exact same consciousness, not similar, the exact same thing itself, that simultaneously exists in the still-living original. If consciousness has an anatomical location, and therefore is tied to matter, then it would follow that this matter here is the exact matter as that separate matter there. This is an absurd proposition.

You conclude that consciousness in your scenario cannot have 1 location.

If consciousness does not have an anatomical / physical location then it is the stuff of magic and woo.

You conclude that consciousness in your scenario cannot have 0 locations.

However, there are more numbers than those two.

The closest parallel I see to your scenario is a program run on two computers for redundancy (as is sometimes done in safety-critical systems). It is indeed the same program in the same state, but in 2 locations.

The two consciousnesses will diverge if given different input data streams, but they are (at least initially) similar. Given that the state of your brain tomorrow will be different from its state today, why do you care about the wellbeing of that future human, who is not identical to now-you? Assuming that you do care about your tomorrow, why does it make a difference when that human is separated from you by space (as in your scenario) rather than by time?
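As a toy illustration of that parallel (the Mind class is an invented stand-in, not a claim about how minds actually work): two copies share one state until their input streams differ:

```python
import copy

class Mind:
    """Minimal stand-in for a stateful system that accumulates observations."""
    def __init__(self):
        self.memories = []

    def observe(self, event):
        self.memories.append(event)

original = Mind()
original.observe("childhood")

replica = copy.deepcopy(original)  # "scan and restart": same state, 2 locations
print(original.memories == replica.memories)  # True: initially indistinguishable

original.observe("stayed on Earth")
replica.observe("woke up on Mars")
print(original.memories == replica.memories)  # False: diverged with the inputs
```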

Regarding auras: I am not sure if I observed the same phenomenon, but if I sit still and keep my eyes fixed on the same spot for a while (in a still scene), my eyes will, after a while, get accustomed to the exact incoming light pattern and everything kind of fades to gray. But very slight movements will then generate colorful borders on edges (like a Gaussian edge detector).
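If the analogy holds, the effect resembles a difference-of-Gaussians band-pass filter, which responds only where the image changes sharply; a rough sketch using NumPy and SciPy (the sigma values are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_narrow=1.0, sigma_wide=2.0):
    """Band-pass filter: strong response at edges, near zero on flat regions."""
    return gaussian_filter(image, sigma_narrow) - gaussian_filter(image, sigma_wide)

# A flat field with a single step edge at column 16.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

response = difference_of_gaussians(img)
print(np.abs(response[16]).argmax())  # peak response sits right at the edge
```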

  • Best way to fix climate change: "Renewables / Nuclear"
  • Secret Services are necessary to fight terrorism / Secret Services must be abolished
  • GPL / BSD licences
  • Install a smoke detector

  • Do martial arts training until you get the falling more or less right. While this might be helpful against muggers, the main benefit is the reduced probability of injury in various unfortunate situations.
