This is mostly just arguing over semantics. Just replace "philosophical zombie" with whatever your preferred term is for a physical human who lacks any qualia.
This is mostly just arguing over semantics.
If an argument is about semantics, this is not a good response. That is...
Just replace "philosophical zombie" with whatever your preferred term is for
An important part of normal human conversations is error correction. Suppose I say "three, as an even number, ..."; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake; if...
Why is it that philosophical zombies are unlikely to exist? Eliezer's article Zombies! Zombies? seemed to mostly be an argument against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it possible that individuals like Dennett are themselves philosophical zombies?
Also, what are LessWrong's views on the idea of a ...
In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it possible that individuals like Dennett are themselves philosophical zombies?
Nope, your "in other words" summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.
This video by CGPGrey is somewhat related to the idea of memetic tribes and the conflicts that arise between them.
This is a bit unrelated to the original post, but Ted Kaczynski has an interesting hypothesis on the Great Filter, mentioned in Anti-Tech Revolution: Why and How.
But once self-propagating systems have attained global scale, two crucial differences emerge. The first difference is in the number of individuals from among which the "fittest" are selected. Self-prop systems sufficiently big and powerful to be plausible contenders for global dominance will probably number in the dozens, or possibly in the hundreds; they certainly will not number in the...
One perspective on pain is that it is ultimately caused by less than ideal Darwinian design of the brain. Essentially, we experience pain and other forms of suffering for the same reason that we have backwards retinas. Other proposed systems, such as David Pearce's gradients of bliss, would accomplish the same things as pain without any suffering involved.
Should the mind projection fallacy actually be considered a fallacy? It seems like being unable to imagine a scenario where something is possible is in fact Bayesian evidence that it is impossible, but only weak Bayesian evidence. Being unable to imagine a scenario where 2+2=5, for instance, could be considered evidence that 2+2 ever equaling 5 is impossible.
This LessWrong Survey had the lowest turnout since Scott's original survey in 2009.
What is the average amount of turnout per survey, and what has the turnout been year by year?
Does anyone here know any ways of dealing with brain fog and sluggish cognitive tempo?
What is the probability that induction works?
On a related question, if Unfriendly Artificial Intelligence is developed, how "unfriendly" is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario could be if the UAI actively tortures humanity, but I can't think of many scenarios in which this would occur.
Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that "induction works" has a sufficiently large prior probability, and that there exists some single large ordinal that is well-ordered.
If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?
Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.
Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need to either show that the prior probability is sufficiently large, or come up with a new method of calculating probabilities that...
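A toy calculation (my own sketch, not from the linked article) makes the "sufficiently small prior" point concrete: in odds form, a fixed body of evidence multiplies the prior odds by a fixed likelihood ratio, so for any finite amount of evidence there is a prior small enough to survive it.

```python
def posterior_prob(prior, likelihood_ratio, n_observations):
    """Posterior P(H) after n independent observations, each with likelihood
    ratio P(data|H)/P(data|~H). Pure odds-form Bayes; numbers are illustrative."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio ** n_observations
    return posterior_odds / (1 + posterior_odds)

# 100 observations, each 10x likelier if "induction works": prior 0.5 -> near certainty.
print(posterior_prob(0.5, 10, 100))
# The same evidence against an astronomically small prior barely moves it:
print(posterior_prob(1e-200, 10, 100))  # still on the order of 1e-100
```

This is exactly the commenter's worry: no fixed evidence stream can overcome an arbitrarily hostile prior, which is why the argument has to be about how the prior itself is set.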
For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?
Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: That the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal that is well-ordered exists. Are there any ways to justify belief in either of these two things yet that do not require faith?
Eliezer wrote this article a few years ago, about the two things that rationalists need faith to believe. Has any progress been made in finding justifications for either of these things that do not require faith?
We guess we are around the LW average.
What would you estimate to be the LW average?
Although a sufficiently advanced artificial superintelligence could probably prevent something like the scenario discussed in this article from occurring.
Ted Kaczynski wrote about something similar to this in Industrial Society And Its Future.
...We distinguish between two kinds of technology, which we will call small-scale technology and organization-dependent technology. Small-scale technology is technology that can be used by small-scale communities without outside assistance. Organization-dependent technology is technology that depends on large-scale social organization. We are aware of no significant cases of regression in small-scale technology. But organization-dependent technology DOES regress when th
Does it make more sense to sign up for cryonics at Alcor or the Cryonics Institute?
If you are a consequentialist, it's the exact same calculation you would use if happiness were your goal, just with different criteria for determining what constitutes a "good" or "bad" world state.
I agree with the conclusion that the Great Filter is more likely behind us than ahead of us. Some proposed explanations, such as AI disasters or advanced civilizations retreating into virtual worlds, do not seem to fully resolve the Fermi Paradox. In the AI-disaster case, for instance, even if an artificial superintelligence destroyed the species that created it, the artificial superintelligence would likely colonize the universe itself. And if some civilizations become sufficiently advanced but choose not to colonize for whatever reason, there would likely be at least some civilizations that would.
But what exactly constitutes "enough data"? With any finite amount of data, couldn't it be cancelled out if your prior probability is small enough?
Effective altruist YouTubers
Such as?
Believing in a soul that departs to the afterlife would seem to make cryonics pointless. What I am asking is, are there Christians here that believe in an afterlife and a soul, but plan on being cryopreserved regardless?
For any Christians here on LessWrong, are you currently or do you plan on signing up for cryonics? If so, how do you reconcile being a cryonicist with believing in a Christian afterlife?
TL;DR: In the study, a number of White and Black children were adopted into upper middle class homes in Minnesota, and the researchers had the adopted children take IQ tests at age 7 and age 17. What they found is that the Black children consistently scored lower on IQ tests, even when controlling for education and upbringing. Basically the study suggests that IQ is to an extent genetic, and the population genetics of different ethnic groups are a contributing factor to differences in average IQ and achievement.
Channels that make videos on topics similar to those covered in the Sequences.
Are there any 2017 LessWrong surveys planned?
I'm surprised that there aren't any active YouTube channels with LessWrong-esque content, or at least none that I am aware of.
Avoiding cryonics because of possible worse than death outcomes sounds like a textbook case of loss aversion.
Ted Kaczynski wrote something similar to this in Industrial Society And Its Future, albeit with different motivations.
...
- Revolutionaries should have as many children as they can. There is strong scientific evidence that social attitudes are to a significant extent inherited. No one suggests that a social attitude is a direct outcome of a person’s genetic constitution, but it appears that personality traits are partly inherited and that certain personality traits tend, within the context of our society, to make a person more likely to hold this or that soci
I remember a while ago Eliezer wrote this article, titled Bayesians vs. Barbarians. In it, he describes how in a conflict between rationalists and barbarians, or, to use your analogy, Athenians and Spartans, the barbarians/Spartans will likely win. In the world today, low-IQ individuals are reproducing at far higher rates than high-IQ individuals, so are "winning" in an evolutionary sense. Having universalist, open, trusting values is not necessarily a bad thing in itself, but it should not be taken to such an extent that this altruism becomes pathological and leads to the protracted suicide of the rationalist community.
Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?
What is the general consensus on LessWrong regarding Race Realism?
This, and find better ways to optimize power efficiency.
How do you even define free will? It seems like a poorly defined concept in general, and is more or less meaningless. The notion of free will that people talk about seems to be little more than a glorified form of determinism and randomness.
But why should the probability for higher-complexity hypotheses be any lower?
But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, as opposed to having each consecutive hypothesis having an arbitrary complexity level?
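One standard (partial) answer to the convergence question, sketched here on my own rather than quoted from anyone in the thread: any prior over countably infinitely many hypotheses that sums to 1 must assign probabilities tending to zero, so *some* ordering has to receive the bulk of the mass early. Ordering by description length, e.g. P(k) ∝ 2^-(k+1) for hypotheses indexed by complexity k (an assumed indexing for illustration), is one choice that normalizes exactly:

```python
def complexity_prior(k):
    """Prior mass assigned to the hypothesis of description length k
    (assumed indexing); halves with each extra bit."""
    return 2.0 ** -(k + 1)

# Partial sums of the geometric series 1/2 + 1/4 + 1/8 + ... approach 1,
# so this prior is normalized over infinitely many hypotheses.
total = sum(complexity_prior(k) for k in range(60))
print(total)  # approaches 1.0 as more terms are included
```

Note this only shows the complexity ordering is *consistent*, not that it is uniquely privileged; an "arbitrary complexity level" per hypothesis could also sum to 1, which is why the question of justifying Occam's Razor remains open in the thread.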
How is it that Solomonoff Induction, and by extension Occam's Razor, is justified in the first place? Why is it that hypotheses with higher Kolmogorov complexity are less likely to be true than those with lower Kolmogorov complexity? If it is justified by that fact that it has "worked" in the past, does that not require Solomonoff induction to justify that has worked, in the sense that you need to verify that your memories are true, and thus requires circular reasoning?
With transhumanist technology, what is the probability that any human alive today will live forever, and not just thousands, or millions, of years? I assume the probability is extremely small but non-zero.
Also, how do we know when the probability surpasses 50%? Couldn't the prior probability of the sun rising tomorrow be astronomically small, so that Bayesian updates on the evidence of past sunrises merely make the probability slightly less astronomically small?
How do we determine our "hyper-hyper-hyper-hyper-hyperpriors"? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?
Plastination is one technology you might be interested in.
The money you would have spent on giving money to a beggar might be better spent on something that will decrease existential risk or contribute to transhumanist goals, such as donating to MIRI or the Methuselah Foundation.
Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously, i.e. the sun rising before, increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?
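One standard answer (my own addition, not from the comment) is Laplace's rule of succession: take a uniform hyperprior over the sun's unknown "rise rate" p as the "first hyperprior," and after n consecutive observed sunrises the posterior predictive probability of one more sunrise works out to (n+1)/(n+2):

```python
def rule_of_succession(successes, trials):
    """Posterior predictive probability of success under a uniform
    Beta(1,1) hyperprior on the unknown rate: (s+1)/(t+2)."""
    return (successes + 1) / (trials + 2)

# Before any data at all, the uniform hyperprior gives probability 1/2:
print(rule_of_succession(0, 0))
# After 10,000 consecutive sunrises:
print(rule_of_succession(10_000, 10_000))  # just over 0.9999
```

This doesn't dissolve the commenter's regress, since the uniform hyperprior is itself an unjustified starting point, but it shows how a single explicit hyperprior lets the updates get going from zero data.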
What do you think of Avshalom Elitzur's arguments for why he reluctantly thinks interactionist dualism is the correct metaphysical theory of consciousness?