All of ImmortalRationalist's Comments + Replies

What do you think of Avshalom Elitzur's arguments for why he reluctantly thinks interactionist dualism is the correct metaphysical theory of consciousness?

5jessicata
Responding to a few points from his article: Not true if "qualia" refers to a high-level property of some matter. These explanations aren't actually completely stupid; if the plant is optimizing then its optimization could possibly neatly be described by analogy with properties of human optimization that are perceivable in qualia. Maybe empathizing with a plant is a good way of understanding its optimization. Of course, the analogy isn't great (though is much better in the case of large animals), hence why I'm saying the explanations are "not completely stupid" rather than "good". False dichotomy, the explanations may apply to different properties of the process, or to different levels of analysis of the process. (I am, here, rejecting the principle of unique causality) "Is" seems like overeager reductionism. Telling what program a given computer is running, based on its physical configuration, is nontrivial. This seems true. Note that "perceived as different" is doing a lot of work here. Qualia are known "directly" in a way that is close to metaphysically basic, whereas percepts are known indirectly, e.g. by looking at a brain scan of one's self or another person (note, this observation is also known through qualia, indirectly, since the brain scan must be observed). These are different epistemic modalities, that could nonetheless always yield the same information due to being two views on the same process. False under identity/double-aspect theory. Because changing qualia means changing percepts too. "Percept" as he uses the term refers to the entire brain processing starting from seeing something, including the part of the processing that processes things through world-perception ontology, self-concepts, etc. So "qualia per se - as nonidentical with percepts" is already assuming the falsity of identity/double-aspect theory. Why would there be "neural correlates of false beliefs"? The brain can't tell that all of its false beliefs are false; that would req

This is mostly just arguing over semantics. Just replace "philosophical zombie" with whatever your preferred term is for a physical human who lacks any qualia.

Vaniver110
This is mostly just arguing over semantics.

If an argument is about semantics, this is not a good response. That is...

Just replace "philosophical zombie" with whatever your preferred term is for

An important part of normal human conversations is error correction. Suppose I say "three, as an even number, ..."; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake; if... (read more)

Why is it that philosophical zombies are unlikely to exist? Eliezer's article Zombies! Zombies? seemed mostly to be an argument against epiphenomenalism. In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not impossible that individuals like Dennett are themselves philosophical zombies?

Also, what are LessWrong's views on the idea of a ... (read more)

Vaniver110
In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not impossible that individuals like Dennett are themselves philosophical zombies?

Nope, your "in other words" summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.

9mako yass
It's kind of against the moderation guidelines of "Make personal statements instead of statements that try to represent a group consensus" for anyone to try to answer that question hahah =P But, authentically relating just for myself as a product of the local meditations: There is no reason to think continuity of anthropic measure uh.. exists? On a metaphysical level. We can conclude from Clones in Rooms style thought experiments that different clumps of matter have different probabilities of observing their own existence (different quantities of anthropic measure or observer-moments) but we have no reason to think that their observer-moments are linked together in any special way. Our memories are not evidence of that. If your subjectivity-mass was in someone else, a second ago, you wouldn't know. An agent is allowed to care about the observer-states that have some special physical relationship to their previous observer-states, but nothing in decision theory or epistemology will tell you what those physical relationships have to be. Maybe the agent does not identify with itself after teleportation, or after sleeping, or after blinking. That comes down to the utility function, not the metaphysics.
7Charlie Steiner
P-zombies are indeed all about epiphenomenalism. Go check out David Chalmers' exposition for the standard usage. I think the problem with epiphenominalism is that it's treating ignorance as a positive license to intoduce its epiphenomenal essence. We know that the brain in your body does all sorts of computational work, and does things that function like memory, and planning, and perception, and being affected by emotions. We might even use a little poetic language and say that there is "someone home" in your body - that it's convenient and natural to treat this body as a person with mental attributes. But it is the unsolved Hard Problem of Consciousness, as some would say, to prove that the person home in your body is you. We could have an extra consciousness-essence attached to these bodies, they say. You can't prove we don't! When it comes to denying qualia, I think Dennett would bring up the anecdote about magic from Lee Siegel: "I'm writing a book on magic”, I explain, and I'm asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers. “No”, I answer: “Conjuring tricks, not real magic”. Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic." Dennett thinks peoples' expectations are that "real qualia" are the things that live in the space of epiphenomenal essences and can't possibly be the equivalent of a conjuring trick.
2TheWakalix
Zombie Dennett: which is more likely? That philosophers could interpret the same type of experience in fundamentally different ways, or that Dennett has some neurological defect which has removed his qualia but not his ability to sense and process sensory information? Consciousness continuity: I know I’m a computationalist and [causalist?], and I am weakly confident that most LWers share at least one of these beliefs. (Speaking for others is discouraged here, so I doubt you’ll be able to get more than a poll of beliefs, or possibly a link to a previous poll.) Definitions of terms: computationalism is the view that cognition, identity, etc. are all computations or properties of computations. Causalist is a word I made up to describe the view that continuity is just a special form of causation, and that all computation-preserving forms of causation preserve identity as well. (That is, I don’t see it as fundamentally different if the causation from one subjective moment to the next is due to the usual evolution of brains over time or due to somebody scanning me and sending the information to a nanofactory, so long as the information that makes me up isn’t lost in this process.)

This video by CGPGrey is somewhat related to the idea of memetic tribes and the conflicts that arise between them.

This is a bit unrelated to the original post, but Ted Kaczynski has an interesting hypothesis on the Great Filter, mentioned in Anti-Tech Revolution: Why and How.

But once self-propagating systems have attained global scale, two crucial differences emerge. The first difference is in the number of individuals from among which the "fittest" are selected. Self-prop systems sufficiently big and powerful to be plausible contenders for global dominance will probably number in the dozens, or possibly in the hundreds; they certainly will not number in the
... (read more)

One perspective on pain is that it is ultimately caused by the less-than-ideal Darwinian design of the brain. Essentially, we experience pain and other forms of suffering for the same reason that we have backwards retinas. Other proposed systems, such as David Pearce's gradients of bliss, would accomplish the same things that pain does without any suffering involved.

Should the mind projection fallacy actually be considered a fallacy? Being unable to imagine a scenario in which something is possible seems to be Bayesian evidence that it is impossible, just weak Bayesian evidence. Being unable to imagine a scenario in which 2+2=5, for instance, could be considered evidence that it is impossible for 2+2 to ever equal 5.

0Pattern
2Oscar_Cunningham
This isn't an accurate description of the mind projection fallacy. The mind projection fallacy happens when someone thinks that some phenomenon occurs in the real world but in fact the phenomenon is a part of the way their mind works. But yes, it's common to almost all fallacies that they are in fact weak Bayesian evidence for whatever they were supposed to support.
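To make "weak Bayesian evidence" concrete, here is a minimal sketch; the 10:1 likelihood ratio is an invented illustration, not anything claimed in the thread:

```python
def posterior_impossible(prior_impossible, likelihood_ratio):
    """P(impossible | no counterexample could be imagined), via the odds form of Bayes."""
    prior_odds = prior_impossible / (1 - prior_impossible)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Suppose failing to imagine a counterexample is 10x likelier when the claim
# really is impossible than when it isn't (an assumed, made-up ratio).
print(posterior_impossible(0.01, 10))   # ~0.092: a low prior is nudged up a little
print(posterior_impossible(1e-9, 10))   # ~1e-8: against a tiny prior it barely registers
```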

This LessWrong Survey had the lowest turnout since Scott's original survey in 2009.

What is the average amount of turnout per survey, and what has the turnout been year by year?

1satt
I believe the following is a comprehensive list of LW-wide surveys and their turnouts. Months are those when the results were reported. 1. May 2009, 166 2. December 2011, 1090 3. December 2012, 1195 4. January 2014, 1636 5. January 2015, 1503 6. May 2016, 3083 And now in the current case we have "about 300" responses, although results haven't been written up and published. I hope they will be. If the only concern is sample size, well, 300 beats zero!

Does anyone here know any ways of dealing with brain fog and sluggish cognitive tempo?

0Elo
Yes there are various things. What are you taking? What have you stopped taking? Do you have any allergies? What's your hr + BP. Has it always been like this or is it recent? Pm if you like.

What is the probability that induction works?

0Gurkenglas
By Solomonoff induction, the hypothesis that governs the universe under the assumption that induction works has less complexity penalty than one that counts to a number on the order of 10^80 to 10^18000 steps while the universe is running and then starts working differently by a factor of about 10^17 (since that's how many turing machines with 6 states there are, which is the number of states you need to count to that sort of number of steps), so the probability that induction works can be given an upper bound of about 1-10^-17.
0torekp
Shouldn't a particular method of inductive reasoning be specified in order to give the question substance?

On a related question, if Unfriendly Artificial Intelligence is developed, how "unfriendly" is it expected to be? The most plausible-sounding outcome may be human extinction. The worst-case scenario would be the UAI actively torturing humanity, but I can't think of many scenarios in which this would occur.

0hairyfigment
I would only expect the latter if we started with a human-like mind. A psychopath might care enough about humans to torture you; an uFAI not built to mimic us would just kill you, then use you for fuel and building material. (Attempting to produce FAI should theoretically increase the probability by trying to make an AI care about humans. But this need not be a significant increase, and in fact MIRI seems well aware of the problem and keen to sniff out errors of this kind. In theory, an uFAI could decide to keep a few humans around for some reason - but not you. The chance of it wanting you in particular seems effectively nil.)

Eliezer Yudkowsky wrote this article a while ago, which basically states that all knowledge boils down to two premises: that "induction works" has a sufficiently large prior probability, and that some single large ordinal is well-ordered.

If you are young, healthy, and have a long life expectancy, why should you choose CI? In the event that you die young, would it not be better to go with the one that will give you the best chance of revival?

Not sure how relevant this is to your question, but Eliezer wrote this article on why philosophical zombies probably don't exist.

Explain. Are you saying that since induction appears to work in your everyday life, this is Bayesian evidence that the statement "Induction works" is true? This has a few problems. The first problem is that if you make the prior probability sufficiently small, it cancels out any evidence you have for the statement being true. To show that "Induction works" has at least a 50% chance of being true, you would need to either show that the prior probability is sufficiently large, or come up with a new method of calculating probabilities that... (read more)

For those in this thread signed up for cryonics, are you signed up with Alcor or the Cryonics Institute? And why did you choose that organization and not the other?

1Turgurth
I saw this same query in the last open thread. I suspect you aren't getting any responses because the answer is long and involved. I don't have time to give you the answer in full either, so I'll give you the quick version: I am in the process of signing up with Alcor, because after ten years of both observing cryonics organizations myself and reading what other people say about them, Alcor has given a series of cues that they are the more professional cryonics organization. So, the standard advice is: if you are young, healthy with a long life expectancy, and are not wealthy, choose C.I., because they are less expensive. If those criteria do not apply to you, choose Alcor, as they appear to be the more serious, professional organization. In other words: choose C.I. as the type of death insurance you want to have, but probably won't use, or choose Alcor as the type of death insurance you probably will use.

Eliezer Yudkowsky wrote this article about the two things that rationalists need faith to believe in: that the statement "Induction works" has a sufficiently large prior probability, and that some single large ordinal is well-ordered. Are there any ways yet to justify belief in either of these two things that do not require faith?

0hairyfigment
Not exactly. MIRI and others have research on logical uncertainty, which I would expect to eventually reduce the second premise to induction. I don't think we have a clear plan yet showing how we'll reach that level of practicality. Justifying a not-super-exponentially-small prior probability for induction working feels like a category error. I guess we might get a kind of justification from better understanding Tegmark's Mathematical Macrocosm hypothesis - or, more likely, understanding why it fails. Such an argument will probably lack the intuitive force of 'Clearly the prior shouldn't be that low.'
1drethelin
You can justify a belief in "Induction works" by induction over your own life.

Eliezer wrote this article a few years ago about the two things that rationalists need faith to believe. Has any progress been made in finding justifications for either of these things that do not require faith?

We guess we are around the LW average.

What would you estimate to be the LW average?

1b4yes
According to LW 2014 survey, IQ 135-140. Sounds about right.

Although a sufficiently advanced artificial superintelligence could probably prevent something like the scenario discussed in this article from occurring.

Ted Kaczynski wrote about something similar to this in Industrial Society And Its Future.

We distinguish between two kinds of technology, which we will call small-scale technology and organization-dependent technology. Small-scale technology is technology that can be used by small-scale communities without outside assistance. Organization-dependent technology is technology that depends on large-scale social organization. We are aware of no significant cases of regression in small-scale technology. But organization-dependent technology DOES regress when th

... (read more)

Does it make more sense to sign up for cryonics at Alcor or the Cryonics Institute?

0J Thomas Moros
If you can afford it, it makes more sense to sign up at Alcor. Alcor's patient care trust improves the chances that you will be cared for indefinitely after cryopreservation. CI asserts their all-volunteer status as a benefit, but the cryonics community has not been growing and has been aging. It is not unlikely that there could be problems with the availability of volunteers over the next 50 years.

If you are a consequentialist, it's the exact same calculation you would use if happiness were your goal, just with different criteria to determine what constitutes "good" and "bad" world states.

4JenniferRM
I think you're missing the thrust of my question. I'm asking something more like "What if mental states are mostly a means of achieving worthwhile consequences, rather than being mostly the consequences that should be cared about in and for themselves?" It is "consequences" either way. But what might be called intrinsic hedonism would then be a consequentialism that puts the causal and moral stop sign at "how an action makes people feel" (mostly ignoring the results of the feelings (except to the degree that the feelings might cause other feelings via some series of second order side effects)). An approach like this suggests that if people in general could reliably achieve an utterly passive and side effect free sort of bliss, that would be the end game... it would be an ideal stable outcome for people to collectively shoot for, and once it was attained the lack of side effects would keep it from being disrupted. By contrast, hedonic instrumentalism (that I'm mostly advocating) would be a component of some larger consequentialism that is very concerned with what arises because of feelings (like what actions, with what results) and defers the core axiological question about the final value of various world states to a separate (likely independent) theory. The position of hedonic instrumentalism is basically that happiness that causes behavior with bad results for the world is bad happiness. Happiness that causes behavior with good results in the world is good happiness. And happiness is arguably pointless if it is "sterile"... having no behavioral or world affecting consequences (though this depends on how much control we have over our actions and health via intermediaries other than by wireheading our affective subsystems). What does "good" mean here? That's a separate question. Basically, the way I'm using the terms here: intrinsic hedonism is "an axiology", but hedonic instrumentalism treats affective states mostly as causal intermediates that lead to large

I agree with the conclusion that the Great Filter is more likely behind us than ahead of us. Some explanations of the Fermi Paradox, such as AI disasters or advanced civilizations retreating into virtual worlds, do not seem to fully explain the Fermi Paradox. For AI disasters, for instance, even if an artificial superintelligence destroyed the species that created it, the artificial superintelligence would likely colonize the universe itself. If some civilizations become sufficiently advanced but choose not to colonize for whatever reason, there would likely be at least some civilizations that would.

But what exactly constitutes "enough data"? With any finite amount of data, couldn't it be cancelled out if your prior probability is small enough?

0Luke_A_Somers
Yes, but that's not the way the problem goes. You don't fix your prior in response to the evidence in order to force the conclusion (if you're doing it anything like right). So different people with different priors will have different amounts of evidence required: 1 bit of evidence for every bit of prior odds against, to bring it up to even odds, and then a few more to reach it as a (tentative, as always) conclusion.
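A minimal sketch of the bookkeeping Luke describes, with odds measured in bits (the specific bit counts are made up for illustration):

```python
def posterior_probability(prior_bits_against, evidence_bits):
    """Posterior after updating prior odds of 1 : 2**prior_bits_against
    with evidence whose total likelihood ratio is 2**evidence_bits : 1."""
    odds_for = 2.0 ** (evidence_bits - prior_bits_against)
    return odds_for / (1 + odds_for)

print(posterior_probability(20, 20))  # 0.5 -- one bit of evidence per bit of prior odds against
print(posterior_probability(20, 23))  # ~0.89 -- "a few more" bits for a tentative conclusion
```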

effective altruist youtubers

Such as?

Believing in a soul that departs to the afterlife would seem to make cryonics pointless. What I am asking is, are there Christians here that believe in an afterlife and a soul, but plan on being cryopreserved regardless?

For any Christians here on LessWrong, are you currently or do you plan on signing up for cryonics? If so, how do you reconcile being a cryonicist with believing in a Christian afterlife?

2entirelyuseless
I am neither of those, but the obvious answer would be that a soul would depart to the afterlife upon information theoretic death.

TL;DR: In the study, a number of White and Black children were adopted into upper-middle-class homes in Minnesota, and the researchers had the adopted children take IQ tests at age 7 and age 17. What they found is that the Black children consistently scored lower on IQ tests, even after controlling for education and upbringing. Basically, the study suggests that IQ is to an extent genetic, and that the population genetics of different ethnic groups are a contributing factor to differences in average IQ and achievement.

Channels that make videos on similar topics covered in the Sequences.

Are there any 2017 LessWrong surveys planned?

2namespace
Sorry for the late response but yes, I was just working on finishing one up now.

Is anyone here familiar with the Minnesota Transracial Adoption Study? Any opinions on it?

0Viliam
First reaction after looking at the title: "Was someone trying to find out whether adopting children to a family of different race will change their color of skin?" :D Sorry, I wish I could post a more relevant comment, but I don't understand it; seems like everyone participating in the study was losing IQ points over time, including the parents. Or maybe I am reading it completely wrong; that is actually quite likely.

I'm surprised that there aren't any active YouTube channels with LessWrong-esque content, or at least none that I am aware of.

0ThoughtSpeed
I just started a Facebook group to coordinate effective altruist youtubers. I'd definitely say rationality also falls under the umbrella. PM me and I can add you. :)
0ignoranceprior
What would count as "LessWrong-esque"?

Avoiding cryonics because of possible worse than death outcomes sounds like a textbook case of loss aversion.

Ted Kaczynski wrote something similar to this in Industrial Society And Its Future, albeit with different motivations.

  1. Revolutionaries should have as many children as they can. There is strong scientific evidence that social attitudes are to a significant extent inherited. No one suggests that a social attitude is a direct outcome of a person’s genetic constitution, but it appears that personality traits are partly inherited and that certain personality traits tend, within the context of our society, to make a person more likely to hold this or that soci
... (read more)

I remember a while ago Eliezer wrote this article, titled Bayesians vs. Barbarians. In it, he describes how in a conflict between rationalists and barbarians, or in your analogy Athenians and Spartans, the barbarians/Spartans will likely win. In the world today, low-IQ individuals are reproducing at far higher rates than high-IQ individuals, so they are "winning" in an evolutionary sense. Having universalist, open, trusting values is not necessarily a bad thing in itself, but it should not be taken to such an extent that this altruism becomes pathological and leads to the protracted suicide of the rationalist community.

1wubbles
Dysgenesis is worrying, but we have the means to fight it: subsidized egg freezing and childcare, changes to employment culture, and it is a very slow prospect. I don't think that is a correct summary of the essay at all, which is really pointing to a problem with how we think about coordination.

Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?

0MrMind
While I was searching for the manifesto, I noticed a strange incongruence between the English and the Italian Wikipedia. While the latter source is very similar to the former, there is this strange sentence: which translates roughly as "his document 35000 words-long Industrial Society and Its Future (also known as The Red Pill, also called "Unabomber Manifesto"). Wait, what? The Red Pill? Since when? There's no trace of such name in the English version. Any source on that? Is it plausible? Is it some kind of fucked-up joke?

What is the general consensus on LessWrong regarding Race Realism?

3James_Miller
I'm not sure, but I doubt there is a LW consensus on this issue.
3Lumifer
Some people consider it obvious and some people consider it distasteful.

How do you even define free will? It seems like a poorly defined concept in general, and is more or less meaningless. The notion of free will that people talk about seems to be little more than a glorified form of determinism and randomness.

1entirelyuseless
It certainly isn't meaningless, and there are several possible meanings. You mention one of them yourself. Also, all words are poorly defined, since we define words by other words (which are themselves subject to the same problem) or by pointing to things, which is a poor way to define things.

But why should the probability for lower-complexity hypotheses be any lower?

6gjm
It shouldn't, it should be higher. If you just meant "... be any higher?" then the answer is that if the probabilities of the higher-complexity hypotheses tend to zero, then for any particular low-complexity hypothesis H all but finitely many of the higher-complexity hypotheses have lower probability. (That's just part of what "tending to zero" means.)

But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, as opposed to each consecutive hypothesis having an arbitrary complexity level?

0entirelyuseless
gjm's explanation is correct.
4gjm
Almost all hypotheses have high complexity. Therefore most high-complexity hypotheses must have low probability. (To put it differently: let p(n) be the total probability of all hypotheses with complexity n, where I assume we've defined complexity in some way that makes it always a positive integer. Then the sum of the p(n) converges, which implies that the p(n) tend to 0. So for large n the total probability of all hypotheses of complexity n must be small, never mind the probability of any particular one.) Note: all this tells you only about what happens in the limit. It's all consistent with there being some particular high-complexity hypotheses with high probability.

How is it that Solomonoff Induction, and by extension Occam's Razor, is justified in the first place? Why is it that hypotheses with higher Kolmogorov complexity are less likely to be true than those with lower Kolmogorov complexity? If it is justified by the fact that it has "worked" in the past, does that not require Solomonoff induction to justify that it has worked (in the sense that you need to verify that your memories are true), and thus involve circular reasoning?

0hairyfigment
See: You only need faith in two things and the comment on the binomial monkey prior (a theory which says that the 'past' does not predict the 'future'). You could argue that there exists a more fundamental assumption, hidden in the supposed rules of probability, about the validity of the evidence you're updating on. Here I can only reply that we're trying to explain the data regardless of whether or not it "is true," and point to the fact that you're clearly willing to act like this endeavor has value.
0entirelyuseless
There are more hypotheses with a high complexity than with a low complexity, so it is mathematically necessary to assign lower probabilities to high complexity cases than to low complexity cases (broadly speaking and in general -- obviously you can make particular exceptions) if you want your probabilities to sum to 1, because you are summing an infinite series, and to get it to come to a limit, the terms in the series must be generally decreasing.

With transhumanist technology, what is the probability that any human alive today will live forever, and not just thousands, or millions of years? I assume an extremely small, but non-zero, amount.

0MrMind
If you mean 'forever' literally... well, the amount of energy inside our cosmological horizon is finite, and is becoming increasingly unreachable. If you stipulate that there's some means for humans to reach outside of our light-cone, then you have to confront with the possibility of time-travel (as far as we know). The conclusion is that 'forever' is either unreachable or loses its meaning once you consider sufficiently advanced tech.
2NancyLebovitz
If you mean literally forever, I think the odds aren't good. Admittedly, physics is somewhat in flux, but there doesn't seem to be any guarantee that there will be a universe which which has continuity with ours trillions of years from now, though millions shouldn't be a problem at all. Also, I'm not sure what survival means for the very long haul. You might have a consciousness which has continuity with yours millions of years later, but I suspect there would be so much change that your current self and your far future self would have little or nothing in common.

Also, how do we know when the probability surpasses 50%? Couldn't the prior probability of the sun rising tomorrow be astronomically small, so that Bayesian updates on the evidence of previous sunrises merely make the probability slightly less astronomically small?

0Epictetus
The prior probability could be anything you want. Laplace advised taking a uniform distribution in the absence of any other data. In other words, unless you have some reason to suppose one outcome is more likely than another, you should weigh them equally. For the sunrise problem, you could invoke the laws of physics and our observations of the solar system to assign a prior probability much greater than 0.5. Example: If I handed you a biased coin and asked for the prior probability that it comes up heads, it would be reasonable to suppose that there's no reason to suppose it biased to one side over the other and so assign a prior of 0.5. If I asked about the prior probability of three heads in a row, then there's nothing stopping you from saying "Either it happens or it doesn't, so 50/50". However, if your prior is that the coin is fair, then you can compute the prior for three heads in a row as 1/8.

How do we determine our "hyper-hyper-hyper-hyper-hyperpriors"? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?

0ChristianKl
Intuition. Trusting our brain to come up with something useful.
3Epictetus
In some applications, you can get a base rate through random sampling and go from there. Otherwise, you're stuck making something up. The simplest principle is to assume that if there's no evidence with which to distinguish possibilities, then one should take a uniform distribution (this has obvious drawbacks if the number of possibilities is infinite). Another approach is to impose some kind of complexity penalty, i.e. to have some way of measuring the complexity of a statement and to prefer statements with less complexity. If you have no data, you can't have a good way to calculate the probability of something. If you defend a method by saying it works in practice, then you're using data.
0Ishaan
mutter mutter something something to do with parsimony/complexity/occam?
0hairyfigment
No, practical Bayesian probability starts with an attempt to represent your existing beliefs and make them self-consistent. For a brief post on the more abstract problem, see here.

Plastination is one technology you might be interested in.

0oge
Yup! Unfortunately it isn't ready for humans yet http://blog.brainpreservation.org/2015/04/27/shawn-mikula-on-brain-preservation-protocols/

The money you would have spent on giving money to a beggar might be better spent on something that will decrease existential risk or contribute to transhumanist goals, such as donating to MIRI or the Methuselah Foundation.

Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously, i.e. the sun rising before, increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?

4Epictetus
This is a well-known problem dating back to Laplace (pp 18-19 of the book).