I am not familiar with those concepts. References would be appreciated. 🙏
It seems obvious that your change in relationship with suffering constitutes a kind of value shift, doesn't it?
This is not obvious to me. In the first place, I never held the value "avoid suffering," even before I started my practices. Since before I even knew the concept of suffering, I have had the compulsion to avoid suffering, but the value I held was to transcend it.
What's your relationship with value drift? Are you unafraid of it? That gradual death by mutation? The infidelity of your future self?
I am afraid of value drift, but I ...
Anyone, it seems, can have the experience of “feeling totally fine and at ease while simultaneously experiencing intense … pain”[1]:
It would greatly please me if people could achieve a deeper understanding of suffering just by taking analgesics. If that were the case, perhaps we should encourage people to try them just for that purpose. However, I'm guessing that the health risks, especially cognitive side-effects (a reduction of awareness that would preclude the possibility of gaining any such insight), risks of addiction and logistical issues surr...
I would assign that a probability less than 0.1, and that's because I have already experienced some insights which defy verbal transmission. For instance, I feel that I am close to experientially understanding the question of "what is suffering?" The best way I can formulate my understanding into words is, "there is no such thing as suffering. It is an illusion." I don't think additional words or higher-context instructions would help in conveying my understanding to someone who cannot relate to the experience of feeling totall...
Anyone, it seems, can have the experience of “feeling totally fine and at ease while simultaneously experiencing intense … pain”[1]:
...It turns out there is painless pain: lobotomized people experience that, and “reactive dissociation” is the phrase used to describe the effects sometimes of analgesics like morphine when administered after pain has begun, and the patient reports, to quote Dennett 1978 [PDF] (emphasis in original), that “After receiving the analgesic subjects commonly report not that the pain has disappeared or diminished (as with aspirin) bu
I'm reducing my subjective probability that you will abandon rationality...
I suppose what you are attempting is similar to what Buddha did in the first place. The sages of his time must have felt pained to see their beautiful non-dualism sliced and diced into mass-produced sutras, rather than the poems and songs and mythology which were, up until then, the usual vehicle of expression for these truths.
I guess I'm just narcissistic enough to still be a Quinean naturalist and say 'yep, that is also me.'
Considering God to be part of yourse...
The truths of General Relativity cannot be conveyed in conventional language. But does one have to study the underlying mathematics before evaluating its claims?
Just as there exists a specialized language that accurately conveys General Relativity, there similarly exists a specialized language (mythological language) for conveying mystical truths. However, I think the wrong approach would be to try to understand that language without having undergone the necessary spiritual preparation. As St. Paul says in 1 Corinthians 2:14
The natural person does not a...
I started out as a self-identified rationalist, got fascinated by mysticism and 'went native.' Ever since, I have been watching the rationality community from the sidelines to see if anyone else will 'cross over' as well.
I predict that if Romeo continues to work on methods for teaching meditation, that eventually he will also 'go mystical' and publicly rescind his claim that all perceived metaphysical insights can be explained as pathological disconnects with reality caused by neural rewiring. Conditional on his continuing to teach, I...
Why no probability on "there exists a truth that is very difficult to express in conventional language, such that as contexts change, fixed written accounts of it tend to decay into uselessness, it's so difficult that even most people who get it lack the verbal skill to express it clearly in their words in their time, this is compounded by most people needing higher-context instruction than words alone to get to the point where the words can mean anything to them, and because of this the vast majority of people trying to talk about this round it ...
Cool, I will take a look at the paper!
Great comment, mind if I quote you later on? :)
That said, if you have example problems where a logically omniscient Bayesian reasoner who incorporates all your implicit knowledge into their prior would get the wrong answers, those I want to see, because those do bear on the philosophical question that I currently see Bayesian probability theory as providing an answer to--and if there's a chink in that armor, then I want to know :-)
It is well known where there might be chinks in the armor: namely, what happens when two logically omniscient Bayesians si...
...If the game is really working like they say it is, then the frequentist is often concentrating probability around some random psi for no good reason, and when we actually draw random thetas and check who predicted better, we'll see that they actually converged around completely the wrong values. Thus, I doubt the claim that, setting up the game exactly as given, the frequentist converges on the "true" value of psi. If we assume the frequentist does converge on the right answer, then I strongly suspect either (1) we should be using a prior where
Ok. So the scenario is that you are sampling only from the population f(X)=1.
EDIT: Correct, but you should not get too hung up on the issue of conditional sampling. The scenario would not change if we were sampling from the whole population. The important point is that we are trying to estimate a conditional mean of the form E[Y|f(X)=1]. This is a concept commonly seen in statistics. For example, the goal of non-parametric regression is to estimate the curve g(x) = E[Y|X=x].
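To make the estimand concrete, here is a minimal simulation sketch (the selection function f and the data-generating process are invented just for illustration): estimating E[Y|f(X)=1] by averaging Y over the units satisfying f(X)=1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-generating process (not from the original post):
# X uniform on [0, 1], Y | X ~ Bernoulli(g(X)) with g(x) = 0.2 + 0.6 * x.
def g(x):
    return 0.2 + 0.6 * x

def f(x):
    # Made-up selection function defining the subpopulation of interest.
    return (x > 0.7).astype(int)

n = 100_000
x = rng.uniform(0, 1, n)
y = rng.binomial(1, g(x))

# Direct ("conditional sampling") estimate of E[Y | f(X) = 1]:
# just average Y over the units with f(X) = 1.
mask = f(x) == 1
est_direct = y[mask].mean()

# True value under this toy model:
# E[Y | X > 0.7] = 0.2 + 0.6 * E[X | X > 0.7] = 0.2 + 0.6 * 0.85 = 0.71.
print(est_direct)  # ~0.71
```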
...Can you exhibit a simple example of the scenario in the section "
Update from the author:
Thanks for all of the comments and corrections! Based on your feedback, I have concluded that the article is a little bit too advanced (and possibly too narrow in focus) to be posted in the main section of the site. However, it is clear that there is a lot of interest in the general subject. Therefore, rather than posting this article to main, I think it would be more productive to write a "Philosophy of Statistics" sequence which would provide the necessary background for this kind of post.
The confusion may come from mixing up my setup and Robins/Ritov's setup. There is no missing data in my setup.
I could write up my intuition for the hierarchical model. It's an almost trivial result if you don't assume smoothness, since for any x1,...,xn the parameters g(x1)...g(xn) are conditionally independent given p and distributed as F(p), where F is the maximum entropy Beta with mean p (I don't know the form of the parameters alpha(p) and beta(p) off-hand). Smoothness makes the proof much more difficult, but based on high-dimensional intuition one ...
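To show the structure I have in mind, here is a minimal sketch of the hierarchy (the concentration parameter kappa below is a made-up stand-in, not the maximum-entropy alpha(p), beta(p) I mention above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hierarchical sketch: given p, the g(x_1)...g(x_n) are conditionally i.i.d.
# draws from a Beta distribution with mean p; the y_i are then Bernoulli(g(x_i)).
# kappa is a hypothetical concentration parameter, NOT the max-entropy choice
# referred to above -- it only pins down one concrete Beta with mean p.
def sample_hierarchy(n, kappa=5.0, rng=rng):
    p = rng.uniform(0, 1)                              # top-level mean
    g = rng.beta(kappa * p, kappa * (1 - p), size=n)   # g(x_i) | p, conditionally i.i.d.
    y = rng.binomial(1, g)                             # observed binary outcomes
    return p, g, y

p, g, y = sample_hierarchy(10)
print(p, y)
```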
I didn't reply to your other comment because, although you are making valid points, you have veered off-topic since your initial comment. The question of "which observations to make?" is not a question of inference but rather one of experimental design. If you think this question is relevant to the discussion, it means that you understand neither the original post nor my reply to your initial comment. The questions I am asking have to do with what to infer after the observations have already been made.
By "importance sampling distribution" do you mean the distribution that tells you whether Y is missing or not?
Right. You could say the cases of Y1|D=1 you observe in the population are an importance sample from Y1, the hypothetical population that would result if everyone in the population were treated. E[Y1], the quantity to be estimated, is the mean of this hypothetical population. The importance sampling distribution is the conditional density of X|D=1, and the importance sampling weights are the density ratio q(x) = p(x|D=1)/p(x) = Pr[D=1|x]/Pr[D=1], where p(x) is the marginal density of X (i.e., you invert these weights when taking the average).
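As a toy illustration of that inversion (the data-generating process below is invented, not Robins/Wasserman's setup): averaging the treated outcomes directly is biased because X|D=1 is the importance sampling distribution, while weighting by 1/Pr[D=1|x] recovers E[Y1].

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
x = rng.uniform(0, 1, n)

# Hypothetical model: the treatment probability (propensity) depends on x,
# and so does the potential outcome under treatment Y1.
pi = 0.1 + 0.8 * x                    # Pr[D = 1 | x]
d = rng.binomial(1, pi)               # treatment indicator
y1 = rng.binomial(1, 0.3 + 0.5 * x)   # potential outcome under treatment

# Naive average over the treated only: biased, because high-x units are
# over-represented among the treated (the importance sample from X | D = 1).
naive = y1[d == 1].mean()

# Inverse-propensity ("invert the weights") estimate of E[Y1]:
ipw = np.mean(d * y1 / pi)

print(naive)  # ~0.62, noticeably above the truth
print(ipw)    # ~0.55 = E[0.3 + 0.5 X] for X uniform on [0, 1]
```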
I will go ahead and answer your first three questions
Objective Bayesians might have "standard operating procedures" for common problems, but I bet you that I can construct realistic problems where two Objective Bayesians will disagree on how to proceed. At the very least the Objective Bayesians need an "Objective Bayesian manifesto" spelling out what are the canonical procedures. For the "coin-flipping" example, see my response to RichardKennaway where I ask whether you would still be content to treat the problem as coin-fl
It is worth noting that the issue of non-consistency is just as troublesome in the finite setting. In fact, in one of Wasserman's examples he uses a finite (but large) space for X.
Yes, I think you are missing something (although it is true that causal inference is a missing data problem).
It may be easier to think in terms of the potential outcomes model. Y0 is the outcome under no treatment, Y1 is the outcome under treatment; you only ever observe either Y0 or Y1, depending on whether D=0 or D=1. Generally you are trying to estimate E[Y1] or E[Y0] or their difference.
The point is that the quantity Robins and Wasserman are trying to estimate, E[Y], does not depend on the importance sampling distribution. Whereas the quantity I am trying ...
My example is very similar to the Robins/Wasserman example, but you end up drawing different conclusions. Robins/Wasserman show that you can't make sense of importance sampling in a Bayesian framework. My example shows that you can't make sense of "conditional sampling" in a Bayesian framework. The goal of importance sampling is to estimate E[Y], while the goal of conditional sampling is to estimate E[Y|event] for some event.
We did talk about this before, that's how I first learnt of the R/W example.
I do not need to model the process f by which that population was selected, only the behaviour of Y within that population?
There are some (including myself and presumably some others on this board) who see this practice as epistemologically dubious. First, how do you decide which aspects of the problem to incorporate into your model? Why should one only try to model E[Y|f(X)=1] and not the underlying function g(x)=E[Y|x]? If you actually had very strong prior information about g(x), say that "I know g(x)=h(x) with probability 1/2 or g(x) = j(x) ...
Good catch, it should be Beta(991, 11). The prior is the uniform Beta(1, 1) and the data are 990 successes and 10 failures.
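For completeness, the standard Beta-Bernoulli conjugate update behind that correction:

$$\text{Beta}(\alpha,\beta)\ \xrightarrow{\ s\ \text{successes},\ f\ \text{failures}\ }\ \text{Beta}(\alpha+s,\ \beta+f),\qquad \text{Beta}(1,1)\ \to\ \text{Beta}(1+990,\ 1+10)=\text{Beta}(991,11).$$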
How do you get the top portion of the second payoff matrix from the first? Intuitively, it should be by replacing the Agent A's payoff with the sum of the agents' payoffs, but the numbers don't match.
Most people are altruists but only to their in-group, and most people have very narrow in-groups. What you mean by an altruist is probably someone who is both altruistic and has a very inclusive in-group. But as far as I can tell, there is a hard trade-off between belonging to a close-knit, small in-group and identifying with a large, diverse but weak in-group. The time you spend helping strangers is time taken away from potentially helping friends and family.
Like V_V, I don't find it "reasonable" for utility to be linear in things we care about.
I will write a discussion topic about the issue shortly.
EDIT: Link to the topic: http://lesswrong.com/r/discussion/lw/mv3/unbounded_linear_utility_functions/
I'll need some background here. Why aren't bounded utilities the default assumption? You'd need some extraordinary arguments to convince me that anyone has an unbounded utility function. Yet this post and many others on LW seem to implicitly assume unbounded utility functions.
Let's talk about Von Neumann probes.
Assume that the most successful civilizations exist digitally. A subset of those civilizations would selfishly pursue colonization; the most convenient means would be through Von Neumann machines.
Tipler (1981) pointed out that due to exponential growth, such probes should already be common in our galaxy. Since we haven't observed any, we must be alone in the universe. Sagan and Newman countered that intelligent species should actually try to destroy probes as soon as they are detected. This counterargument, known as...
Sociology, political science and international politics, economics (graduate level), psychology, psychiatry, medicine.
Undergraduate mathematics, Statistics, Machine Learning, Intro to Apache Spark, Intro to Cloud Computing with Amazon
Thanks--this is a great analysis. It sounds like you would be much more convinced if even a few people already agreed to tutor each other--we can try this as a first step.
That's OK, you can get better. And you can use any medium which suits you. It could be as simple as assigning problems and reading, then giving feedback.
This is an interesting counterexample, and I agree with Larry that using priors which depend on pi(x) is really no Bayesian solution at all. But if this example is really so problematic for Bayesian inference, can one give an explicit example of some function theta(x) for which no reasonable Bayesian prior is consistent? I would guess that only extremely pathological and unrealistic functions theta(x) would cause trouble for Bayesians. What I notice about many of these "Bayesian non-consistency" examples is that they require consistency over ve...
EDIT: Edited my response to be more instructive.
On some level it's fine to make the kinds of qualitative arguments you are making. However, to assess whether a given hypothesis is really robust to parameters like the ubiquity of civilizations, colonization speed, and alien psychology, you have to start formulating models and actually quantify the size of the parameter space which would result in a particular prediction. A while ago I wrote a tutorial on how to do this:
http://lesswrong.com/lw/5q7/colonization_models_a_tutorial_on_computational/
which covers the ...
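To give a flavor of what I mean by quantifying the parameter space (this is a toy sketch, not the model from the linked tutorial, and every parameter range below is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy parameter sweep: one civilization arises at a uniformly random time in
# the past, at a uniformly random distance within ~50,000 light-years, and
# expands as a sphere at speed v (a fraction of c). We then ask what fraction
# of the sampled parameter space predicts its front would already have reached us.
n_samples = 100_000
T_max = rng.uniform(1e8, 5e9, n_samples)            # yr: how far back civs could arise
birth_time = rng.uniform(0, 1, n_samples) * T_max   # yr before present
distance = rng.uniform(0, 5e4, n_samples)           # ly: distance to the civilization
v = 10 ** rng.uniform(-4, -1, n_samples)            # expansion speed in fractions of c (log-uniform)

reached_us = v * birth_time >= distance             # ly travelled >= distance in ly
print(reached_us.mean())  # fraction of parameter space where we'd already have been visited
```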
The second civ would still avoid building them too close to each other. This is all clear if you do the analysis.
Thanks for the references.
I am interested in answering questions of "what to want." Not only is it important for individual decision-making, but there are also many interesting ethical questions. If a person's utility function can be changed through experience, is it ethical to steer it in a direction that would benefit you? Take the example of religion: suppose you could convince an individual to convert to a religion, and then further convince them to actively reject new information that would endanger their faith. Is this ethical? (My opi...
Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don't believe these kinds of scenarios are extreme beyond belief--they could come up in transhuman contexts.) In this case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.
The second civilization would just go ahead and build them anyway, since doing so maximizes their own utility function. Of course, there is an additional question of whether and how the first civilization will try to stop this from happening, since the second civ's Catastrophe Engines reduce their own utility. If the first civ ignores them, the second civ builds Catastrophe Engines the same way as before. If the first civ enforces a ban on Catastrophe Engines, then the second civ colonizes space using conventional methods. But most likely the first civ would eliminate the second civ (the "Berserker" scenario).
For the original proposal:
Explain:
Invalidate:
Catastrophe Engines should still be detectable due to extremely concentrated energy emission. A thorough infrared sky survey would rule them out along with more conventional hypotheses such as Dyson spheres.
If it becomes clear there is no way to exploit vacuum energy, this eliminates one of the main candidates for a new energy source.
A better understanding of the main constraints for engineering Matrioshka br
Disclaimer: I am lazy and could have done more research myself.
I'm looking for work on what I call "realist decision theory." (A loaded term, admittedly.) To explain realist decision theory, contrast with naive decision theory. My explanation is brief since my main objective at this point is fishing for answers rather than presenting my ideas.
Naive Decision Theory
Assumes that individuals make decisions individually, without need for group coordination.
Assumes individuals are perfect consequentialists: their utility function is only a funct
I mostly agree with you, but we may disagree on the implausibility of exotic physics. Do you consider all explanations which require "exotic physics" to be less plausible than any explanation that does not? If you are willing to entertain "exotic physics", then are there many ideas involving exotic physics that you find more plausible than Catastrophe Engines?
In the domain of exotic physics, I find Catastrophe Engines to be relatively plausible, since there are already analogues of similar phenomena in known physics: f...
There are only a limited number of ideas we can work on
You are right in general. However, it is also a mistake to limit your scope to too few of the most promising ideas. Suppose we put a number K on the number of different explanations we should consider for the Fermi paradox. What number K do you think would give the best tradeoff between thoroughness and time?
It's not a contest. And although my explanation invokes unknown physics, it makes specific predictions which could potentially be validated or invalidated, and it has actionable consequences. Could you elaborate on what criteria make an idea "worth entertaining"?
Regardless of whether ETs are sending signals, presumably we should be able to detect Type II or Type III civilizations, given most proposals for what such civilizations should look like.
There exists a technological plateau for general intelligence algorithms, and biological neural networks already come close to optimal. Hence, recursive self-improvement quickly hits an asymptote.
Therefore, artificial intelligence represents a potentially much cheaper way to produce and coordinate intelligence compared to raising humans. However, it will not have orders of magnitude more capability for innovation than the human race. In particular, if humans are unable to discover breakthroughs enabling vastly more efficient production of computational ...
There is no way to raise a human safely if that human has the power to exponentially increase their own capabilities and survive independently of society.
You can try to reduce philosophy to science, but how can you justify the scientific method itself? To me, philosophy refers to the practice of asking any kind of "meta" question. To question the practice of science is philosophy, as is the practice of questioning philosophy. The arguments you make are philosophical arguments--and they are good arguments. But to make a statement to the effect of "all philosophy is cognitive science" is too broad a generalization.
What Socrates was doing was asking "meta" questions about intuiti...
Hopefully people here do not interpret "rationalists" as synonymous for "the LW ingroup." For one, you can be a rationalist without being a part of LW. And secondly, being a part of LW in no way certifies you as a rationalist, no matter how many internal "rationality tests" you subject yourself to.
A different kind of "bias-variance" tradeoff occurs in policy-making. Take college applications. One school might admit students based only on the SAT score. Another admits students based on scores, activities, essays, etc. The first school might reject a lot of exceptional people who just happen to be bad at test-taking. The second school tries to make sure they accept those kinds of exceptional people, but in the process of doing so, they will admit more unexceptional people with bad test scores who somehow manage to impress the admissions...
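A toy simulation of that tradeoff (every number below is invented for illustration): school A ranks by test score alone, which systematically penalizes bad test-takers, while school B mixes in a noisy holistic signal, which recovers some of those applicants but also lets in more unexceptional ones.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy admissions model; every number here is invented for illustration.
n = 100_000
quality = rng.normal(0, 1, n)                  # true ability
bad_test_taker = rng.random(n) < 0.2           # 20% systematically underperform on tests
test = quality - 1.5 * bad_test_taker + rng.normal(0, 0.3, n)

# School A: test score only (low variance, but biased against bad test-takers).
# School B: test plus a noisy holistic signal (less biased, more variance).
holistic = quality + rng.normal(0, 1.5, n)
score_a = test
score_b = 0.5 * test + 0.5 * holistic

k = 5_000  # each school admits its top k applicants
admit_a = np.argsort(score_a)[-k:]
admit_b = np.argsort(score_b)[-k:]

exceptional = quality > 2
for name, admit in [("A (test only)", admit_a), ("B (holistic)", admit_b)]:
    print(name,
          "fraction of admits who are exceptional:", round(float(exceptional[admit].mean()), 3),
          "exceptional bad test-takers admitted:", int((exceptional & bad_test_taker)[admit].sum()))
```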
EM202623997 state complexity hierarchy
Relative to any cellular automaton capable of universal computation, initial states can be classified according to a nested hierarchy of complexity classes. The first three levels of the hierarchy have been informally known since the beginnings of cellular automata theory in the 20th century, and the next two levels were also speculated to exist, motivated by the idea of formalizing an abstract notion of "organism" and an abstract notion of "sentience", respectively. EM-brain 202623897, a descendant of ...
Simulated dream state experiments
Simulated dream state experiments (SDSEs) are computer simulation experiments involving simulated human sentiences in a dream state. Since the passing of the Banford agreement (1) in 2035, SDSEs have been the exclusive means of ethically conducting simulation experiments on simulated human sentiences without active consent (2), although contractual consent (3) is still universally required for SDSEs. SDSEs have widespread scientific, commercial, educational, political, military and legal purposes. Scientific studies using SDSEs...
Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street. Yet he decides to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.
Thanks for the link MakoYass.
I am familiar with the concept of superrationality, which seems similar to what you are describing. The lack of a special relationship between observer moments--let's call it non-continuity--is also a common concept in many mystical traditions. I view both of these concepts as different from the concept of unity, "we are all one".
Superrationality combines a form of unity with a requirement for rationality. I could think that "we are all one" without thinking that we should behave rationally. If I th...