Is there anywhere I can find a decent analysis of the effectiveness and feasibility of our current methods of cryonic preservation?
(one that doesn't originate with a cryonics institute)
Well, that doesn't seem too difficult -
(one that doesn't originate with a cryonics institute)
Oh.
So, who exactly do you expect to be doing this analysis? The most competent candidates are the cryobiologists, and they are ideologically committed to cryonics not working* and have in the past demonstrated their dishonesty**.
* Literally: I understand the bylaw banning cryonicists from the main cryobiology association is still in effect.
** E.g., by claiming on TV that cryonics couldn't work because of the 'exploding lysosomes post-death' theory, even after experiments had disproven that theory.
I've read the metaethics sequence twice and am still unclear on what the basic points it's trying to get across are. (I read it and get to the end and wonder where the "there" is there. What I got from it is "our morality is what we evolved, and humans are all we have therefore it is fundamentally good and therefore it deserves to control the entire future", which sounds silly when I put it like that.) Would anyone dare summarise it?
Morality is good because goals like joy and beauty are good. (For qualifications, see Appendices A through OmegaOne.) This seems like a tautology, meaning that if we figure out the definition of morality it will contain a list of "good" goals like those. We evolved to care about goodness because of events that could easily have turned out differently, in which case "we" would care about some other list. But, and here it gets tricky, our Good function says we shouldn't care about that other list. The function does not recognize evolutionary causes as reason to care. In fact, it does not contain any representation of itself. This is a feature. We want the future to contain joy, beauty, etc, not just 'whatever humans want at the time,' because an AI or similar genie could and probably would change what we want if we told it to produce the latter.
This comment by Richard Chappell explained Eliezer's metaethical views clearly and concisely. It was very highly upvoted, so apparently the collective wisdom of the community considered it accurate. It didn't receive an explicit endorsement from Eliezer, though.
I'm pretty sure Eliezer is actually wrong about whether he's a meta-ethical relativist, mainly because he's using words in a slightly different way from the way meta-ethical relativists use them. Or rather, he thinks that MER uses one specific word in a way that isn't really kosher. (A statement which I think he's basically correct about, but it's a purely semantic quibble and so a stupid thing to argue about.)
Basically, Eliezer is arguing that when he says something is "good" that's a factual claim with factual content. And he's right; he means something specific-although-hard-to-compute by that sentence. And similarly, when I say something is "good" that's another factual claim with factual content, whose truth is at least in theory computable.
But importantly, when Eliezer says something is "good" he doesn't mean quite the same thing I mean when I say something is "good." We actually speak slightly different languages in which the word "good" has slightly different meanings. Meta-Ethical Relativism, at least as summarized by Wikipedia, describes this fact with the sentence "terms such as 'good,' 'bad,' 'right'...
I am wondering if risk analysis and mitigation is a separate "rationality" skill. I am not talking about some esoteric existential risk, just your basic garden-variety everyday stuff. While there are several related items here (ugh fields, halo effect), I do not recall EY or anyone else addressing the issue head-on, so feel free to point me to the right discussion.
A couple of embarrassingly basic physics questions, inspired by recent discussions here:
On occasion people will speak of some object "exiting one's future light cone". How is it possible to escape a light cone without traveling in a spacelike direction?
Does any interpretation of quantum mechanics offer a satisfactory derivation of the Born rule? If so, why are interpretations that don't offer one still considered candidates? If not, why do people speak as if the lack of such a derivation were a point against MWI?
Suppose (just to fix ideas) that you are at rest, in some coordinate system. Call FLC(t) your future light cone from your space-time position at time t.
An object that is with you at t=0 cannot exit FLC(0), no matter how it moves from there on. But it can accelerate in such a way that its trajectory is entirely outside FLC(T) for some T>0. Then it makes sense to say that the object has exited your future light cone: nothing you do after time T can affect it.
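To make the escape claim concrete, here is a small numerical sketch of my own (not from the original comment), in units where c = 1, for the special case of constant proper acceleration a: the object's worldline approaches the asymptote x = t - 1/a, so it stays outside FLC(T) for every T > 1/a.

```python
# Sketch: an object starting next to you at rest at t = 0 and accelerating with
# constant proper acceleration a follows x(t) = (sqrt(1 + (a t)^2) - 1) / a.
# Units: c = 1. FLC(T) from your worldline (x = 0) is the region x <= t - T.
import math

a = 1.0  # proper acceleration

def x_object(t):
    return (math.sqrt(1 + (a * t) ** 2) - 1) / a

def stays_outside_flc(T, steps=200):
    # True if the object never enters FLC(T) at the sampled times t >= T.
    return all(x_object(T + k) > k for k in range(steps))

print(stays_outside_flc(T=1.5))  # True:  T > 1/a, nothing you do after T reaches it
print(stays_outside_flc(T=0.5))  # False: T < 1/a, a light signal sent at T still catches it
```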
Very rapid increase/acceleration. Originally it's the sound you hear if you pour gasoline on the ground and set fire to it.
What do people mean here when they say "acausal"?
Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?
"Acausal" is used as a contrast to Causal Decision Theory (CDT). CDT states that decisions should be evaluated with respect to their causal consequences; ie if there's no way for a decision to have a causal impact on something, then it is ignored. (More precisely, in terms of Pearl's Causality, CDT is equivalent to having your decision conduct a counterfactual surgery on a Directed Acyclic Graph that represents the world, with the directions representing causality, then updating nodes affected by the decision.) However, there is a class of decisions for which your decision literally does have an acausal impact. The classic example is Newcomb's Problem, in which another agent uses a simulation of your decision to decide whether or not to put money in a box; however, the simulation took place before your actual decision, and so the money is already in the box or not by the time you're making your decision.
"Acausal" refers to anything falling in this category of decisions that have impacts that do not result causally from your decisions or actions. One example is, as above, Newcomb's Problem; other examples include:
One must distinguish different varieties of MWI. There is an old version of the interpretation (which, I think, is basically what most informed laypeople think of when they hear "MWI") according to which worlds cannot interact. This is because "world-splitting" is a postulate that is added to the Schrödinger dynamics. Whenever a quantum measurement occurs, the entire universe (the ordinary 3+1-dimensional universe we are all familiar with) duplicates (except that the two versions have different outcomes for the measurement). It's basically as mysterious a process as collapse, perhaps even more mysterious.
This is different from the MWI most contemporary proponents accept. This MWI (also called "Everettianism" or "The Theory of the Universal Wavefunction" or...) does not actually have full-fledged separate universes. The fundamental ontology is just a single wavefunction. When macroscopic branches of the wavefunction are sufficiently separate in configuration space, one can loosely describe it as world-splitting. But there is nothing preventing these branches from interfering in principle, just as microscopic branches interfere in the two-slit ...
No. The splitting is not in physical space (the space through which you travel in a spaceship), but in configuration space. Each point in configuration space represents a particular arrangement of fundamental particles in real space.
Moving in real space changes your position in configuration space of course, but this doesn't mean you'll eventually move out of your old branch into a new one. After all, the branches aren't static. You moving in real space is a particular aspect of the evolution of the universal wavefunction. Specifically, your branch (your world) is moving in configuration space.
Don't think of the "worlds" in MWI as places. It's more accurate to think of them as different (evolving) narratives or histories. Splitting of worlds is a bifurcation of narratives. Moving around in real space doesn't change the narrative you're a part of, it just adds a little more to it. Narratives can collide, as in the double slit experiment, which leads to things appearing as if both (apparently incompatible) narratives are true -- the particle went through both slits. But we don't see this collision of narratives at the macroscopic level.
Also, if MWI hypothesis is true, there's no way for one branch to interact with another later, right? If there are two worlds that are different based on some quantum event that occurred in 1000 CE, those two worlds will never interact, in principle, right?
To expand on what pragmatist said: The wavefunction started off concentrated in a tiny corner of a ridiculously high-dimensional space (configuration space has several dimensions for every particle), and then spread out in a very non-uniform way as time passed.
In many cases, the wavefunction's rule for spreading out (the Schrödinger equation) allows two "blobs" to "separate" and then "collide again" (hence the two-slit experiment, Feynman paths, and all sorts of wavelike behavior). The scare quotes are there because it's never perfect physical separation; it's more like the way the pointwise sum of two Gaussian functions with very different means looks like two "separated" blobs.
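A minimal numerical illustration of that last point (the parameters are arbitrary, my own example): the pointwise sum of two well-separated Gaussians is one smooth function, but it is essentially zero in between, so it looks like two disjoint blobs.

```python
import math

def gaussian(x, mean, sigma=1.0):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# One function with two widely separated "blobs" at x = -5 and x = +5.
xs = [i / 100.0 for i in range(-1000, 1001)]
psi = [gaussian(x, -5.0) + gaussian(x, 5.0) for x in xs]

# Between the blobs the function is essentially zero, so they look disjoint
# even though nothing has physically split into two separate pieces.
in_between = max(v for x, v in zip(xs, psi) if abs(x) < 2.0)
print(round(in_between, 4))  # ~0.0111: negligible overlap
```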
But certain kinds of interactions (especially those that lead to a cascade of other interactions) correspond to those blobs "losing" each other. And if they do so, then it's highly unli...
What do people mean here when they say "acausal"?
As I understand it: if you draw out events as a DAG with arrows representing causality, then A acausally affects B in the case that there is no directed path from A to B, and yet a change to A necessitates a change to B, normally because of either a shared ancestor or a logical property.
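A toy sketch of the shared-ancestor case (the variable names and the "perfect copy" relationship are made up for illustration): in the DAG C -> A, C -> B there is no directed path from A to B, yet learning A pins down B.

```python
# Hypothetical DAG with a common cause: C -> A and C -> B.
import random

def reachable(dag, src, dst):
    """Depth-first search for a directed path src -> dst."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(dag.get(node, []))
    return False

dag = {"C": ["A", "B"]}          # edges point from cause to effect
print(reachable(dag, "A", "B"))  # False: no directed path from A to B

# Yet A and B always agree, because both are copies of the shared ancestor C.
samples = [(c, c) for c in (random.choice([0, 1]) for _ in range(10))]
print(all(a == b for a, b in samples))  # True
```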
I most often use it informally to mean "contrary to our intuitive notions of causality, such as the idea that causality must run forward in time", instead of something formal having to do with DAGs. Because from what I understand, causality theorists still disagree on how to formalize causality (e.g., what constitutes a DAG that correctly represents causality in a given situation), and it seems possible to have a decision theory (like UDT) that doesn't make use of any formal definition of causality at all.
People talk about using their 'mental model' of person X fairly often. Is there an actual technique for doing this or is it just a turn of phrase?
Does ZF assert the existence of an actual formula, that one could express in ZF with a finite string of symbols, defining a well-ordering on the-real-numbers-as-we-know-them? I know it 'proves' the existence of a well-ordering on the set we'd call the real numbers if we endorsed the statement "V=L". I want to know about the nature of that set, and how much ZF can prove without V=L or any other form of choice.
Nope.
ZF is consistent with many negations of strong choice. For example, ZF is consistent with every subset of R being Lebesgue measurable, and a well-ordering of R is enough to construct a non-measurable set.
So if ZF could prove the existence of such a formula, ZF+measurability would prove a contradiction; but ZF+measurability is equiconsistent with ZF, so ZF itself would be inconsistent.
It is very hard to say anything about any well-ordering of R; they are monster constructions...
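For reference, here is a sketch of why a well-ordering of R yields a non-measurable set; this is just the standard Vitali construction, written out as a reminder rather than anything specific to the comment above.

```latex
% Sketch: a well-ordering of R gives a Vitali set, which cannot be measurable.
\begin{itemize}
  \item Let $\prec$ be a well-ordering of $\mathbb{R}$. For each $x \in [0,1]$ let
        $f(x)$ be the $\prec$-least element of $(x + \mathbb{Q}) \cap [0,1]$, and set
        $V = \{\, f(x) : x \in [0,1] \,\}$.
  \item The translates $\{\, V + q : q \in \mathbb{Q} \cap [-1,1] \,\}$ are pairwise
        disjoint, cover $[0,1]$, and are contained in $[-1,2]$.
  \item If $V$ had Lebesgue measure $m$, countable additivity and translation
        invariance would force $1 \le \sum_{q \in \mathbb{Q} \cap [-1,1]} m \le 3$,
        which fails both for $m = 0$ and for $m > 0$. So $V$ is not measurable.
\end{itemize}
```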
So the whole idea of an algorithm that operates on or outputs real numbers is nonsensical?
You can work with programs over infinite streams in certain situations. For example, you can write a program that divides a real number by 2, taking an infinite stream as input and producing another infinite stream as output. Similarly, you can write a program that compares two unequal real numbers.
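As a rough sketch of what such programs can look like (my own toy example, representing reals in [0,1) as infinite streams of decimal digits; the function names are made up):

```python
from itertools import count, islice

def constant_digits(*digits):
    """A real in [0,1) given by finitely many decimal digits, then zeros."""
    yield from digits
    while True:
        yield 0

def halve(stream):
    """Lazily divide a real in [0,1) by 2, digit by digit (long division)."""
    remainder = 0
    for d in stream:
        n = 10 * remainder + d
        yield n // 2
        remainder = n % 2

def compare(xs, ys):
    """Return -1 or 1 for two *unequal* reals; never terminates if they are equal."""
    for a, b in zip(xs, ys):
        if a != b:
            return -1 if a < b else 1

third = (3 for _ in count())                                  # 0.333... = 1/3
print(list(islice(halve(third), 8)))                          # [1, 6, 6, 6, 6, 6, 6, 6] -> 1/6
print(compare(constant_digits(2, 5), (3 for _ in count())))   # -1: 0.25 < 0.333...
```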
http://wiki.lesswrong.com/mediawiki/index.php?title=Jargon
Belief update: What you do to your beliefs, opinions and cognitive structure when new evidence comes along.
I know what it means to update your beliefs, and opinions are again just beliefs. What does it mean to "update your cognitive structure"? Does it mean anything, or did whoever wrote it just need a third noun for rhythm purposes?
What are the basic assumptions of utilitarianism and how are they justified? I was talking about ethics with a friend, and after a bunch of questions like "Why is utilitarianism good?" and "Why is it good for people to be happy?" I pretty quickly started to sound like an idiot.
I've been aware of the concept of cognitive biases going back to 1972 or so, when I was a college freshman. I think I've done a decent job of avoiding the worst of them -- or at least better than a lot of people -- though there is an enormous amount I don't know and I'm sure I mess up. Less Wrong is a very impressive site for looking into nooks and crannies and really following things through to their conclusions.
My initial question is perhaps about the social psychology of the site. Why are two popular subjects here (1) extending lifespan, including cryog...
Why shouldn't I go buy a lottery ticket with quantum-randomly chosen numbers, and then, if I win, perform 1x10^17 rapid quantum decoherence experiments, thereby creating more me-measure in the lottery-winning branch and virtually guaranteeing that any given me-observer-moment will fall within a universe where I won?
I obviously do not understand quantum mechanics as well as I thought, because I thought this comment and this comment were saying the same thing, but karma indicates differently. Can someone explain my mistake?
I did not understand Wei Dai's explanation of how UDT can reproduce updating when necessary. Can somebody explain this to me in smaller words?
(Showing the actual code that output the predictions in the example, instead of shunting it off in "prediction = S(history)," would probably also be useful. I also don't understand how UDT would react to a simpler example: a quantum coinflip, where U(action A|heads)=0, U(action B|heads)=1, U(action A|tails)=1, U(action B|tails)=0.)
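One rough way to see what a policy-selection agent would do on that coinflip example (this is only a gloss, and the enumeration below is my own illustration rather than Wei Dai's formalism): enumerate the mappings from observation to action and score each by its expected utility under a fair coin.

```python
# Score every policy (mapping from observed coin outcome to action) using the
# utility table from the comment above, assuming a fair quantum coin.
from itertools import product

U = {("heads", "A"): 0, ("heads", "B"): 1,
     ("tails", "A"): 1, ("tails", "B"): 0}

best = None
for act_if_heads, act_if_tails in product("AB", repeat=2):
    policy = {"heads": act_if_heads, "tails": act_if_tails}
    expected = sum(0.5 * U[(outcome, policy[outcome])] for outcome in ("heads", "tails"))
    if best is None or expected > best[1]:
        best = (policy, expected)

print(best)  # ({'heads': 'B', 'tails': 'A'}, 1.0): simply act on what you observe
```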
wait, that was easier to search than I thought. http://lesswrong.com/lw/kn/torture_vs_dust_specks/
Yes, it is Knuth's arrow notation.
How come in some of the posts that were imported from Overcoming Bias, even when the “Sort By:” setting is locked to “Old”, some of the comments are out of sequence? The same applies to the karma scores at http://lesswrong.com/topcomments/ -- right now, with the filter set to “This week”, the second comment is at 32 whereas the third is at 33, and sometimes when I set the filter to “Today” I even get a few negative-score comments.
Is there a better place to ask similar questions?
I have a question:
If I'm talking with someone about something that they're likely to disbelieve at first, is it correct to say that the longer the conversation goes on, the more likely they are to believe me? The reasoning goes that after each pause or opportunity to interrupt, they can either interrupt and disagree, or do nothing (perhaps nod their head, but it's not required). If they interrupt and disagree, then that's obviously evidence in favor of them disbelieving. However, if they don't, is that evidence in favor of them believing?
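A toy Bayes calculation with made-up numbers (the probabilities below are pure assumptions for illustration): if interrupting is more likely when the listener disbelieves you, then each pause that passes without an interruption is indeed evidence, usually weak, that they believe you.

```python
# Update P(they believe you) after each pause that passes without an interruption.
p_believe = 0.5                   # prior
p_silent_if_believe = 0.9         # assumed: believers rarely interrupt
p_silent_if_disbelieve = 0.6      # assumed: disbelievers interrupt more often

for pause in range(1, 6):
    numerator = p_silent_if_believe * p_believe
    p_believe = numerator / (numerator + p_silent_if_disbelieve * (1 - p_believe))
    print(f"after pause {pause}: P(believe) = {p_believe:.3f}")
```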
The standard utilitarian argument for pursuing knowledge, even when it is unpleasant to know, is that greater knowledge makes us more able to take actions that fulfil our desires, and hence make us happy.
However the psychological evidence is that our introspective access to our desires and our ability to predict what circumstances will make us happy is terrible.
So why should we seek additional knowledge if we can't use it to make ourselves happier? Surely we should live in a state of blissful ignorance as much as possible.
About Decision Theory, specifically DT relevant to LessWrong.
Since there is quite a lot of advanced material already on LW that seems to me as if it would be very helpful if one is perhaps near to finishing or beyond an intermediate stage:
Various articles: http://lesswrong.com/r/discussion/tag/decision_theory/ http://lesswrong.com/r/discussion/tag/decision/
And the recent video (and great transcript): http://lesswrong.com/lw/az7/video_paul_christianos_impromptu_tutorial_on_aixi/
And there are a handful of books that seem relevant to overall decision ...
LWers are almost all atheists. Me too, but I've rubbed shoulders with lots of liberal religious people in my day. Given that studies show religious people are happier than the non-religious (which might not generalize to LWers but might apply to religious people who give up their religion), I wonder if all we really should ask of them is that they subscribe to the basic liberal principle of letting everyone believe what they want as long as they also live by shared secular rules of morality. All we need is for some humility on their part -- not being total...
From Costanza's original thread (entire text):
Meta: