All of TitaniumDragon's Comments + Replies

Art is part of everything, so yes.

Photoshop allows artists to practice and produce works vastly more rapidly, correct errors quite easily, and otherwise do a ton of things they couldn't do before. Other such programs can do many of the same things.

More artists, plus better tools, plus faster production of art, plus better understanding of the technology of art, probably means that the best piece of art ever made was made in the last few decades.

Indeed, it is possible that more art will be produced in the first few decades of this century than was produced by all of humankind during the first several thousand years of our existence.

4Epictetus
"More" and "faster" are not words commonly associated with quality art. Great art often takes years to produce. Technology may have been a limitation in architecture where a massive cathedral could take centuries to build, but on the scale of music or literature technological limitations were the minor hurdles. Computers certainly help things get published faster, but actually writing literature requires lots of thinking that really isn't facilitated by technology.
3[anonymous]
You'll have to elaborate on what exactly you mean by 'good' and 'best' art here. A lot of people would very much prefer Bach to whatever has been produced in the last 10/20/... years.

The real flaw here is that counting arguments is a poor way to make decisions.

"They don't have the ability to make said meteor strikes" is enough on its own to falsify the hypothesis unless you have evidence to the contrary.

As Einstein said about "100 Authors Against Einstein", if he was wrong, they would have only needed one.

It isn't a problem to judge things from different time periods; the Model-T might have been a decent car in 1910, but it is a lemon today.

New things are better than old things. I'd wager that the best EVERYTHING has been produced within the last few decades.

If you're judging "Which is better, X or Y," and X is much older than Y, it is very likely Y is better.

3Lumifer
Art?
3Desrtopa
If we have incentive to continue to produce better things of that type, then probably, but sometimes the incentives we once had to do things well go away. There may not be any modern works of portrait painting which surpass premodern ones, for instance, because photography has removed a lot of the incentive to practice portrait painting.

The idea of natural selection is remarkably awesome and has applications even outside of biology, which is part of what makes it such a great idea.

It isn't literally that for every single person, but assuming you don't have a mutation in your chronobiological genes it is pretty close to that.

People with mutations in various regulatory genes end up with significantly different sleep-wake cycles. The reason that our bodies reset themselves under sunlight is probably to help correct for our clocks being "off" by a bit; indeed, it is probably very difficult to hit exactly 24 hours via evolution. But a cycle of about 24 hours 11 minutes plus daily correction lets it be off by a bit without causing a problem.

Good enough is probably b... (read more)
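A toy sketch of that drift-plus-correction idea; the free-running period and the correction strength below are assumptions for illustration, not values from the comment:

```python
# Toy model of circadian entrainment: an internal clock that runs slightly long
# (24 h 11 min here, purely illustrative) drifts on its own, but a daily light
# cue that removes part of the accumulated error keeps the clock from wandering.

intrinsic_period_h = 24 + 11 / 60   # assumed free-running period of the clock
correction_gain = 0.5               # assumed fraction of the error removed by daily light

phase_error_h = 0.0                 # how far the internal clock is from local time
for day in range(30):
    phase_error_h += intrinsic_period_h - 24.0   # drift accumulated over one day
    phase_error_h *= (1 - correction_gain)       # light cue pulls the clock back
print(f"steady-state error ≈ {phase_error_h:.2f} h")  # settles near 0.18 h instead of growing
```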

The more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

This is only true if the conflict avoidance is innate and is not instead a form of reciprocal altruism.

Reciprocal altruism is an ESS where pure altruism is not because you cannot take advantage of it in this way; if you become belligerent, then everyone else turns on you and you lose. Thus, it is never to your advantage to become belligerent.
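A toy iterated prisoner's dilemma calculation of that point, using standard textbook payoffs (the numbers are mine, not from the comment): defecting against reciprocators buys one good round and then permanent punishment.

```python
# Iterated prisoner's dilemma with standard payoffs: T=5 (exploit), R=3 (mutual
# cooperation), P=1 (mutual defection), S=0 (exploited). A reciprocator
# (tit-for-tat) cooperates first, then copies its partner's previous move.
T, R, P, S = 5, 3, 1, 0
rounds = 20

# A belligerent agent facing a reciprocator: exploits once, then is punished forever.
belligerent_payoff = T + (rounds - 1) * P

# A reciprocator facing another reciprocator: mutual cooperation every round.
reciprocator_payoff = rounds * R

print(belligerent_payoff, reciprocator_payoff)  # 24 vs 60: belligerence loses
```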

2Vaniver
Agreed. The word 'avoid' and the group selection-y argument made me think it was a good idea to raise that objection and make sure we were discussing reciprocal pacifists, not pure pacifists.

Opportunistic seizure of capital is to be expected in a war fought for any purpose.

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital. Cruise missiles and drones are excellent for winning without any risk at all, but they're not good for actually keeping the capital you are trying to take intact.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" ca... (read more)

0Lumifer
The best way to win a war is to have an overwhelming advantage. That sort of situation is much better described by the word "lopsided". Asymmetric warfare is something different. Example: Iraqi invasion of Kuwait.

Spying can capture technology, but technology is not the same thing as capital. Neither subversion nor purchasing are "means of capturing capital" at all. Subversion destroys capital and purchases are exchanges of assets.

That's an unusual idea of the West. It looks to me like it was custom-made to fit your thesis. Can you provide a definition? One sufficiently precise to be able to allocate countries like Poland, Israel, Chile, British Virgin Islands, Estonia, etc. to either "West" or "not-West".

You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.

Yes, we can be clever and think of humans as green goo - the ultimate in green goo, really. That isn't what we're talking about and you know it - yes, intelligent life can spread out everywhere, that isn't what we're worried about. We're worried about unintelligent things wiping out intelligent things.

The great oxygenation event is not actually an example of a green goo type scenario, though it is an interesting thing to consider - I'm not sure if th... (read more)

7private_messaging
The gray goo is predicated on the sort of thinking common in bad scifi. Basically, in scifi the nanotech self replicators which eat everything in their path are created in one step. As opposed to a realistic depiction of technological progress, where the first nanotech replicators have to sit in a batch of special nutrients and be microwaved, or otherwise provided energy, while being kept perfectly sterile (to keep bacteria from eating your nanotech). Then it'd get gradually improved in a great many steps and find many uses ranging from cancer cures to dishwashers, with corresponding development in goo control methods. You don't want your dishwasher goo eating your bread.

The levels of metabolic efficiency and sheer universality required for the gray goo to be able to eat everything in its path (and that's stuff which hasn't gotten eaten naturally) require a multitude of breakthroughs on top of an incredibly advanced nanotechnology and nano-manufacturing capacity within artificial environments.

How does such an advanced civilization fight the gray goo? I can't know what would be the best method, but a goo equivalent of a bacteriophage is going to be a lot, lot less complicated than the goo itself (as the goo has to be able to metabolize a variety of foods efficiently).

That's a pretty weak argument due to the mediocrity principle and the sheer scale of the universe; while we certainly don't know the values for all parts of the Drake Equation, we have a pretty good idea, at this point, that Earth-like planets are probably pretty common, and given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

Most likely, the Great Filter lies somewhere in the latter half of the equation - complex, multicellular life, intelligent life, civilization, or the rapid dest... (read more)
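For reference, a sketch of the Drake equation itself; the parameter values below are purely illustrative placeholders, not estimates from the comment.

```python
# The Drake equation, N = R* · fp · ne · fl · fi · fc · L, with purely
# illustrative parameter values -- none of these numbers come from the comment.
R_star = 2.0    # star formation rate (stars/year)
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # habitable planets per such star
f_l    = 0.9    # fraction on which life arises (the "abiogenesis is easy" guess)
f_i    = 0.01   # fraction of those that develop intelligence
f_c    = 0.1    # fraction that become detectable civilizations
L      = 1000   # years a civilization remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # ~0.8 civilizations with these guesses; the "latter half" terms dominate
```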

0ChristianKl
We don't even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.

Why? I don't see any reason why that should be the case. If you take, for example, the posts that internet forum users write: most of the time, most users who write posts only write one post.

After reading through all of the comments, I think I may have failed to address your central point here.

Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is I'm not sure that this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the problem is that utility probably includes all life NOT being extinguished. Or, in other words, this isn't necessarily a meaningful calculation if we assume ... (read more)

Incidentally, regarding some other things in here:

[quote]They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.[/quote]

There's actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thusly, an area which avoids conflict has an enormous advantage over one that doesn't because it... (read more)

0Lumifer
Capital is also easier to capture than it is to create. Your argument looks like saying that it's better to avoid wars than to lose them. Well, yeah. But what about winning wars? In which meaning are you using the word "never"? :-D
0Vaniver
Incidentally, you can blockquote paragraphs by putting > in front of them, and you can find other help by clicking the "Show Help" button to the bottom right of the text box. (I have no clue why it's all the way over there; it makes it way less visible.) But, the more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

Apparently I don't know how to use this system properly.

[This comment is no longer endorsed by its author]

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversit... (read more)

0ChristianKl
That's a bad argument. We don't know for sure that intelligent life has arisen. The fact that we don't see events like that can simply mean that we are the first.
5more_wrong
The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around due to release of deadly deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios this wouldn't have happened. We have had mini goo scenarios when for example microbiota pretty well adapted to one species made the jump to another and oops, started reproducing rapidly and killing off their new host species rapidly, e.g. Yersinia pestis.

Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently ..., ...other than Homo sapiens, which is actually a pretty good example of a grey goo - think of the species as a crude mesoscale universal assembler, which is spreading pretty fast and killing off other species at a good clip and chewing up resources quite rapidly... ... doesn't mean it couldn't happen at the microscale also. Ask the anaerobes if you can find them; they are hiding pretty well still after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Precambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic type scenarios where a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution. Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single celled critters that could rebuild their population from a
0TitaniumDragon
After reading through all of the comments, I think I may have failed to address your central point here. Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is I'm not sure that this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the problem is that utility probably includes all life NOT being extinguished. Or, in other words, this isn't necessarily a meaningful calculation if we assume that the alternative makes it more likely that universal annihilation will occur.

Say the Nazis gain an excessive amount of power. What happens then? Well, there's the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than one which isn't. And - let's face it - chances are the Nazis will try and develop nuclear weapons, too, so at best you only bought a few years. And if the wrong people develop them first, you're in a lot of trouble.

So the fact of the matter is that the risk is going to be taken regardless, which further diminishes the loss of utility you could expect from universal annihilation - sooner or later, someone is going to do it, and if it isn't you, then it will be someone else who gains whatever benefits there are from it. The higher utility situation likely decreases the future odds of universal annihilation, meaning that, in other words, it is entirely rational to take that risk simply because the odds of destroying the world NOW are less than the odds of the world being destroyed further on down the line by someone else if you don't make this decision, especially if you can be reasonably certain someone else is going to try it out anyway. And given the odds are incredibly low, it is a lot less meaningful of a choice to begin with.
0TitaniumDragon
Incidentally, regarding some other things in here:

[quote]They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.[/quote]

There's actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thusly, an area which avoids conflict has an enormous advantage over one that doesn't because it maintains more of its capital. As capital becomes increasingly important, conflict - at least, violent, capital-destroying conflict - becomes massively less beneficial to the perpetrator of said conflict, doubly so when they actually also likely benefit from the capital contained in other nations as well due to trade.

And that's ignoring the fact that we've already sort of engineered a global scenario where "The West" (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attack each other, and slowly make everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their capital, and each others' capital, benefit from each others' capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don't, they work to corrupt you to make you more like them. Indeed, even places like China are slowly being corrupted to be more like the West. The more that sort of thing happens, the less likely violent conflict becomes because it is simply less beneficial, and indeed, there is even some evidence to suggest we are being selected for docility - in "the West" we've seen crime rates and homicide rates decline for 20+ years now.

As a final, random aside: My favorite thing about the Trinity

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know what the odds are of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has even come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even manage to end human civilization (though it would be decidedly unpleasant and hundreds of millions of people would die).

Some people thought ... (read more)

0Vaniver
Here is a contemporary paper discussing the risk, which doesn't seem to come up with the 3e-6 number, and here are some of Hamming's reflections. An excerpt from the second link: Compton claims (in an interview with Pearl Buck I cannot easily find online) that 3e-6 was actually the decision criterion (if it was higher than that, they were going to shut down the project as more dangerous than the Nazis), and the estimate came in at lower, and so they went ahead with the project. In modern reactors, they try to come up with a failure probability by putting distributions on unknown variables during potential events, simulating those events, and then figuring out what portion of the joint input distribution will lead to a catastrophic failure. One could do the same with unknown parameters like the cross-section of nitrogen at various temperatures; "this is what we think it could be, and we only need to be worried if it's over here."
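A minimal sketch of the Monte Carlo procedure Vaniver describes; the model, distribution, and threshold below are wholly invented toys, not the actual reactor or nitrogen-ignition calculations.

```python
import numpy as np

# Monte Carlo sketch of the approach described above: put a distribution on an
# unknown physical parameter, draw from it many times, and report the fraction
# of draws that lands in the catastrophic region. All numbers here are made up.
rng = np.random.default_rng(0)

n_draws = 1_000_000
# Hypothetical uncertain parameter (say, a reaction cross-section), log-normally
# distributed around a best estimate of 1.0 in arbitrary units.
cross_section = rng.lognormal(mean=0.0, sigma=0.5, size=n_draws)

catastrophic_threshold = 4.0               # hypothetical "runaway" region
p_failure = np.mean(cross_section > catastrophic_threshold)
print(f"estimated failure probability: {p_failure:.2e}")
```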
0TitaniumDragon
Apparently I don't know how to use this system properly.
2TitaniumDragon
Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring as well, given the necessity of reshaping whatever material into the self-replicating material, which would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is just vastly beyond what any sort of planetary civilization is likely capable of doing. And seeing as supernovas don't destroy everything, it is vastly unlikely that whatever WE do will do the same. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, mere planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed. And yet, we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws

While we are, in the end, meat machines, we are adaptive meat machines, and one of the major advantages of intelligence is the ability to adapt to your environment - which is to say, doing more than executing preexisting adaptations but being able to generate new ones on the fly.

So while adaptation-execution is important, the very fact that we are capable of resisting adaptation-execution means that we are more than adaptation-executors. Indeed, most higher animals are capable of learning, and many are capable of at least basic problem solving.

There is pre... (read more)

I will note that this is one of the fundamental failings of utilitarianism, the "mere addition" paradox. Basically, take a billion people who are miserable, and one million people who are very happy. If you "add up" the happiness of the billion people, they are "happier" on the whole than the million people; therefore, the billion are a better solution to use of natural resources.

The problem is that it always assumes some incorrect things:

1) It assumes all people are equal
2) It assumes that happiness is transitive
3) It assum... (read more)
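A toy version of the aggregation step being objected to, with utility numbers invented for illustration:

```python
# Toy version of the "mere addition" aggregation: total utilitarianism adds
# utilities, so a vast barely-happy population can outscore a small very-happy
# one even though every individual in it is worse off. Numbers are made up.
miserable_population, miserable_utility = 1_000_000_000, 0.1
happy_population,     happy_utility     = 1_000_000,     9.0

total_miserable = miserable_population * miserable_utility   # 1.0e8
total_happy     = happy_population * happy_utility           # 9.0e6

print(total_miserable > total_happy)      # True: summing favors the miserable billion
print(miserable_utility > happy_utility)  # False: every individual is far worse off
```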

I think you're wrong about an important point here, actually, which is that not all things are as exciting as other things. Not all things are equally exciting.

Riding a dragon is actually way cooler than hang gliding for any number of reasons. Riding animals is cool in and of itself, but riding a dragon is actually flying, rather than hang gliding, which is "falling with style". You get the benefits of hang-gliding - you can see the landscape, for instance - but you have something which natively can fly beneath you. You need to worry less about c... (read more)

1Kartikey Patel
I agree with your basis of reasoning to a significant extent. You're right in saying that certain aspects of limitless fantasy are way cooler than the limited reality, which often needs specified conditions. The convenience of a portable wand or, say, a whistle to call out for your pet dragon is always a better and preferable choice than having to go to a designated place, fulfil all the hectic conditions, and then finally get to hang glide for a moment, provided we have proper supervisors all the time. This is not really the freedom to navigate through uncharted places like it is on the dorsum of one's own dragon friend.

By the way, a benchmark I've found useful in discussing factual matters or matters with a long pre-existing literature is number of citations and hyperlinks per comment. You're still batting a zero.

So that means your comment is worthless, and thus can be safely ignored, given your only "citations" do not support your position in any way and are merely meant to insult me?

In any case, citations are mostly unimportant. I use google and find various articles to support my stances; you can do the same to support yours, but I don't go REF Fahy et. al. &qu... (read more)

6JoshuaZ
I'm sorry, what is the intended content here? Because you can write a hymn that parodies strong AI claims that therefore we need to take them less seriously? Many people are not in favor of discussing the basilisk not because of the issue with a potential AI, but because of the danger that mentally vulnerable people will be disturbed by the notion. But in any event, you are pattern matching in an unhelpful way. The fact that something resembles something done by religions doesn't make it intrinsically wrong. Note for example, that large amounts of computer programming and maintenance look heavily ritualistic if you don't know what it is. So these are all conclusions, not arguments. And speaking as someone who agrees with you on a lot of this stuff, you are being both highly irrational and unnecessarily insulting in how you lay out these claims. What other routes are you comparing it to? You mention a few methods of life-extension, but none are methods likely to add by themselves more than a few centuries at most. Don't confuse not having a certain goal set with disagreeing with you about what will most likely accomplish that goal set.
8Desrtopa
I'll note here that the average estimated chance among regulars here for cryonics working is actually lower than that, and the difference in how seriously people on Less Wrong tend to take cryonics compared to the general population has less to do with thinking it's much more likely than most people do, and more to do with thinking that a chance of revival on that order is worth taking seriously.
0wedrifid
You have just declared yourself ignorant of what cryonics is intended to do and screened off whatever value your opinion may otherwise have had. I invite you to leave and find another place where the style of thought is more in accord with that of your own contributions. (I strongly oppose any attempts to make lesswrong more like that.)

I understood Dunning-Kruger quite well. Dunning-Kruger suggests that, barring outside influence, people will believe themselves to be of above-average ability. Incompetent people will greatly overestimate their capability and understanding, and the ability to judge talent in others is proportional to ability in the skill itself - in other words, people who are incompetent are not only incompetent, but also incapable of judging competence in other people.

Competent people, conversely, overestimate the competence of the incompetent; however, they do have the... (read more)

8Kawoomba
You are comparing the current state of the art (freezing mammals and rabbits) with what may or may not be theoretically possible, potentially centuries down the line. How long a rabbit survived upon being revived using current methods is beside the point - how long rabbits (and humans) can possibly survive when revived a long time into the future would be more relevant. Potentially no survival would be necessary at all, if the informational state was uploaded to a different hardware substrate. Not postulating magic, just not postulating anything which would contradict our current understanding of the laws of physics - and even that is more of a lower bound.

Concerning the technological feasibility, all we can say is that we can't say one way or the other how closely a reconstituted / scanned brain would resemble the original person. There is little indication that a high-fidelity reconstruction is in principle impossible. And a supposed impossibility cannot be established by looking at how long rabbits survive using current methods; molecular biology in its more theoretical variants would be more relevant. So, the jury's still out for the "technological viability in the future" part.

The "would any agent (group of agents) get to the point where it (they) could revive us, and if so, would it (they) want to, and if so, would we want to be revived that way" are different questions. Let's not muddle the issues. Few cryonicists expect to be revived if Earth is rendered uninhabitable during World Wars III to X. Or if the facility in which they were stored went bankrupt, and the cadavers thrown out. Or if the facility were destroyed in some natural disaster (building on tectonic fault lines is a dumb long-term plan). Also, few cryonicists would want to be revived by some uncaring alien civilization stumbling upon our remains, and reanimating us to test the pain endurance of 21st century human specimens. Maybe for whatever reasons resources would be scarce, and reviva

Humans aren't dinosaurs, nor can you put them on your mantlepiece as a conversation piece. They are not property, but living, independent persons.

2wedrifid
Speak for yourself. (People have at times kept humans for similar purposes and there is no reason why future intelligent agents could not do so.) That is either a false dichotomy or a No True Scotsman equivocation on 'property'.
3drethelin
That only makes them insanely more valuable for reality tv

You can't make an educated guess that a combination of multiple factors is no greater than the sum of their individual effects, and indeed, when you're talking about disease states, this is the OPPOSITE of what you should assume. The harm done to your body taxes its ability to deal with harm; the more harm you apply to it, whatever the source, the worse things get. Your body only has so much ability to fight off bad things happening to it, so if you add two bad things on top of each other, you're actually likely to see harm which is worse than the sum of t... (read more)

0Desrtopa
Well, there's certainly no shortage of evidence that it's unhealthy for children to be malnourished, so that amounts to defying one true correlation in favor of the possibility of another. Supposing that there were a causative relation between red meat consumption and mortality, with a low effect size, under what circumstances would you be persuaded to believe in it?

[quote]1) If we e.g. make an AI literally assign a probability 0 on scenarios that are too unlikely, then it wouldn't be able to update on additional evidence based on the simple Bayesian formula. So an actual Matrix Lord wouldn't be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.[/quote]

Neither of those feats is even particularly impressive, though. Humans can make it snow indoors, and likewise an apparent reversal in gravity can be achieved via numerous routes, ranging from inverting the ro... (read more)

-1ArisKatsaris
You're being too verbose, which makes me personally find discussion with you rather tiring, and you're not addressing the actual points I'm making. Let me try to ask some more specific questions:

1) Below which point do you want us treating a prior probability as effectively 0, one which should never be updated upwards no matter what evidence? E.g. one in a billion? One in a trillion? What's the exact point, and can you justify it to me?

2) Why do you keep talking about things not being "causally linked", since all of the examples of Pascal's mugging given above do describe causal links? It's not as if I said anything weird about acausal trade or some such; every example I gave describes normal causal links.

Humans don't tend to explicitly assign probabilities at all. Actually, since people rarely bother to claim that things exist when they actually do (e.g. nobody is going around claiming "tables exist", "the sun exists"), such people claiming that souls exist are probably minor evidence against their existence.

Dunning-Kruger and experience with similar religious movements suggests otherwise.

It takes someone who really thinks about most things very little time to come up with very obvious objections to most religious doctrine, and given the overall resemblance of cryonics to religion (belief in future resurrection, belief that donating to the church/cryonics institution will bring tangible rewards to yourself and others in the future, belief in eternal life) it's not really invalid to suggest something like that.

Which is more likely - that people are deluding them... (read more)

6Paul Crowley
gwern's interpretation of what I wrote here is entirely correct.
9gwern
You obviously have not actually read the Dunning-Kruger paper and understood what it showed.

Name three. Like V_V, I suspect that for all that you glibly allude to 'cults' you have no personal experience and you have not acquainted yourself with even a surface summary of the literature, much like you have not bothered to so much as read a cryonics FAQ or book before thinking you have refuted it. And it takes even less time to notice that there are long thorough answers to the obvious objections.

Your point here is true, but says far more about you than religion or cryonics; after all, many true things like heliocentrism or evolution have superficial easily thought-of objections which have been addressed in depth. Sometimes they work, sometimes they don't; the argument from evil is probably the single most obvious argument against Western religions, there are countless replies from theists of various levels of sophistication, and while I don't think any of them actually work, I also don't think someone going 'My mother died! God doesn't exist!' is contributing anything whatsoever. What, you think the theists somehow failed to notice that bad things happen? Of course they did notice, so if you want to argue against the existence of God, read up on their response.

If you had spent less time being arrogant, it might have occurred to you that I see this sort of flip reaction all the time, in which people learn of cryonics and in five seconds think they've come up with the perfect objection and refuse to spend any time at all to disconfirm their objection. You are acting exactly like the person who said, "but it's not profitable to revive crypatients! QED you're all idiots and suckers", when literally the first paragraph of the Wikipedia article on ALCOR implies how they attempt to resolve this issue; here's a link to the discussion: http://lesswrong.com/lw/gh5/cryo_and_social_obligations/8d43 Notice how you are acting exactly like cheapviagra. You've come up with an

You're thinking about this too hard.

There are, in fact, three solutions, and two of them are fairly obvious ones.

1) We have observed 0 such things in existence. Ergo, when someone comes up to me and says that they are someone who will torture people I have no way of ever knowing existing unless I give them $5, I can simply assign them the probability of 0 that they are telling the truth. Seeing as the vast, vast majority of things I have observed 0 of do not exist, and we can construct an infinite number of things, assigning a probability of 0 to any part... (read more)

4ArisKatsaris
The problem is with formalizing solutions, and making them consistent with other aspects that one would want an AI system to have (e.g. ability to update on the evidence). Your suggested three solutions don't work in this respect because:

1) If we e.g. make an AI literally assign a probability 0 on scenarios that are too unlikely, then it wouldn't be able to update on additional evidence based on the simple Bayesian formula. So an actual Matrix Lord wouldn't be able to convince the AI he/she was a Matrix Lord even if he/she reversed gravity, or made it snow indoors, etc.

2) The assumption that a person's words provide literally zero evidence one way or another seems again something you axiomatically assume rather than something that arises naturally. Is it really zero? Not just effectively zero where human discernment is concerned, but literally zero? Not even 0.000000000000000000000001% evidence towards either direction? That would seem highly coincidental. How do you ensure an AI would treat such words as zero evidence?

3) We would hopefully want the AI to care about things it can't currently directly observe, or it wouldn't care at all about the future (which it likewise can't currently directly observe).

The issue isn't helping human beings not fall prey to Pascal's Mugging -- they usually don't. The issue is to figure out a way to program a solution, or (even better) to see that a solution arises naturally from other aspects of our system.
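A two-function illustration of point 1: under Bayes' rule the posterior is proportional to prior times likelihood, so a literal zero prior can never move, no matter how strong the evidence. The specific numbers are arbitrary.

```python
# Why a literal zero prior is irreversible under Bayes' rule: the posterior is
# proportional to prior * likelihood, so once the prior is exactly 0, no amount
# of evidence (however extreme the likelihood ratio) can ever raise it.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

print(posterior(1e-12, 0.99, 1e-15))  # tiny-but-nonzero prior: updates strongly (~0.999)
print(posterior(0.0,   0.99, 1e-15))  # zero prior: stays exactly 0.0 forever
```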

The problem is that the choice to eat differently itself is potentially a confounding factor (people who pick particular diets may not be like people who do not do so in very important ways), and any time you have to deal with, say, 10 factors, and try to smooth them out, you have to question whether any signal you find is even meaningful at all, especially when it is relatively small.

The study in particular notes:

[quote]Men and women in the top categories of red or processed meat intake in general consumed fewer fruits and vegetables than those with low i... (read more)

2Desrtopa
Well, if you already know how much each of the associated factors contributes alone via other tests where you were able to isolate those variables, you can make an educated guess that their combined effect is no greater than the sum of their individual effects. The presence of other studies that didn't show the same significant results weighs against it, but on the other hand such cases are certainly not unheard of with respect to associations that turn out to be real. The Cochrane Collaboration's logo comes from a forest plot of results for whether an injection of corticosteroids reduces the chance of early death in premature birth. Five out of seven studies failed to achieve statistical significance, but when their evidence was taken together, it achieved very high significance, and further research since suggests a reduction of mortality rate between 30-50%.

While a study of the sort linked above certainly doesn't establish the truth of its findings with the confidence of its statistical significance, "never believe studies like this" doesn't leave you safe from a treatment-of-evidence standpoint, because even in the case of a real association, the data are frequently going to be messy enough that you'd be hard pressed to locate it statistically. You don't want to set your bar for evidence so high that, in the event that the association were real, you couldn't be persuaded to believe in it.
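A sketch of the kind of pooling behind that forest-plot example: fixed-effect (inverse-variance) meta-analysis with invented numbers, not the actual Cochrane data, showing several individually non-significant studies combining into a clearly significant result.

```python
import math

# Fixed-effect (inverse-variance) meta-analysis with invented numbers: five
# studies, each estimating the same effect too noisily to reach p < 0.05 on
# its own, pool into a much more precise and clearly significant estimate.
effects = [0.30, 0.25, 0.35, 0.28, 0.32]   # per-study effect estimates (made up)
ses     = [0.18, 0.19, 0.20, 0.18, 0.17]   # per-study standard errors (made up)

weights = [1 / se**2 for se in ses]
pooled_effect = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for e, se in zip(effects, ses):
    print(f"single study z = {e / se:.2f}")           # all below 1.96 (not significant)
print(f"pooled z = {pooled_effect / pooled_se:.2f}")  # well above 1.96
```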

People like my mother (who occasionally go to the casino with $40 in their pocket, betting it all in 5-cent slot machines a nickel at a time, then taking back whatever she gets back) go to the casino in order to have fun/relax, and playing casino games is an enjoyable pastime to them. Thus while they lose money, they acknowledge that it is more likely than not that this will happen, and are not distressed when they leave with less money than they enter with because their goal was to enjoy themselves, not to end up with more money - getting more money is ju... (read more)

7private_messaging
The only people who I know that do this, do it due to beliefs about their own capability of pre-cognition, not due to incoherent beliefs. There was a non-mistaken variation where someone reverse engineered the PRNG in the slot machines to win.
7Eugine_Nier
That doesn't strike me as money pumping so much as having false beliefs.

I will note that I went through the mental exercise of cars in a much simpler (and I would say better) way: I took the number of cars in the US (300 million was my guess for this, which is actually fairly close to the actual figure of 254 million claimed by the same article that you referenced) and guessed about how long cars typically ended up lasting before they went away (my estimate range was 10-30 years on average). To have 300 million cars, that would suggest that we would have to purchase new cars at a sufficiently high rate to maintain that number ... (read more)
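The steady-state arithmetic behind that estimate, using the comment's own figures (fleet size and lifetime range as guessed above):

```python
# Steady-state fleet arithmetic: to keep a fleet of ~300 million cars, yearly
# sales must roughly equal fleet size divided by average vehicle lifetime.
fleet_size = 300_000_000
for lifetime_years in (10, 20, 30):     # the comment's guessed range of lifetimes
    print(lifetime_years, fleet_size // lifetime_years)
# 10 -> 30,000,000 ; 20 -> 15,000,000 ; 30 -> 10,000,000 new cars per year
```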

Uh, yeah. The reason for that is that sickly animals carry parasites. It is logical that we wouldn't want to eat parasite-ridden or diseased animals, because then WE get the parasites. If the animal is not parasite-ridden, there's no good reason to believe it would be unhealthy to eat.

My personal suspicion for the cause is underlying SES factors (wealthy people tend to eat better, fresher food than the poor) as well as the simple issue of dietary selection - people who watch what they eat are also more likely to exercise and generally have healthier habits than those who are willing to eat anything.

2Desrtopa
There might be some factors which the study is failing to control for, but from the link in the grandparent, the study seems to control for the more obvious associated factors. Also, the full text states that the consumption of red meat is associated with an increase in mortality when controlling for the confounders assessed in their study, with processed meat being associated with a greater increase, but poultry not being associated with an increase in mortality.

I have never actually seen any sort of cogent response to this issue. Ever. I see it being brushed aside constantly, along with the magical brain restoration technology necessary for this, but I've never actually seen someone go into why, exactly, anyone would bother to thaw them out and revive them, even if it WAS possible to do. They are, for all intents and purposes, dead, from a legal, moral, and ethical standpoint. Not only that, but defrosting them has little actual practical benefit - while there is obvious value to the possible cryopreservation... (read more)

0drethelin
We spend millions of dollars digging up dinosaurs. People get really excited when we find things like Troy. Look at all the antique stores that are around. Why WOULDN'T people get revived?

There's a lot of good reasons to believe that cyronics is highly infeasible. I agree that P(B|A) is low, and P(D|A,B,C) is also absurdly low. We don't care about starving people in Africa today; what is the likelihood that we care about dead frozen people in the future, especially if we have to spend millions of dollars resurrecting them (as is more than likely the case), especially given how difficult it would be to do so? And that's assuming we can even fix whatever caused the problem in the first place; if they die of brain cancer, how are we supposed t... (read more)

0pianoforte611
"what is the likelihood that we care about dead frozen people in the future?" I wondered that as well when I first heard about cryonics. It is true that society in general won't care about frozen people in the future. But that isn't necessary for cryonics to work. Rather its enough that cryonics organizations care about frozen people. Why would they care? Because the people running the organization have a vested interested in making their clients live. Among other reasons, the people running the organization might one day be clients as well, so they have to care about the success of the project.
7gwern
This is a standard criticism people come up with after 5 seconds of thought, and a perfect example of http://lesswrong.com/lw/h8n/litany_of_a_bright_dilettante/ Do you really think that no one in cryonics has ever thought - 'wait a second! why would anyone in the future even bother putting in the work?' - and you have successfully exposed a fatal ~70-year-old blind spot in a comment written in a few seconds?

You assume that economists are actually experts on the economy. They aren't. That's the problem.

Economics only really has a good understanding of very low level effects, and even there things are very difficult to truly deal with. The law of supply and demand, for instance, is really less of a law and more of a guideline - the only way to actually determine real world behavior is experimentation, as there is no single equation you can plug things into to get a result out of. And that's something SIMPLE. Ask them how to fix the economy? They have no abili... (read more)

What matters is not knowledge but probability. Is it likely that something as complicated as our Universe would be simulated?

Is it likely that they would simulate something with vastly different rules than their own universe with such a high level of complexity?

It is possible that the Universe is a simulation, but it is highly improbable due to the difficulty and complexity inherent to doing so. Creating something of this level of complexity for non-simulation purposes is unlikely.

It is of course impossible to disprove it absolutely, but it doesn't really ... (read more)

2Decius
Reference Dwarf Fortress.

There's a Freefall comic where the captain says to end all virtual reality simulations and someone else covers his eyes, making him scream. Can't find it offhand though.

We have no way to even measure intelligence, let alone determine how close to capacity we're at. We could be 90% there, or 1%, and we have no way, presently, of distinguishing between the two.

We are the smartest creatures ever to have lived on the planet Earth as far as we can tell, and given that we have seen no signs of extraterrestrial civilization, we could very well be the most intelligent creatures in the galaxy for all we know.

As for shoving out humans, isn't the simplest solution to that simply growing them in artificial wombs?

1Desrtopa
We already have a simpler solution than that, namely the Cesarean section. It hasn't been a safe option long enough to have had a significant impact as an evolutionary force though. Plus, there hasn't been a lot of evolutionary pressure for increased intelligence since the advent of agriculture. We might be the most intelligent creatures in the galaxy, but that's a very different matter from being near the most intelligent things that could be constructed out of a comparable amount of matter. Natural selection isn't that great a process for optimizing intelligence, it's backpedaled on hominids before given the right niche to fill, so while we don't have a process for measuring how close we are to the ceiling, I think the reasonable prior on our being close to it is pretty low.
2MugaSofer
Relevant link is relevant: The Finale of the Ultimate Meta Mega Crossover

It won't be any smarter at all actually, it will just have more relative time.

Basically, if you take someone, and give them 100 days to do something, they will have 100 times as much time to do it as they would if it takes 1 day, but if it is beyond their capabilities, then it will remain beyond their capabilities, and running at 100x speed is only helpful for projects for which mental time is the major factor - if you have to run experiments and wait for results, all you're really doing is decreasing the lag time between experiments, and even then only po... (read more)

2Desrtopa
In many fields of technology, we see sigmoid curves, where initial advancements lead to accelerating returns until it becomes difficult to move further ahead without running up against hard problems or fundamental limits, and returns diminish. Making an artificial intelligence as capable as a human intelligence may be difficult, but that doesn't mean that if we reach that point, we'll be facing major barriers to further progression. I would say we don't have much evidence to suggest humans are even near the ceiling of what's strictly possible with a purely biological intelligence; we've had very little opportunity for further biological development since the point when cultural developments started accounting for most of our environmental viability, plus we face engineering challenges such as only being able to shove so large a cranium through a bipedal pelvis.

Is it? Or do we simply not call some such organizations terrorist organizations out of politeness?

I suppose one could argue that the proper definition is "A non-state entity who commits criminal acts for the purpose of invoking terror to coerce actions from others", which will capture almost all groups that we consider to be terrorist groups, though it really depends - is a group who creates fear about the food supply for their own ends a terrorist group? I would argue yes (though one could also argue that this is equivalent to crying fire in a crowded theater, and thus a criminal act).

3TimS
Legal nitpick - the issue is falsely shouting fire
2jefftk
Arguing about definitions isn't very useful. Discussion is much more likely to go in a positive direction if you point to specific actions and describe why they're harmful.

I think they assume that intending to kill someone is ALWAYS malicious in the US, regardless of your personal convictions on the matter. But yes, you are correct that you could be charged with murder without actual malice on your part (not that it is really inappropriate - the fact that you're being dumb doesn't excuse you for your crime).

By the US definitions, assisted suicide is potentially murder due to your intent to kill, unless your state has an exception, though it is more likely to be voluntary manslaughter. Involuntary euthanasia is a whole different kettle of fish, though.

Are the blacks ever going to give up the right to being selected over whites, now that they have the majority of votes in the country? Or is it just going to be a permanent bias?

I think we all know the answer to this in our heart of hearts. They will always claim that they need it to combat bias against them, and because they "deserve" it because their parents/grandparents/whatever were disadvantaged.

As time goes on, the whites will feel that they are being punished for things that their parents or grandparents did, and will grow bitter and racis... (read more)

1CCC
That is a potential failure mode, yes. People are aware of it. But one does not refuse to manufacture a car on the basis that it might, one day, hit a pedestrian. It is not evil yet. The current affirmative action policies are supposed to be temporary. If they are not, there is an additional line of defense against that failure mode; the policies state that the judicial services (for example) should 'broadly reflect the demographics of the country'. That means that any group that gets unfairly discriminated against has a claim.

The people who believe that they are grown-ups who can eyeball their data and claim results which fly in the face of statistical rigor are almost invariably the people who are unable to do so. I have seen this time and again, and Dunning-Kruger suggests the same - the least able are very likely to do this based on the idea that they are better able to do it than most, whereas the most able people will look at it and then try to figure out why they're wrong, and consider redoing the study if they feel that there might be a hidden effect which their present ... (read more)

0buybuydandavis
The problem we started with was that "statistical rigor" is generally not rigorous. Those employing it don't know what it would mean under the assumptions of the test, and fewer still know that the assumptions make little sense.

A terrorist is someone who uses terror in order to coerce a reaction out of people.

PETA's propaganda's purpose is to horrify people into not eating meat.

PETA's funding and relationship with ALF has the purpose of terrorizing scientists, agribusiness, and other groups that they want to cause harm to by threatening to or actually destroying research, burning down buildings, destroying crops, freeing animals, etc. They give people who have engaged in such activities leadership positions, portray it as a reasonable response, give them money, recruit members fo... (read more)

3wedrifid
The set of people who do that is not the same as the set to which 'terrorist' applies. In fact, it isn't even a superset. There is no rule. Just a general tendency to think less of the contributions of sloppy thinking. In fact political advocacy of this kind is somewhat discouraged in general due to the near inevitable nature of such conversations.

If the Flying Spaghetti Monster is running the simulation, it is non-falsifiable, but also not worth considering because he can just stick his noodley appendage in and undo any results he doesn't like anyway retroactively. It's not like we would know the difference.

For us to break the fourth wall, either our creators would have to desire it or be pretty bad at running simulations.

2John_D
"For us to break the fourth wall, either our creators would have to desire it or be pretty bad at running simulations." There is also the possibility that, assuming we live in a simulation, the creators simply do not care or mind whether they are discovered or not. This would also leave open the possibility of breaking the fourth wall.
4PrawnOfFate
...like the ones in The Matrix, who allowed its denizens to evolve to knowledge of virtual reality, rather than keeping them at a medieval level, or using the bio-energy of sheep... But I am more interested in the epistemic mistake people keep making. People keep assuming that it is possible for simulatees to know what is going on outside the simulation (or, equivalently, that basic laws must be the same).

It is extremely unlikely that the Universe is a simulation for a wide variety of reasons, foremost amongst them being expense. The level of simulation present in the Universe is sufficiently high that the only purpose of it would BE simulation, meaning that our physical laws would necessarily be quite close to the laws of whatever universe overlies us. However, this implies that building an Earth simulator with the level of fine-grained reality present here would be insanely expensive.

Ergo, it is highly unlikely that we are in a simulation because the amount of matter-energy necessary to generate said simulation is far in excess of any possible benefit for doing so.

2Decius
How could you possibly know what kinds of things are expensive in a universe that can simulate ours?

[1] The level of simulation present in the Universe is sufficiently high that the only purpose of it would BE simulation, meaning that [2] our physical laws would necessarily be quite close to the laws of whatever universe overlies us.

[2] does not follow from [1]. The REAL real world might be sufficiently more complex than ours and it could be running thousands of simulations for a variety of reasons. I'm really not sure why you think that our level of simulation or physical laws are as complex as it gets, but this is not a valid argument.

For a quick example of what I mean, I would like you to think about us full-on simulating a 2-dimensions (+time) environment.

7PrawnOfFate
Unless our simulators are fooling us about the expense of computational power.
0MugaSofer
Well, somebody missed the point of this post...
1CCC
Yes, as the article pointed out. No. Not necessarily. It can be applied in non-evil ways; this is not always the case, but it is possible.

Let me consider the case of South Africa, a country that is close to my heart (mainly because I happen to live in it). For some significant time, the country was under a set of laws described, as a whole, under the name of Apartheid. These laws were racist, and evil, and punished the black majority for things that they did not do. In 1994 (yes, only nineteen years ago) this regime was decisively ended. And the black majority gained the vote, making it highly unlikely that it would return. Affirmative action policies were then promptly initiated.

Now, this was not a case of 'blood guilt' - this was a case of redressing wrongs that had continued right up until 1994. It wasn't a case of 'punish the descendants' - it was aimed directly at the very same generation who had reaped the benefits of Apartheid themselves. (It's also the rare case of affirmative action designed to benefit a majority instead of a minority).

Incidentally, the policies in question do state that if there is only one properly qualified person, then that person should be hired (but you should look very hard to make sure and keep the documentation handy). If you have a choice of qualified applicants, then you get to pick. It's not without its problems, but the core idea is not evil. Of course, this does not apply to all societies.
3wedrifid
Intending malice is not strictly a requirement. Killing someone because you (for example) believe that it will save their immortal soul is not malicious. Euthanasia is also still punished as murder in many jurisdictions even when it is done with the intent of mercy, not malice.

PETA is without question a terrorist organization. They act as a front for recruitment for terrorist groups such as ALF, they give money to such groups, they support their activities, they send out large amounts of propaganda, they have significant overlap in membership with said terrorist groups and put terrorists in leadership roles... the list goes on. They DO, in fact, go around burning stuff down, commit arson, and I know on at least one of those occasions left their dog locked up in their truck on a hot day while they were out "liberating' rabb... (read more)

2wedrifid
It is possible (and preferable) to give reasons why you don't approve of PETA and why you like eating animals without inflationary use of the term 'terrorist'.

95% is an arbitrarily chosen number which is a rule of thumb. Very frequently you will see people doing further investigation into things where p>0.10, or if they simply feel like there was something interesting worth monitoring. This is, of course, a major cause of publication bias, but it is not unreasonable or irrational behavior.

If the effect is really so minor it is going to be extremely difficult to measure in the first place, especially if there is background noise.

0Decius
It's not a rule of thumb; it's used as the primary factor in making policy decisions incorrectly. In this specific example, the regulatory agency made the statement "There is no evidence that artificial colorings are linked to hyperactivity" based on data linking artificial colorings to hyperactivity with p ~ .13. There are many other cases in medicine where 0.05 < p < .5 is used as evidence against the proposition being tested.
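A small simulation of that point, with an effect size and sample size made up for illustration: when a real effect is small, most individual studies fail to clear p < 0.05, so a p around 0.13 is weak evidence rather than evidence of absence.

```python
import numpy as np
from scipy import stats

# With a genuinely small effect and a modest sample, most studies miss p < 0.05;
# the effect size, sample size, and noise level below are all made up.
rng = np.random.default_rng(0)
true_effect, n_per_group, n_studies = 0.2, 50, 10_000

hits = 0
for _ in range(n_studies):
    control   = rng.normal(0.0,         1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    hits += p < 0.05

print(f"fraction of studies reaching p < 0.05: {hits / n_studies:.2f}")  # roughly 0.17
```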

Fanfiction inherently limits the number of people who will ever look at it; an independent work stands on its own merits, but a fanfiction stands on both its own merits and the merits of the continuity to which it is attached. Write the best fanfic ever about Harry Potter, and most people still will never read it because your audience is restricted to Harry Potter fans who read fanfiction - a doubly restricted group.

While it is undeniable that it can act to promote your material, you are forever constrained in audience size by the above factors, as well as the composition of said audience by said people who consume fanfiction of fandom X.

1MugaSofer
Woah, I never thought of it like that before. We should be writing crossovers!

There's quite a number of HPMOR readers who've never read HP. Admittedly this may be a special case, and it's not HPMOR's original intended optimal use-case either (reading Philosopher's Stone first is a good idea if you can).

I agree that fanfic has a lower ceiling than original work. But it isn't necessarily better to raise my ceiling than to raise my average.

8Vaniver
The second factor is much more important for most authors for most stories. I read a lot of fanfiction by people whose original works I never would have found, because their original works aren't stored in a fanfiction repository. It's like how you could go to DeviantArt and look at people's original works, but you're much more likely to come across drawings they've done of things you're both fans of. Worrying that you are forever constrained in audience size seems odd; most people never read most stories. The question is how many you can get to read it, and when.

Write an original work, and unless you are both very lucky and very good, the number of people who see it is more or less zero.

If you write an original work, then I am very sorry, but I probably will not read it. There is a barrier to diving into a new world, a trivial inconvenience, but nonetheless a cost too high for the expected return, which by Sturgeon's Law is near zero. On the other hand, in fanfiction I already know the world, and that makes it easier to jump in.

Yes, for fanfiction there is an upper bound to the readership numbers, but in practice,... (read more)

6jefftk
"Vegetarianism isn't really rational in the first place" You're throwing around "rational" in an unhelpful way. The question at issue is: does going vegetarian make the world better? There's a good argument that the number of animals living on factory farms is proportional to the number of meat eaters, so if you value the suffering of animals then decreasing the number of people who eat meat should be helpful. "probably having a net negative impact by encouraging terrorist groups like PETA." PETA may sometimes get classified as terrorists but they're clearly not what most people will think of when they hear "terrorist"; they don't go around blowing things up. This is the noncentral fallacy. While I think PETA isn't going about things well, if you value the suffering of animals I have a hard time seeing how "maybe encouraging PETA a bit" can outweigh many fewer animals living on factory farms.

This is why you never eyeball data. Humans are terrible at understanding randomness. This is why statistical analysis is so important.

Something that is at 84% is not at 95%, which is a low level of confidence to begin with - it is a nice rule of thumb, but really if you're doing studies like this you want to crank it up even further to deal with problems with publication bias. Publish regardless of whether you find an effect or not, and encourage others to do the same.

Publication bias (positive results are much more likely to be reported than negative res... (read more)

0buybuydandavis
Some are more terrible than others. A little bit of learning is a dangerous thing. Grown ups eyeball their data and know the limits of standard hypothesis testing. Yeah, evidence that the FDA doesn't accept doesn't exist.