
Comment author: more_wrong 02 June 2014 01:09:40PM 0 points [-]

Moreover, we know of examples where natural selection has caused drastic decreases in organismal complexity – for example, canine venereal sarcoma, which today is an infectious cancer, but was once a dog.

Or human selection. Henrietta Lacks (or her cancer) is now many tonnes of cultured cells; she has become an organism that reproduces by mitosis and thrives in the niche environment of medical research labs.

Comment author: more_wrong 02 June 2014 08:49:15AM 0 points [-]

I love the idea of an intelligence explosion but I think you have hit on a very strong point here:

In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

In fact, we can see from both history and paleontology that when a new breakthrough was made in "biological technology" - the homeobox gene, or whatever triggered the Precambrian explosion of diversity - self-modification got easier. (Here a 'self' isn't one meat body; it's a clade of genes that sail through time and configuration space together - think of a current of bloodlines in spacetime that we might call a "species" or genus or family. The development of modern-style morphogenesis was, at some level, like developing a toolkit for modifying the body plan.) And whenever self-modification got easier, there was apparently an explosion of explorers - bloodlines - into the newly accessible areas of design space.

But the explosion eventually ended. After the Diaspora into over a hundred phyla of critters hard enough to leave fossils, the expansion into new phyla stopped. Some sort of frontier was reached within tens of millions of years, and the next six hundred million years or so were spent slowly whittling improvements within phyla. Most phyla died out, in fact, while a few like Arthropoda took over many roles and niches.

We see very similar incidents throughout human history - look at the way languages develop, or technologies. For an example perhaps familiar to many readers, look at the history of algorithms. For thousands of years we see slow development in this field, from Babylonian algorithms for finding the area of a triangle, to the Sieve of Eratosthenes, to - after a lot of development - medieval Italian merchants writing down how to do double-entry bookkeeping.

Then in the later part of the Renaissance there is some kind of phase change, and the mathematical community begins compiling books of algorithms quite consciously. This had happened before - in Sumer and Egypt to start, in Babylon and Greece, in Asia several times, and most notably in the House of Wisdom in Baghdad in the ninth century. But there were always these rising and falling cycles where people compile knowledge, then it is lost, and others have to rebuild; often the new cycle is helped by the rediscovery or re-appreciation of a few surviving texts from a prior cycle.

But around 1350 there begins a new cycle (which of course draws on surviving data from prior cycles) in which people accumulate formally expressed algorithms - a cycle unique in that it has lasted to this day. Much of what we call the mathematical literature consists of these collections, and in the 1930s people (Church, Turing, many others) finally develop what we might now call the classical theory of algorithms. Judging by the progress of various other disciplines, you would expect little further progress in this field, relative to such a capstone achievement, for a long time.

(One might note that this seven-century surge of progress might well be due, not to human mathematicians somehow becoming more intelligent in some biological way, but to the development of printing and the associated arts and customs that led to the widespread dissemination of information in the form of journals and books with many copies of each edition. The custom of open-sourcing your potentially extremely valuable algorithms was probably as important as the technology of printing here; remember that medieval and ancient bankers and the like all had little trade secrets for handling numbers and doing maths in a formulaic way, but we retain none of their secret tricks in the general body of algorithmic lore unless they published, or chance preserved some record of their methods.)

Now, we'd have expected Turing's 1930s work to be the high point in this field for centuries to come (and maybe it was; let history be the judge). But between the development of the /theory/ of a general computing machine, progress in other fields such as electronics, and a leg up from the intellectual legacy left by predecessors such as George Boole, the 1940s somehow put together (under enormous pressure of circumstances) a new sort of engine that could run algorithmic calculations without direct human intervention. (Note that here I say 'run', not 'design' - I mean that the new engines could execute algorithms on demand.)

The new computing engines, electro-mechanical as well as purely electronic, were very fast compared to their human predecessors. This led to something in algorithm space that looks to me a lot like the Precambrian explosion, with many wonderful critters like LISP and FORTRAN and BASIC evolving to bridge the gap between human minds and assembly language, which was itself a bridge to the level of machine instructions, which... and so on. Layers and layers developed, and then in the 1960s giants wrought mighty texts of computer science that no modern professor can match; we can only stare in awe at their achievements, in some sense.

And then... although Moore's law worked on and on tirelessly, relatively little fundamental progress in computer science happened over the next forty years. There was a huge explosion in available computing power, but just as jpaulson suspects, merely adding computing power didn't cause a vast change in our ability to 'do computer science'. Some problems may /just be exponentially hard/, and an exponential increase in capability starts to look like a 'linear increase' by 'the important measure'.
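To make that last point concrete with a toy calculation (a minimal sketch; the cost model and numbers below are my own assumptions, not jpaulson's or anyone else's): if a problem of size n costs on the order of 2^n steps, then a compute budget of C steps only reaches n = log2(C), so compute that doubles every eighteen months buys just one more unit of n per doubling.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(floor);

    # Toy model (assumed, purely illustrative): a problem of size n costs
    # 2**n steps, so a budget of C steps solves at most n = floor(log2(C)).
    # Doubling C every 1.5 years adds only +1 to n per doubling.
    my $base_compute = 1e9;    # arbitrary starting budget, in steps
    for my $year (0, 3, 6, 9, 12, 15) {
        my $c = $base_compute * 2 ** ($year / 1.5);
        my $n = floor(log($c) / log(2));
        printf "year %2d: %.2e steps of compute -> largest solvable n = %d\n",
               $year, $c, $n;
    }

Exponential growth in the budget, a straight line on the measure that matters.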

It may well be that people will just ... adapt... to exponentially increasing intellectual capacity by dismissing the 'easy' problems as unimportant and thinking of things that are going on beyond the capacity of the human mind to grasp as "nonexistent" or "also unimportant". Right now, computers are executing many many algorithms too complex for any one human mind to follow - and maybe too tedious for any but the most dedicated humans to follow, even in teams - and we still don't think they are 'intelligent'. If we can't recognize an intelligence explosion when we see one under our noses, it is entirely possible we won't even /notice/ the Singularity when it comes.

If it comes - as jpaulson indicates, there might be a never ending series of 'tiers' where we think "Oh past here it's just clear sailing up to the level of the Infinite Mind of Omega, we'll be there soon!" but when we actually get to the next tier, we might always see that there is a new kind of problem that is hyperexponentially difficult to solve before we can ascend further.

If it was all that easy, I would expect that whatever gave us self-reproducing wet nanomachines four billion years ago would have solved it - the ocean has been full of protists and free-swimming viruses, exchanging genetic instructions and evolving freely, for a very long time. This system certainly has a great deal of raw computing power, perhaps even more than would appear on the surface. If she (the living ocean system as a whole) isn't wiser than the average individual human, I would be very surprised; and she apparently either couldn't create such a runaway explosion of intelligence, or decided it would be unwise to do so any faster than the intelligence explosion we've been watching unfold around us.

Comment author: Eliezer_Yudkowsky 26 September 2013 05:15:30PM 2 points [-]

Configurations like that may have amplitudes so small that stray flows of amplitude from larger worlds dominate their neighboring configurations, preventing any computation from taking place.

Even if such worlds do 'exist', whether I believe in magic within them is unimportant, since they are so tiny; and also there is no reason to privilege that hypothesis as something to react to, since the real reason we are discussing that world is someone else choosing to single it out for discussion.

Comment author: more_wrong 31 May 2014 02:41:27AM 0 points [-]

Even if such worlds do 'exist', whether I believe in magic within them is unimportant, since they are so tiny;

Since there is a good deal of literature indicating that our own world has a surprisingly tiny probability (ref: any introduction to the Anthropic Principle), I try not to dismiss the fate of such "fringe worlds" as completely unimportant.

army1987's argument above seems very good, though; I suggest you look at his comment very seriously.

Comment author: AlanCrowe 29 November 2010 12:25:50PM 8 points [-]

This reminds me of the classic industrial accident involving a large, pressurised storage tank. There is a man-sized door to allow access for maintenance and a pressure gauge. The maintenance man is supposed to wait for the pressure to fall to zero before he undoes the heavy steel latches. It is a big tank and he gets bored with waiting for the pressure to vent. The gauge says one pound per square inch. One pound doesn't sound like much so the man undoes the latches. Since the force is per square inch it is several hundred times larger than expected. The heavy door flies open irresistibly and kills the man.
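To make the force-per-area arithmetic concrete, here is a minimal sketch; the hatch dimensions are my own assumption, not part of the story:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Assume (purely for illustration) an 18-by-36-inch access hatch.
    # Total force on the door is pressure times area, so a "mere" 1 psi
    # becomes hundreds of pounds pushing on the latches.
    my $psi    = 1;                  # gauge reading, pounds per square inch
    my $width  = 18;                 # assumed hatch width, inches
    my $height = 36;                 # assumed hatch height, inches
    my $area   = $width * $height;   # 648 square inches
    my $force  = $psi * $area;       # pounds-force on the whole door
    printf "Total force on the hatch: about %.0f lbf\n", $force;

One pound per square inch reads as harmless; six hundred-odd pounds on the door does not.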

I'm not seeing how the parable helps one be less wrong in real life. In the parable the victim has seen a dog taken by the dragon. If the maintenance man had seen an apprentice crushed in an earlier similar accident the experience would scar him mentally and he would always be wary of pressure vessels. I'm worrying that the parable is cruder than the problems we face in real life.

I don't know more than I've already said about pressure vessel accidents. Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death? Is it a matter of incentives; the manager gets a bonus if he encourages the maintenance team to work quickly, but doesn't go to jail when cutting corners leads to a fatal accident? Is it a matter of education; the maintenance man just didn't get pressure? Is it a matter of labeling; why not label the gauge by the door with the force per door area? Is it a matter of class; the safety officer is middle class, the maintenance man is working class, the working class distrust the middle class and don't much believe what they say?

Comment author: more_wrong 31 May 2014 01:36:58AM 3 points [-]

Is there an underlying problem of crying wolf; too many warning messages obscure the ones that are really matters of life and death?

This is certainly an enormous problem for interface design in general for many systems where there is some element of danger. The classic "needle tipping into the red" is an old and brilliant solution for some kinds of gauges - an analogue meter where you can see the reading tipping toward a brightly marked "danger zone", usually with a 'safe' zone and an intermediate zone also marked, has surely prevented many accidents. If the pressure gauge on the door had such a meter where green meant "safe to open hatches" and red meant "terribly dangerous", that might have been a better design than just raw numbers.
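A minimal sketch of that idea in code (the thresholds here are mine and purely illustrative; a real vessel's limits would come from its engineering spec): map the raw reading to a zone before showing it to the operator.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative thresholds only -- not real safety limits.
    sub zone_for_pressure {
        my ($psi) = @_;
        return 'GREEN: safe to open hatches'        if $psi <= 0.05;
        return 'YELLOW: wait for venting to finish' if $psi <= 0.5;
        return 'RED: terribly dangerous';
    }

    printf "%5.2f psi -> %s\n", $_, zone_for_pressure($_) for (0.01, 0.30, 1.00);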

I haven't worked with pressure doors but I have worked with large vacuum systems, cryogenic systems, labs with lasers that could blind you or x-ray machines that can be dangerously intense, and so on. I can attest that the designers of physics lab equipment do indeed put a good deal of thought and effort into various displays that indicate when the equipment is in a dangerous state.

However, when there are /many/ things that can go dangerously wrong, it becomes very difficult to avoid cluttering the sensorium of the operator with various warnings. The classic examples are the control panels for vehicles like airplanes or space ships; you can see a beautiful illustration of the 'indicator clutter problem' in the movie "Airplane!".

Comment author: more_wrong 30 May 2014 11:08:38PM 0 points [-]

"There is an object one foot across in the asteroid belt composed entirely of chocolate cake."

This is a lovely example, which sounds quite delicious. It reminds me strongly of the famous example of Russell's Teapot (from his 1952 essay "Is There a God?"). Are you familiar with his writing?

You'll just subconsciously avoid any Devil's arguments that make you genuinely nervous, and then congratulate yourself for doing your duty.

Yes, I have noticed that many of my favorite people, myself included, do seem to spend a lot of time on self-congratulation that they could be spending on reasoning or other pursuits. I wonder if you know anyone who is immune to this foible :)

Comment author: TitaniumDragon 27 May 2014 10:46:25PM *  1 point [-]

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life. There are absolutely enormous amounts of bacteria and viruses and fungi and everything else all around us, and given the enormous advantage which would be conferred by being a grey goo from an evolutionary standpoint, we would expect the entire planet to have already been covered in the stuff - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do this - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario from occurring as well, given the necessity of reshaping whatever material into the self-replicating material, which would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is just vastly beyond what any sort of planetary civilization is likely capable of producing. And seeing as supernovas don't destroy everything, it is vastly unlikely that whatever WE do will do the same. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, mere planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both of which are very powerful indeed. And yet, we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one obey realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely intelligent life has arisen elsewhere in the universe, and if such weapons could be built we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which reaches perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we have no idea how to create, and which may well describe situations that are not physically reachable - that is to say, the numbers may work, but there may be no way to get there, just as there's no particular reason going faster than c is impossible, yet you can't ever even REACH c, so the fact that the math allows a "safe space" on the other side is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate themselves across the galaxy - any sort of plague would die out on the planet it was created on, and even WITH FTL, it is still rather unlikely that you could easily spread something like that. Only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL travel, everyone else can flee it that much more easily.

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, and around which we throw a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". And that estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on any logic more sound than that of the guy on The Daily Show who said that it would either happen, or it wouldn't: a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary level annihilation scenarios are unlikely due to the sheer amount of energy involved. You're looking at biocides and large rocks being diverted from their orbits to hit planets, neither of which are really trivial things to do.

It is basically a case of Sci Fi Writers Have No Sense Of Scale (http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale), except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox) - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea whatsoever which side of the Great Filter we are on. That being said, if practical FTL travel exists, I would expect that to pretty much ensure that any civilization which invented it would likely simply never die because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet in having additional planets colonized - information is the only thing you could expect to really trade over interstellar distances, and even that is questionable, given that locals will likely try to develop technology locally and beat you to market, so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit - games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even there, that assumes that they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for a multiplanetary civilization is, I think, a good thing; if we hit the point where we had 4-5 extrasolar colonies, I think it would be pretty solid evidence in favor of being beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think it is likely that our main risk of destroying ourselves lasts only until the point where we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claimed numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary-scale at best. And our lack of any sort of plausible scenario even for that hurts that argument too. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, but it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Comment author: more_wrong 28 May 2014 03:59:55AM 3 points [-]

"People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life" ... "nothing CAN do this, because nothing HAS done it."

The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around, due to the release of deadly, deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios, this wouldn't have happened. We have had mini goo scenarios when, for example, microbiota pretty well adapted to one species made the jump to another and oops, started reproducing rapidly and killing off their new host species - e.g. Yersinia pestis. Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently - other than Homo sapiens, which is actually a pretty good example of a grey goo: think of the species as a crude mesoscale universal assembler, spreading pretty fast, killing off other species at a good clip, and chewing up resources quite rapidly - doesn't mean it couldn't happen at the microscale also. Ask the anaerobes, if you can find them; they are still hiding pretty well after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kinds of goo before the Precambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic-type scenarios where a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.

Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War Of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve, maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.

Or we eukaryotes are the stupid runaway "wet" technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don't see disasters like forests and cities dissolving in swarms of Andromeda-Strain like universal gobblers is that safeguards against that were either engineered in, or outlawed, long ago. Or, more conventionally, evolved.

What we /do/ think we know about the history of life is that the Earth evolved single-celled life, or inherited it via panspermia etc., within about half a billion years of the Earth's coalescence; then some combination of goo more or less ruled the roost on the Earth's surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one event that looked like a goo apocalypse and remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions than we know of.

Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. This era is less than a sixth of the span of life on Earth. So... five sixths goo-dominated world, one sixth non-goo-dominated world, is the short history here. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.

I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots - I am an idiot, and it's the kind of mistake I would make, and I'm demonstrably above average by many measures of intelligence. It's just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)

Comment author: nshepperd 28 May 2014 02:12:24AM 1 point [-]

You appear to have posted this as a reply to the wrong comment. Also, you need to indent code 4 spaces and escape underscores in text mode with a \_.

On the topic, I don't mind if you post tirades against people posting false information (I personally flipped the bozo bit on private_messaging a long time ago). But you should probably keep it short. A few paragraphs would be more effective than two pages. And there's no need for lengthy apologies.

Comment author: more_wrong 28 May 2014 03:23:55AM 0 points [-]

Yes, I am sorry for the mistakes, not sure if I can rectify them. I see now about protecting special characters, I will try to comply.

I am sorry, I have some impairments and it is hard to make everything come out right.

Thank you for your help

Comment author: [deleted] 22 May 2014 12:18:56PM 0 points [-]

Once you throw away this whole 'can and will try absolutely anything' and enter the domain of practical software, you'll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of "uncontrollable" (but easy to describe) AI is that it is too slow by a ridiculous factor.

Once you enter the domain of practical software you've entered the domain of Narrow AI, where the algorithm designer has not merely specified a goal but a method as well, thus getting us out of dangerous territory entirely.

Comment author: more_wrong 27 May 2014 10:37:54PM 2 points [-]

On rereading this I feel I should vote myself down if I knew how; it seems a little over the top.

Let me post about my emotional state, since this is a rationality discussion, and if we can't deconstruct our emotional impulses and understand them, we are pretty doomed to remaining irrational.

I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation; I am very sensitive to this type of bullying, more so when it is directed at others than at myself (due to freakish test scores and so on as a child, I feel fairly secure about my own intellectual abilities), but I know how bad people feel when others consider them stupid. My reaction is to leap to the defense of the victim; however, I put this down to a local custom of friendly ribbing or something like that, and tried not to jump on it.

Then I saw that private_messaging seemed to be pretending to be an authority on Monte Carlo methods while spreading false information about them, either out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathy reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. "If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by their rules!" And that led to the result you see, a savage mocking.

I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been in flagrant violation of rationalist ethics - in particular, presenting himself, if not as an expert, at least as someone with the moral authority to diss someone else for their ignorance of an important topic, and then following up with false and misleading information about MC methods. This seemed like an action with strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.

If I posed as an authority and went around telling people that Bayesian inference was a bad methodology, basically just "a lot of random guesses", and that "even a very stupid evolutionary program" would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I did something like that I would actually hope for chastisement or correction from the community, to help me learn better.

Also, it seemed like it might make readers think badly of those who rely heavily on Monte Carlo methods: "Oh, those idiots, using those stupid methods, why don't they switch to evolutionary algorithms?" I'm not a big MC user myself, but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.

So I went off a little heavily on private_messaging, who I am sure is a good person at heart.

Now, I acted emotionally there, but my hope is that in the big Searle's Room that constitutes our forum, I managed to pass along a message that (through no virtue of my own) might ultimately improve the course of our discourse.

I apologize to anyone who got emotionally hurt by my tirade.

Comment author: private_messaging 22 May 2014 04:09:42AM *  -1 points [-]

Do you even know what "monte carlo" means? It means it tries to build a predictor of environment by trying random programs. Even very stupid evolutionary methods do better.

Once you throw away this whole 'can and will try absolutely anything' and enter the domain of practical software, you'll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of "uncontrollable" (but easy to describe) AI is that it is too slow by a ridiculous factor.

Comment author: more_wrong 27 May 2014 05:48:27PM 0 points [-]

Private_messaging, can you explain why you open up with such a hostile question at eli? Why the implied insult? Is that the custom here? I am new, should I learn to do this?

For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:

Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.

A classic very very simple example is a program that approximates the value of 'pi' thusly:

Estimate pi by dropping $total_hits random points into a square with corners at -1,-1 and 1,1

(then count how many are inside radius one circle centered on origin)

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Loop here over as many runs as you like, with whatever sample sizes.
    for my $total_hits (1_000, 100_000, 10_000_000) {
        my $hits_inside_radius = 0;

        for (1 .. $total_hits) {
            my $x = 2 * rand() - 1;    # random point in the square, uniform in [-1, 1)
            my $y = 2 * rand() - 1;
            $hits_inside_radius++ if $x * $x + $y * $y <= 1.0;
        }

        # Fraction inside the unit circle ~= (circle area)/(square area) = pi/4
        my $pi_approx = 4 * $hits_inside_radius / $total_hits;
        printf "%10d points: pi ~= %.6f\n", $total_hits, $pi_approx;
    }


OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It's a very powerful methodology and very well known.

In what way is this little program an instance of throwing a lot of random programs at the problem of approximating 'pi'? What would your very stupid evolutionary program to solve this problem more efficiently be? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program to estimate pi to six digits accurately, that you write. Eli and Eliezer can judge the race, how is that?

I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.

If this is not pleasant for you I will desist; I assume it is some sort of ritual you enjoy, consensual on Eli's part and, by inference, on yours - that you are either enjoying this public humiliation masochistically, or that you are hoping people will give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy and so on. If I have violated your consent, then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby eater and human ways are strange to me.

OK. Now some serious advice:

If you find that you have just typed "Do you even know what X is?" and then given a little condescending mini-lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post, in case I'm having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to HERE was done before the check. (Off half-recalled stuff from grad school a quarter century ago...)

OK, Wikipedia's article is much better than mine. But I don't need to change anything, so I won't.

P.S. It's ok to look like an idiot in public - it's a core skill of rationalists to be able to tolerate this sort of embarrassment - but another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo methods now? Would you like to say something nice about them here?

P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you two are old pals with a joking relationship, no apology is needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?

P.P.P.S. I am curious: is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here; this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community - would you enjoy that?

In response to You Only Live Twice
Comment author: Thomas_Nowa 12 December 2008 10:06:14PM 2 points [-]

The use of the financial argument against cryonics is absurd.

Even if the probability of being revived is sub-1%, it is worth every penny since the consequence is immortality (or at least another chance at life). If you don't sign up, your probability of revival is 0% (barring a "The Light of Other Days" scenario) and the consequence is death - for eternity.

By running a simple risk analysis, the choice is obvious.

The only scenario where a financial argument makes sense is if you're shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.

Comment author: more_wrong 27 May 2014 03:40:49AM 5 points [-]

The only scenario where a financial argument makes sense is if you're shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.

What if I am facing death and have an estate in the low six figures, and I can afford one cryonic journey to the future, or my grandchildren's education plus, say, charitable donations enough to save 100 young children who might otherwise live well into a lovely post-Singularity world that would include life extension, uploading, and so on? Would that be covered under "can't afford it"? If my personal survival is just not that high a priority to me (compared to what seem to me much better uses of my limited funds) does that mean I'm ipso facto irrational in your book, so my argument 'doesn't make sense'?

I do think cryonics is a very interesting technology for saving the data stored in biological human bodies that might otherwise be lost to history, but that investing in a micro-bank or The Heifer Project might have greater marginal utility in terms of getting more human minds and their contents "over the hump" into the post-singularity world many of us hope for. I just don't see why the fact that it's /me/ matters.

What if the choice is "use my legacy cash to cryopreserve a few humans chosen at random" versus "donate the same money to help preserve a whole village's worth of young people in danger, who can reasonably be expected to live past the Singularity if they can get past the gauntlet of childhood diseases" (the Bill Gates approach) versus "preserve a lovely sampling of as many endangered species as seems feasible"? I would argue that any of these scenarios would make sense.

Also, I think that people relying on cryo would do well to lifeblog as much as possible; continuous video footage from inside the home and some vigorous diary-type writing or recording might be a huge help in reconstructing a personality, in addition to the inevitably fuzzy measurements of the exact positions of microtubules in frozen neurons and the like. It would at least give future builders of human emulations a baseline against which to check how good their emulations were. Is this a well-known strategy? I cannot recall seeing it discussed, but it seems obvious.
