Comment author: [deleted] 22 May 2014 12:18:56PM 0 points [-]

Once you throw away this whole 'can and will try absolutely anything' and enter the domain of practical software, you'll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of "uncontrollable" (but easy to describe) AI is that it is too slow by a ridiculous factor.

Once you enter the domain of practical software you've entered the domain of Narrow AI, where the algorithm designer has not merely specified a goal but a method as well, thus getting us out of dangerous territory entirely.

Comment author: more_wrong 27 May 2014 10:37:54PM 2 points [-]

On rereading this I feel I would vote myself down if I knew how; it seems a little over the top.

Let me post about my emotional state, since this is a rationality discussion; if we can't deconstruct our emotional impulses and understand them, we are pretty much doomed to remain irrational.

I got quite emotional when I saw a post that seemed like intellectual bullying followed by self-congratulation. I am very sensitive to this type of bullying, more so when directed at others than at myself: thanks to freakish test scores and so on as a child, I feel fairly secure about my own intellectual abilities, but I know how bad people feel when others consider them stupid. My instinct is to leap to the defense of the victim; however, I put this down to a local custom of friendly ribbing or something like it and tried not to jump on it.

Then I saw that private_messaging seemed to be pretending to be an authority on Monte Carlo methods while spreading false information about them, either out of ignorance (very likely) or malice. Normally ignorance would have elicited a sympathetic reaction from me and a very gentle explanation of the mistake, but in the context of having just seen private_messaging attack eli_sennesh for his supposed ignorance of Monte Carlo methods, I flew into a sort of berserker sardonic mode, i.e. "If private_messaging thinks that people who post about Monte Carlo methods while not knowing what they are should be mocked in public, I am happy to play by those rules!" And that led to the result you see, a savage mocking.

I do not regret doing it, because the comment with the attack on eli_sennesh and the calumnies against Monte Carlo still seems to me to have been in flagrant violation of rationalist ethics: presenting himself as, if not an expert, at least someone with the moral authority to diss someone else for their ignorance of an important topic, and then following that up with false and misleading information about MC methods. This seemed like an action with strongly negative utility to the community, because it could potentially lead many readers to ignore the extremely useful Monte Carlo methodology.

If I posed as an authority and went around telling people Bayesian inference was a bad methodology that was basically just "a lot of random guesses", and that "even a very stupid evolutionary program" would do better at assessing probabilities, should I be allowed to get away scot-free? I think not. If I did something like that I would actually hope for chastisement or correction from the community, to help me learn better.

Also it seemed like it might make readers think badly of those who rely heavily on Monte Carlo Methods. "Oh those idiots, using those stupid methods, why don't they switch to evolutionary algorithms". I'm not a big MC user but I have many friends who are, and all of them seem like nice, intelligent, rational individuals.

So I went off a little heavily on private_messaging, who I am sure is a good person at heart.

Now, I acted emotionally there, but my hope is that in the big Searle's Room that constitutes our forum, I managed to pass a message that (through no virtue of my own) might ultimately improve the course of our discourse.

I apologize to anyone who got emotionally hurt by my tirade.

Comment author: private_messaging 22 May 2014 04:09:42AM *  -1 points [-]

Do you even know what "monte carlo" means? It means it tries to build a predictor of environment by trying random programs. Even very stupid evolutionary methods do better.

Once you throw away this whole 'can and will try absolutely anything' and enter the domain of practical software, you'll also enter the domain where the programmer is specifying what the AI thinks about and how. The immediate practical problem of "uncontrollable" (but easy to describe) AI is that it is too slow by a ridiculous factor.

Comment author: more_wrong 27 May 2014 05:48:27PM 0 points [-]

Private_messaging, can you explain why you open up with such a hostile question at eli? Why the implied insult? Is that the custom here? I am new, should I learn to do this?

For example, I could have opened with your same question, because Monte Carlo methods are very different from what you describe (I happened to be a mathematical physicist back in the day). Let me quote an actual definition:

Monte Carlo Method: A problem solving technique used to approximate the probability of certain outcomes by running multiple trial runs, called simulations, using random variables.

A classic very very simple example is a program that approximates the value of 'pi' thusly:

Estimate pi by dropping $total_hits random points into the square with corners at (-1,-1) and (1,1), then count how many land inside the circle of radius one centered on the origin (the fraction inside approximates pi/4):

use strict;
use warnings;

# loop here over as many runs as you like
for my $total_hits (10_000, 1_000_000) {
    my $hits_inside_radius = 0;
    for (1 .. $total_hits) {
        my $x = 2 * rand() - 1;    # uniform in [-1, 1)
        my $y = 2 * rand() - 1;
        $hits_inside_radius++ if ($x * $x + $y * $y <= 1.0);
    }
    my $pi_approx = 4 * $hits_inside_radius / $total_hits;
    print "$total_hits points: pi is approximately $pi_approx\n";
}


OK, this is a nice toy Monte Carlo program for a specific problem. Real world applications typically have thousands of variables and explore things like strange attractors in high dimensional spaces, or particle physics models, or financial programs, etc. etc. It's a very powerful methodology and very well known.

In what way is this little program an instance of throwing a lot of random programs at the problem of approximating 'pi'? What would your "very stupid evolutionary program" that solves this problem more efficiently look like? I would bet you a million dollars to a thousand (if I had a million) that my program would win a race against a very stupid evolutionary program that you write, to estimate pi accurately to six digits. Eli and Eliezer can judge the race; how is that?
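(Back-of-the-envelope, the race would be long either way: the error of the naive hit-counting estimator shrinks only as 1/sqrt(n), so six accurate digits of pi needs on the order of 10^13 points. A quick sketch of that arithmetic, in Python for convenience:)

```python
import math

# A uniform point in the square lands inside the circle with
# probability p = pi/4, and pi_hat = 4 * (hits / n).
p = math.pi / 4.0

def std_error(n):
    """Standard error of the pi estimate after n random points."""
    return 4.0 * math.sqrt(p * (1.0 - p) / n)

# Points needed so one standard error falls below 5e-7,
# i.e. roughly six correct digits of pi:
target = 5e-7
n_needed = p * (1.0 - p) * (4.0 / target) ** 2
print(f"points needed: {n_needed:.2e}")  # on the order of 1e13
```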

I am sorry if you feel hurt by my making fun of your ignorance of Monte Carlo methods, but I am trying to get in the swing of the culture here and reflect your cultural norms by copying your mode of interaction with Eli, that is, bullying on the basis of presumed superior knowledge.

If this is not pleasant for you I will desist; I had assumed it was some sort of ritual, consensual on Eli's part and, by inference, yours, and that you were either enjoying this public humiliation masochistically or hoping people would give you aversive conditioning when you publicly display stupidity, ignorance, discourtesy, and so on. If I have violated your consent then I plead that I am from a future where this is considered acceptable when a person advertises that they do it to others. Also, I am a baby-eater and human ways are strange to me.

OK. Now some serious advice:

If you find that you have just typed "Do you even know what X is?" and then given a little condescending mini-lecture about X, please check that you yourself actually know what X is before you post. I am about to check Wikipedia before I post in case I'm having a brain cloud, and I promise that I will annotate any corrections I need to make after I check; everything up to HERE was done before the check. (Off half-recalled stuff from grad school a quarter century ago...)

OK, Wikipedia's article is much better than mine. But I don't need to change anything, so I won't.

P.S. It's OK to look like an idiot in public; it's a core skill of rationalists to be able to tolerate this sort of embarrassment, but another core skill is actually learning something if you find out that you were wrong. Did you go to Wikipedia or other sources? Do you know anything about Monte Carlo methods now? Would you like to say something nice about them here?

P.P.S. Would you like to say something nice about eli_sennesh, since he actually turns out to have had more accurate information than you did when you publicly insulted his state of knowledge? If you too are old pals with a joking relationship, no apology needed to him, but maybe an apology for lazily posting false information that could have misled naive readers with no knowledge of Monte Carlo methods?

P.P.P.S. I am curious, is the psychological pleasure of viciously putting someone else down as ignorant in front of their peers worth the presumed cost of misinforming your rationalist community about the nature of an important scientific and mathematical tool? I confess I feel a little pleasure in twisting the knife here, this is pretty new to me. Should I adopt your style of intellectual bullying as a matter of course? I could read all your posts and viciously hold up your mistakes to the community, would you enjoy that?

In response to You Only Live Twice
Comment author: Thomas_Nowa 12 December 2008 10:06:14PM 2 points [-]

The use of the financial argument against cryonics is absurd.

Even if the probability of being revived is sub-1%, it is worth every penny since the consequence is immortality (or at least another chance at life). If you don't sign up, your probability of revival is 0% (barring a "The Light of Other Days" scenario) and the consequence is death - for eternity.

By running a simple risk analysis, the choice is obvious.

The only scenario where a financial argument makes sense is if you're shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.

Comment author: more_wrong 27 May 2014 03:40:49AM 5 points [-]

The only scenario where a financial argument makes sense is if you're shortening your life by spending more than you can afford, or if spending money on cryonics prevents you from buying some future tech that would save your life.

What if I am facing death and have an estate in the low six figures, and I can afford one cryonic journey to the future, or my grandchildren's education plus, say, charitable donations enough to save 100 young children who might otherwise live well into a lovely post-Singularity world that would include life extension, uploading, and so on? Would that be covered under "can't afford it"? If my personal survival is just not that high a priority to me (compared to what seem to me much better uses of my limited funds) does that mean I'm ipso facto irrational in your book, so my argument 'doesn't make sense'?

I do think cryonics is a very interesting technology for saving the data stored in biological human bodies that might otherwise be lost to history, but that investing in a micro-bank or The Heifer Project might have greater marginal utility in terms of getting more human minds and their contents "over the hump" into the post-singularity world many of us hope for. I just don't see why the fact that it's /me/ matters.

What if the choice is "use my legacy cash to cryopreserve a few humans chosen at random" versus "donate the same money to help preserve a whole village worth of young people in danger, who can reasonably be expected to live past the Singularity if they can get past the gauntlet of childhood diseases" (the Bill Gates approach) versus "preserve a lovely sampling of as many endangered species as seems feasible"? I would argue that any of these choices would make sense.

Also, I think that people relying on cryo would do well to lifelog as much as possible; continuous video footage from inside the home and some vigorous diary-type writing or recording might be a huge help in reconstructing a personality, in addition to the inevitably fuzzy measurements of the exact positions of microtubules in frozen neurons and the like. It would at least give future builders of human emulations a baseline against which to check how good their emulations were. Is this a well known strategy? I cannot recall seeing it discussed, but it seems obvious.

In response to You Only Live Twice
Comment author: more_wrong 27 May 2014 03:18:12AM 0 points [-]

I think cryonics is very promising but the process of bringing people back from frozen state will need a lot of research and practice.

I would like to volunteer to go in as a research subject if someone else will pay and if any data mined from my remains is released as open source historical data under some reasonable license, for example the Perl Artistic License, with myself listed as the author of the raw recovered data. (I wrote it into my memories, no?)

People could then use the mined data, such as it is, for research on personality reconstruction or any other ethical purpose. I would be quite surprised to find my mind reconstructed with continuity of identity, and perhaps quite pleased, but that's not at all necessary; I believe the Universe will keep the reference copy, if any, of my key information in distributed form, so I'm happy to make myself available for practice material for future entities (more likely than not Friendly AI type people) who wish to practice on volunteers who are indifferent to any mistakes in the attempted reconstruction process.

I do think it would behoove the cryonics community to find volunteers such as myself willing to undergo this sort of experimentation. If I had the money to invest in freezing myself with an eye to later reconstruction, I would certainly think it a good investment to help pay the cryonics cost for a volunteer willing to be the practice dummy for aspiring future Revivalists.

Are any of the cryonics enthusiasts here aware of a call for volunteers from any cryonics institute or group? A cursory search did not lead me to anywhere to sign up for such a program.

This is a serious request and offer: I would be quite happy to be frozen and datamined, primarily for the benefit of future historians and scientists, but I would also be very pleased if I could in some way help the people who are hoping to be revived with intact minds someday.

I would request that any personality constructed or reconstructed from my data be offered control of a mercy switch that could turn off whatever process is emulating its consciousness.

In response to Circular Altruism
Comment author: more_wrong 27 May 2014 02:18:26AM -1 points [-]

It depends on the actual situation and my goal.

Imagine I were a ship captain assigned to rescue a viable sample of a culture from a zone that was about to be genocided. I would be very likely to take the 400 peopleweights (including books or whatever else they valued as much as people) of evacuees, unless someone made a convincing case that the extra 100 people were vital cultural or genetic carriers. For definiteness, imagine my ship is rated to carry up to 400 peopleweight worth of passengers in almost any weather, but 500 people would overload it to the point of sinking during a storm of the sort that the weather experts predict is 10 percent probable during the voyage to safe harbor.

People are not dollars or bales of cotton to be sold at market. You can't just count heads and multiply that number by utilons per head and say "This answer is best, any other answer is foolish."

Well obviously you can do that, but the main reward for doing so is the feeling that you are smarter than the poor dumb fools who believe that the world is complex and situation dependent. That is, you can give yourself a sort of warm fuzzy feeling of smug superiority by defeating the straw man you constructed as your foolish competitor in the Intelligence Sweepstakes.

That being said, if there really is no other information available, I would take the same choice Eliezer recommends; I just deny that it is the only non foolish choice.

This applies to lottery tickets as well. A slim chance at escaping economic hell might be worth more than its nominal expected return value to a given individual. 100 million dollars might very well have a personal utility over a billion times the value of one dollar for example, if that person's deep goals would be facilitated mightily by the big win and not at all by a single dollar or any reasonable number of dollars they might expect to save over the available time. Also, if any entertainment dollar is not a foolish waste, then a dollar spent on a lottery ticket is worth its expected winning value plus its entertainment value, which varies /profoundly/ from person to person.
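(To put toy numbers on that argument: suppose, purely hypothetically, a 1-in-200-million chance at a $100 million jackpot, and suppose the buyer's utility for the jackpot really is a billion times the utility of one dollar. The numbers below are made up only to illustrate the point.)

```python
# Toy expected-utility comparison for a $1 lottery ticket.
# All numbers are hypothetical, chosen only for illustration.
p_win = 1 / 200_000_000        # chance of winning the jackpot
jackpot = 100_000_000          # $100 million prize

u_dollar = 1.0                 # utility of keeping the $1
u_jackpot = 1e9 * u_dollar     # stipulated nonlinear utility of the jackpot

expected_monetary_value = p_win * jackpot   # $0.50 -- a "bad" bet in dollars
expected_utility = p_win * u_jackpot        # 5.0  -- exceeds u_dollar

print(expected_monetary_value, expected_utility)
```

So in dollar terms the ticket returns fifty cents on the dollar, yet under the stipulated utility function its expected utility is five times that of the dollar kept.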

I myself prefer to give people $1 lottery tickets instead of $2.95 witty birthday cards. Am I wise or foolish in this? But posts here have branded all lottery purchases as foolish, so I must be a fool. I bow to the collective wisdom here and admit that I am a fool. There is a lot of other evidence that supports this conclusion :)

if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

I heartily agree, that's one reason I try to avoid trotting out applause lights to trigger other people into giving me warm fuzzies.

I am happy for one person to be tortured for 50 years to stave off the dust specks, as long as that person is me. In fact, this pretty much sums up my career in software development: it is not my favorite thing to do, but I endured cubicle hell for many years, partly for money, but also because of my deep belief that solving annoying little bugs and glitches that might inconvenience many, many people was an activity important enough to override my personal preferences. I could easily have found other combinations of pay and fun that pleased me better, so I have actually been through this dilemma in muted form in real life, and chose to personally suffer to hold off 'specks' like poorly designed user interfaces.

I do have great admiration for Eliezer, but he claims to want to be more rational and to welcome criticism intended to promote his progress on The Way, so I thought it would be OK to be critical of this post, which irked me because paragraph four is a straw-man "fool" phrased in the second person, which reads like a sort of pre-emptive ad hominem against any reader of the post foolish enough to disagree with the premise of the writer. This seems like an extremely poor substitute for rational discourse, the sort of nonsense that could cost the writer Quirrell points, and none of us want that. I don't want to seem hostile, but since I am exactly the sort of fool who disagreed with that premise, I do feel like I was being flamed a bit, and since I am apparently made of straw, flames make me nervous :)

Comment author: Tesseract 29 December 2010 10:59:39AM 1 point [-]

Not to dispute your main point here (that emotionally-protected false beliefs discourage contact with reality), but do you really think that many religious practices were developed consciously and explicitly for the purpose of preventing contact with outside ideas? It seems to me that something like kosher law was more likely the combination of traditional practice and the desire to forge a sense of social identity than a structure explicitly designed to stop interactions. Group differences hinder interaction between groups, but that doesn't mean that the purpose of group differences is to do so.

I don't disagree with you on the point that religion often explicitly discourages contact with nonbelievers, either, but that seems to me to be more easily explained by honest belief than Dark Side practices. If you believe something is true (and important to know the truth of) but that someone can be easily persuaded otherwise by sophistic arguments, then it's reasonable to try to prevent them from hearing them. If someone believes in global warming but doesn't have a firm grasp on the science, then you shouldn't let them wander into a skeptics' convention if you value valid beliefs.

Comment author: more_wrong 26 May 2014 06:25:02PM 3 points [-]

It seems very likely to me that tribal groups in prehistory observed that eating some things leads to illness and sometimes death, while eating other things seems to lead to health or happiness or greater utility, and that some very clever people started compiling a system of eating rules that seemed to work. It became traditional to hand down rules for eating, and for other activities, to their children. Rules like "If a garment has a visible spot of mildew, either cut out the mildewed spot with a specified margin around it or discard the garment entirely; for god's sake don't store it with your other garments", or "Don't eat insects that you don't specifically recognize as safe and nutritious", or "Don't eat with unclean hands, for a certain technical definition of 'unclean'; for example, don't touch a rotting corpse and then stuff your face or deliver a baby with those hands", etc. etc.

Then much much later, some of the descendants of some of those tribes thought to write a bunch of this stuff down before it could be forgotten. They ascribed the origin of the rules to a character representing "The best collective wisdom we have available to us" and used about ten different names for that character, who was seen as a collection of information much like any person is, but the oldest and wisest known collection of information around.

Then when different branches of humanity ran into each other and found out that other branches had different rule sets, different authority figures, and different names for the same thing as well as differing meanings for the same names in many cases, hilarity ensued.

Then a group of very, very serious atheists came and said "We have the real truth, and our collective wisdom is much, much better than that of the ancient people who actually fought through fire and blood, death and disease and a shitstorm of suffering to hand us a lot of their distilled wisdom on a platter, so we could then take the cream of what they offered, throw away the rest, and make fun of their stupid superstitions while not acknowledging that they actually did extremely well for the conditions they experienced."

Religious minds did most of the heavy lifting to get rationality at least as far as Leibniz and Newton, both of whom were notably religious. I'm not saying that the religious mindset is correct or superior, but the development of rational thought among humans has been like a relay race carrying a torch for a million years, and then when the torch is at the finish line (when it gets passed on to nonhumans) a subset of the people who carried the torch for the last little bit doesn't need to say "Hah we are so much better than the people who fought and died under the banner of beliefs at variance with our own". This is a promulgation of what is /bad/ about religion, and I see a lot of it in this group. I love the group but would really like it even better if people showed a tiny bit of respect for the minds that fought through the eras of slavery and religious war and other evils, instead of proclaiming very loudly about how wonderful they are compared to everyone else.

I mean, you ARE wonderful, you are doing amazing things, but... come on.

Not that I am any better, here I am bashing you lovely people because your customs are at variance with my own - but that's what reading this group has taught me to do!

Comment author: more_wrong 26 May 2014 04:59:23PM 3 points [-]

I chose more_wrong as a name because I'm in disagreement with a lot of the lesswrong posters about what constitutes a reasonable model of the world. Presumably my opinions are more wrong than opinions that are lesswrong, hence the name :)

My rationalist origin story would have a series of watershed events but as far as I can tell, I never had any core beliefs to discard to become rational, because I never had any core beliefs at all. Do not have a use for them, never picked them up.

As far as identifying myself as an aspiring rationalist, the main events that come to mind would be: 1. Devouring as a child anything by Isaac Asimov that I could get my hands on. In case you are not familiar with the bulk of his work, most of it is scientific and historical exposition, not his more famous science fiction; see especially his essays for rationalist material.

  2. Working on questions in physics like "Why do we call two regions of spacetime close to each other?", that is, delving into foundational physics.

  3. Learning about epistemology and historiography from my parents, a mathematician and a historian.

  4. Thinking about the thinking process itself. Note: being afflicted with neurological and psychological conditions that shut down various parts of my mentality, notably severe intermittent aphasia, has given me a different perspective on the thinking process.

  5. Making some effort to learn about historical perspectives on what constitutes reason or rationality, and not assuming that the latest perspectives are necessarily the best.

I could go on but that might be enough for an intro.

My hope is to both learn how to reason more effectively and, if fortunate, to make a contribution to the discussion group that helps us learn the same as a community. mw
