Comment author: moshez 21 November 2012 06:24:07PM 1 point [-]

My initial reaction was "I wish I hadn't learned about this", because it made me physically shudder. After the shock and disgust, I forced myself to accept the proposition "there is a company selling bleach as medicine, and people are ingesting it". I am now glad I have seen this, because my model of the world is more accurate, and if I act on my values in accordance with more accurate beliefs, I will be able to do more good.

Comment author: Svante 21 January 2008 10:15:47PM 1 point [-]

As a full-blown Bayesian, I feel that the Bayesian approach is *almost* perfect. It was a revelation when I first realized that instead of carrying around the big frequentist toolbox of heuristics, one can simply treat every quantity involved as a random variable. Then everything is solved! But pretty quickly I ran into the catch, namely that to do anything concrete, the probability distributions must be parameterized. And then you start to wonder what the PDFs of the parameters should be, and off we go into infinite regress.

But the biggest catch is of course that the integral for the posterior is almost never solvable in closed form. If that weren't the case, I believe we would have had superhuman AI a long time ago. Still, I think Bayesian methods are underexploited in AI. For example, it is straightforward to make a "curious" system that asks the user about the things it is uncertain of, in a way that minimizes the need for human input (my lab is currently working on such a system for auditory testing).
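The "curious system" idea is what the machine-learning literature calls active learning. Here is a minimal sketch of the uncertainty-sampling flavor, in the auditory-testing setting: all specifics (the sigmoid listener model, the grid prior, the 42 dB threshold) are my own illustration, not details of the lab's actual system.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Grid of candidate hearing thresholds t (in dB); flat prior in log-space.
# Listener model: P(detect a tone at level L) = sigmoid(L - t).
grid = list(range(0, 101))
log_post = [0.0] * len(grid)
levels = list(range(0, 101, 5))   # stimulus levels we may present

def posterior():
    m = max(log_post)
    w = [math.exp(lp - m) for lp in log_post]
    z = sum(w)
    return [x / z for x in w]

def predictive(level):
    # Posterior-averaged probability that the listener detects this level.
    return sum(p * sigmoid(level - t) for p, t in zip(posterior(), grid))

def next_query():
    # Uncertainty sampling: present the level whose predicted outcome is
    # closest to a coin flip -- the answer there is most informative.
    return min(levels, key=lambda L: abs(predictive(L) - 0.5))

def update(level, detected):
    for i, t in enumerate(grid):
        p = min(max(sigmoid(level - t), 1e-12), 1.0 - 1e-12)  # avoid log(0)
        log_post[i] += math.log(p if detected else 1.0 - p)

# Simulated listener with a true threshold of 42 dB.
random.seed(0)
true_threshold = 42
for _ in range(30):
    L = next_query()
    update(L, random.random() < sigmoid(L - true_threshold))

estimate = sum(p * t for p, t in zip(posterior(), grid))
```

After 30 adaptive trials the posterior mean lands near the true threshold; the same trials spent on a fixed sweep of levels would mostly be wasted far from it.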

Comment author: moshez 21 November 2012 05:47:13PM 3 points [-]

You don't need to solve the integral for the posterior analytically, you can usually Monte-Carlo your way into an approximation. That technique is powerful enough on reasonably-sized computers that I find myself doubting that this is the only hurdle to superhuman AI.
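To illustrate (a toy example of my own, not anything from the thread): a random-walk Metropolis sampler approximating the posterior mean of a coin's bias, chosen because the exact answer is known for comparison. No integral is solved anywhere.

```python
import math
import random

random.seed(1)

# Data: 7 heads in 10 flips; uniform prior on the bias p.
heads, n = 7, 10

def log_post(p):
    # Unnormalized log-posterior; -inf outside the support.
    if not 0.0 < p < 1.0:
        return float("-inf")
    return heads * math.log(p) + (n - heads) * math.log(1.0 - p)

# Random-walk Metropolis: propose a nearby p, accept with prob min(1, ratio).
samples = []
p = 0.5
for _ in range(50_000):
    proposal = p + random.gauss(0.0, 0.1)
    delta = log_post(proposal) - log_post(p)
    if delta >= 0 or random.random() < math.exp(delta):
        p = proposal
    samples.append(p)

kept = samples[5_000:]                 # discard burn-in
mc_mean = sum(kept) / len(kept)
exact_mean = (heads + 1) / (n + 2)     # Beta(8, 4) posterior mean = 2/3
```

The sample average agrees with the analytic Beta-posterior mean to a couple of decimal places, using nothing but the unnormalized density.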

Comment author: moshez 09 November 2012 04:20:36PM 10 points [-]

I took it. No SAT scores or classical IQ scores; didn't take the Myers-Briggs (because it's stupid) or the autism test (because, freakin' hell, amateur psychological diagnosis on the 'net).

In response to Logical Pinpointing
Comment author: Viliam_Bur 01 November 2012 08:50:17PM 11 points [-]

You just say: 'For every relation R that works exactly like addition, the following statement S is true about that relation.' It would look like, '∀ relations R: ((∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz))) → S)', where S says whatever you meant to say about +, using the token R.

The expression '(∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz)))' is true for addition, but also for many other relations, such as the trivial relation '∀x∀y∀z: R(x, y, z)' that holds for every triple.

Comment author: moshez 01 November 2012 10:20:40PM 3 points [-]

I'm not sure that adding the conjunct (R(x,y,z) ∧ R(x,y,w) → z=w) would have made things clearer... I thought it was obvious the hypothetical mathematician was just explaining what kind of steps you need to take to "taboo addition".
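Both points can be checked mechanically on a small finite domain (a sketch of my own; the cutoff at N just keeps the successor axiom inside the domain):

```python
# On the domain {0, ..., N-1}, check that the axioms
#   R(x, 0, x)  and  R(x, y, z) -> R(x, y+1, z+1)
# hold for addition but also for the trivial "always true" relation,
# and that the uniqueness conjunct rules the trivial relation out.
N = 8
dom = range(N)

def satisfies_axioms(R):
    return all(R(x, 0, x) for x in dom) and all(
        (not R(x, y, z)) or (y + 1 >= N or z + 1 >= N or R(x, y + 1, z + 1))
        for x in dom for y in dom for z in dom)

def is_functional(R):
    # The proposed conjunct: R(x,y,z) and R(x,y,w) imply z = w.
    return all(not (R(x, y, z) and R(x, y, w)) or z == w
               for x in dom for y in dom for z in dom for w in dom)

addition = lambda x, y, z: x + y == z
trivial  = lambda x, y, z: True

# Both relations satisfy the zero/successor axioms...
assert satisfies_axioms(addition) and satisfies_axioms(trivial)
# ...but only addition also satisfies uniqueness.
assert is_functional(addition) and not is_functional(trivial)
```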

In response to Logical Pinpointing
Comment author: Eliezer_Yudkowsky 25 October 2012 03:11:59AM 3 points [-]

Meditation:

Humans need fantasy to be human.

"Tooth fairies? Hogfathers? Little—"

Yes. As practice. You have to start out learning to believe the little lies.

"So we can believe the big ones?"

Yes. Justice. Mercy. Duty. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy.

  • Susan and Death, in Hogfather by Terry Pratchett

So far we've talked about two kinds of meaningfulness and two ways that sentences can refer; a way of comparing to physical things found by following pinned-down causal links, and logical reference by comparison to models pinned-down by axioms. Is there anything else that can be meaningfully talked about? Where would you find justice, or mercy?

Comment author: moshez 01 November 2012 04:40:28PM 6 points [-]

It so happens that the three "big lies" Death mentions are all related to morality/ethics, which is a hard question. But let me take the conversation and change it a bit:

"So we can believe the big ones?"

Yes. Anger. Happiness. Pain. That sort of thing.

"They're not the same at all!"

You think so? Then take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of happiness, one molecule of pain.

In this version, the final argument is still correct -- if I take the universe and grind it down to the finest powder, I will not be able to say "woo! that carbon atom is an atom of happiness". Since the penultimate question of this meditation was "Is there anything else that can be meaningfully talked about?", at least I can answer that question.

Clearly, we want to talk about happiness for many reasons -- even if we do not value happiness in itself (for ourselves or others), predicting what will make humans happy helps us know things about the world. Therefore, it is useful to find a way that allows us to talk about happiness. Happiness, though, is complicated, so let us put it aside for a minute to ponder something simpler: a solar system. I will simplify here: a solar system is one star and a bunch of planets orbiting it. Though solar systems affect each other through gravity or radiation, most of the effects on the relative motions inside a solar system come from inside it, and this pattern repeats itself throughout the galaxy. Much like happiness, being able to talk about solar systems is useful -- though I do not particularly value solar systems in and of themselves, it's useful to have a concept of "a solar system", which describes things with commonalities, and allows me to generalize.

If I grind the universe down, I cannot find a "solar system atom" -- grinding the universe down destroys the useful "solar system" pattern. For bounded minds, having these patterns gives good predictive strength without having to track each and every atom in the solar system.

In essence, "happiness" is no different from "solar system" -- both are crude words describing common patterns. It's just that happiness is a feature of minds (mostly human minds, though we sometimes talk about dogs or lizards being happy, and that's not surprising -- those minds run related algorithms). I cannot say where every atom is in the case of a human being happy, but some atom configurations are happy humans, and some are not.

So: at the very least, happiness and solar systems are part of the causal network of things. They describe patterns that influence other patterns.

Mercy is easier than justice and duty. Mercy is a specific configuration of atoms -- a human behaving in a specific way: even though the human feels entitled to cause another human hurt ("feeling entitled" is a set of specific human-mind configurations, regardless of whether "entitlement" actually exists), they do not do so (for specific reasons, etc. etc.). In short, mercy describes specific patterns of atoms, and is part of causal networks.

Duty and justice -- I admit that I'm not sure what my reductionist metaethics are, and so it's not obvious what they mean in the causal network.

In response to comment by moshez on Building Weirdtopia
Comment author: MixedNuts 31 October 2012 06:08:44PM 1 point [-]

I don't understand why people marry so young. The small villages make obvious psychological sense, and since people are immortal you want marriage to be permanent, otherwise you'd be drowning in exes. That could be done by having the Eternal True Love form at the first hint of a crush, or in purely arranged pairs (by O, or by people who decide they'd be a good match and then ask O to make them soulmates), but for some reason you want people to date around and fall in temporary love. (The part of me that likes the eternal monogamous bond dislikes the premarital sex.) Given that, why not have people date casually until they're 25 or so, then slowly settle down and marry in their early thirties? If nothing else, it would spare O the headache of preventing people from growing apart when they become adults.

Comment author: moshez 31 October 2012 08:23:21PM 0 points [-]

You can assume that O will intervene just enough that two people who are not right for each other will figure it out before they are 18.

In response to Building Weirdtopia
Comment author: moshez 31 October 2012 05:43:54PM 1 point [-]

I tried the exercise, and came up with an interesting weirdtopia: http://moshez.wordpress.com/2011/06/22/going-outside/

Comment author: handoflixue 30 October 2012 05:57:21PM 3 points [-]

My math class provided the simple "how to choose" heuristic that you want x to be alone. So if you have "x+1" on one side, you'll need to subtract 1 to get it by itself: x+1-1 = 5-1, so x = 4.

I can see how this wouldn't get explicit attention, since I suspect it becomes intuitive after a point, and you just don't think to ask the question. I can't see how one could get through even basic algebra without developing this intuition, though o.o

Comment author: moshez 30 October 2012 08:03:31PM 4 points [-]

Yes, clearly: a bit after I asked, I learned how to use intuition, and at some point it became rote. But the bigger point is that this is a special case -- in logic and in math there are a lot of truth-preserving transformations, and choosing a sequence of transformations is what doing math is. That interesting interface between logic-as-rigid and math-as-exploratory is a big part of the fun in math, and it led me to do enough math to eventually publish a paper. Of course, after that I went into software engineering, but I never forgot the initial sensation of "oh my god, that is awesome" the first time Moshe_1992 learned that there is no such thing as "moving the 1 from one side of the equation to the other" except as a high-level abstraction.

Comment author: moshez 30 October 2012 05:47:51PM 10 points [-]

"I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations."

I can now say that my K-12 education was, at least in this one way, better than yours. I must have been 14 at the time, and the realization that you can do that hit me like a ton of bricks, followed closely by another ton of bricks: choosing what to add is not governed by the laws of math -- you really can add anything, but not everything is equally useful.

E.g., "solve for x: x+1=5".

You can choose to add -1 to both sides, getting "x+1+(-1) = 5+(-1)", simplify both sides to get "x = 4" and yell "yay"; but you can also choose to add, say, 37, and get (after simplification) "x+38 = 42", which is still true, just not useful. My immediate question after that was "how do you know what to choose?" and, long story short, 15 years later I published a math paper... :)
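The same point in code (a sketch of my own; the pair representation is just one way to write it down):

```python
# Represent the equation "x + b = c" as the pair (b, c).  Adding any
# number k to both sides is a truth-preserving transformation -- it
# maps true equations to true equations, useful or not.
def add_both_sides(eq, k):
    b, c = eq
    return (b + k, c + k)

eq = (1, 5)                                  # x + 1 = 5
useful = add_both_sides(eq, -1)              # (0, 4): x = 4, yay
legal_but_useless = add_both_sides(eq, 37)   # (38, 42): still true

# x = 4 satisfies every form -- truth was preserved each time.
x = 4
assert all(x + b == c for b, c in (eq, useful, legal_but_useless))
```

The transformation never decides which k is useful; that choice lives outside the rules, which is exactly the "how do you know what to choose" question.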

Comment author: Ben_Jones 06 February 2009 03:43:34PM -1 points [-]

Untranslatable 2 is the thought sharing sex.

Sprite, you are, by definition, wrong.

Comment author: moshez 24 October 2012 11:23:43PM 6 points [-]

<error>"By definition" argument detected in a discussion not about math.</error>

The software was using "untranslatable" as shorthand for "the current version of the software cannot translate this term, so it is giving it a numeric designation so you will be able to see if we use it again". It probably wasn't even claiming "no future version of the software will be able to translate it", let alone speaking for a human who spent a non-trivial amount of thought on the topic (in the TWC future there's no AI, which means human thought can do some things no software can).
