Meetup : Weekly meetup, Champaign IL: Cafe Paradiso

1 aspera 26 November 2012 06:06PM

Discussion article for the meetup : Weekly meetup, Champaign IL: Cafe Paradiso

WHEN: 28 November 2012 08:00:00PM (-0600)

WHERE: 801 South Lincoln Avenue, Urbana, IL

Let's meet at 8pm. We decided last time that we'd like to start talking about Timeless decision theory. It's a big topic, but try to come to the meeting with questions or discussion points. Also, let's talk about doing something social next week.

Comment author: mantis 21 November 2012 08:54:13PM 1 point [-]

I think that's probably more practical than trying to make it continuous, considering that our nervous systems are incapable of perceiving infinitesimal changes.

Comment author: aspera 23 November 2012 05:40:16AM 0 points [-]

Yes, we are running on corrupted hardware at about 100 Hz, and I agree that defining broad categories to make first-cut decisions is necessary.

But if we were designing a morality program for a super-intelligent AI, we would want to be as mathematically consistent as possible. As shminux implies, we can construct pathological situations that exploit the particular choice of discontinuities to yield unwanted or inconsistent results.

Comment author: Unknown 09 July 2008 03:12:30AM 3 points [-]

In fact, an anti-Occam prior is impossible. As I've mentioned before, as long as you're talking about anything that has any remote resemblance to something we might call simplicity, things can decrease in simplicity indefinitely, but there is a limit to increase. In other words, you can only get so simple, but you can always get more complicated. So if you assign a one-to-one correspondence between the natural numbers and potential claims, it follows of necessity that as the natural numbers go to infinity, the complexity of the corresponding claims goes to infinity as well. And if you assign a probability to each claim, while making your probabilities sum to 1, then the probability of the more and more complex claims will go to 0 in the limit.

In other words, Occam's Razor is a logical necessity.
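The core of the argument above can be sketched numerically: any prior over a countably infinite enumeration of claims that sums to 1 must assign vanishing probability in the tail, so the increasingly complex claims get less and less mass. The geometric prior and the 50-term truncation below are illustrative choices, not anything from the comment.

```python
# Illustrative sketch: hypotheses indexed 0, 1, 2, ... with complexity
# growing in the index. Any normalized prior must have a shrinking tail;
# here we use a geometric prior truncated at 50 terms as a stand-in.

def tail_mass(prior, k):
    """Total probability assigned to hypotheses with index >= k."""
    return sum(prior[k:])

prior = [0.5 * 0.5**i for i in range(50)]

# Normalized (up to truncation error), and mass decays with complexity.
print(abs(sum(prior) - 1.0) < 1e-6)            # True
print(prior[0] > prior[10] > prior[40])        # True
print(tail_mass(prior, 30) < tail_mass(prior, 5))  # True
```

The point is not the particular decay rate: whatever normalized prior you pick, the sequence of probabilities must converge to 0, which is the sense in which high-complexity claims are forced toward low prior probability.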

Comment author: aspera 23 November 2012 05:27:11AM 1 point [-]

I think it would be possible to have an anti-Occam prior if the total complexity of the universe is bounded.

Suppose we list integers according to an unknown rule, and we favor rules with high complexity. Given the problem statement, we should take an anti-Occam prior to determine the rule given the list of integers. It doesn't diverge because the list has finite length, so the complexity is bounded.

Scaling up, the universe presumably has a finite number of possible configurations given any prior information. If we additionally had information that led us to take an Anti-Occam prior, it would not diverge.
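The finite-complexity loophole described above can be made concrete: over a finite hypothesis set, a prior that weights rules in proportion to their complexity normalizes without trouble. The complexity values below are toy numbers chosen for illustration.

```python
# Hypothetical "anti-Occam" prior over a finite set of rules: weight
# each rule in proportion to a toy complexity measure. Because the set
# is finite, the complexity is bounded and the prior normalizes.
complexities = [1, 2, 3, 4, 5]       # bounded, since the rule set is finite
Z = sum(complexities)
prior = [c / Z for c in complexities]

print(prior[-1] > prior[0])              # True: most complex rule favored
print(abs(sum(prior) - 1.0) < 1e-12)     # True: no divergence
```

With an infinite hypothesis space this construction fails, which recovers Unknown's point: the divergence only bites when complexity is unbounded.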

Comment author: aspera 16 November 2012 08:20:48PM -1 points [-]

I'm also looking for a discussion of the symmetry related to conservation of probability through Noether's theorem. A quick Google search only finds quantum mechanics discussions, which relate it to spatial invariances, etc.

If there's no symmetry, it's not a conservation law. Surely someone has derived it carefully. Does anyone know where?

Comment author: mantis 10 September 2012 05:08:40PM 0 points [-]

If dust specks have a value of 0, then what's the smallest amount of discomfort that has a nonzero value instead?

I don't know exactly where I'd make the qualitative jump from the "discomfort" scale to the "pain" scale. There are so many different kinds of unpleasant stimuli, and it's difficult to compare them. For electric shock, say, there's probably a particular curve of voltage, amperage and duration below which the shock would qualify as discomfort, with a zero value on the pain scale, and above which it becomes pain (I'll even go so far as to say that for short periods of contact, the voltage and amperage values lie between those of a violet wand and those of a stun gun). For localized heat, I think it would have to be at least enough to cause a small first-degree burn; for localized cold, enough to cause the beginnings of frostbite (i.e. a few living cells lysed by the formation of ice crystals in their cytoplasm). For heat and cold over the whole body, it would have to be enough to overcome the body's natural thermostat, initiating hypothermia or heatstroke.

It occurs to me that I've purposefully endured levels of discomfort I would probably regard as pain with a non-zero value on the torture scale if it was inflicted on me involuntarily, as a result of working out at the gym (which has an expected payoff in health and appearance, of course), and from wearing an IV for two 36-hour periods in a pharmacokinetic study for which I'd volunteered (it paid $500); I would certainly do so again, for the same inducements. Choice makes a big difference in our subjective experience of an unpleasant stimulus.

50 years of torture for one person is probably not as bad as 25 years of torture for a trillion people.

Of course not; by the scale I posited above, 50 years for one person isn't even as bad as 25 years for two people.

If we keep doing this (halving the torture length, multiplying the number of people by a trillion) then are we always going from bad to worse?

No, but the length has to get pretty tiny (probably somewhere between a millisecond and a microsecond) before we reverse the direction.

And do we ever get to the point where each individual person tortured experiences about as much discomfort as our replacement dust speck?

Yes, we do; in fact, we eventually get to a point where each person "tortured" experiences no discomfort at all, because the nervous system is not infinitely fast nor infinitely sensitive. If you're using temperature for your torture, heat transfer happens at a finite speed; no matter how hot or cold the material that touches your skin, there's a possible time of contact short enough that it wouldn't change your skin temperature enough to cause any discomfort at all. Even an electric shock could be brief enough not to register.

Comment author: aspera 16 November 2012 08:06:02PM 1 point [-]

The idea that the utility should be continuous is mathematically equivalent to the idea that an infinitesimal change on the discomfort/pain scale should give an infinitesimal change in utility. If you don't use that axiom to derive your utility function, you can have sharp jumps at arbitrary pain thresholds. That's perfectly OK - but then you have to choose where the jumps are.

Comment author: aspera 09 November 2012 08:17:19PM 0 points [-]

I think that in physics we would deal with this as a mapping problem. John's and Mary's beliefs about the planet live in different spaces, and we need to pick a basis on which to project them in order to compare them. We use language as the basis. But then when we try to map between concepts, we find that the problem is ill-posed: it doesn't have a unique solution because the maps are not all 1:1.

Comment author: aspera 09 November 2012 12:02:21AM 13 points [-]

Nice job writing the survey - fun times. I kind of want to hand it out to my non-LW friends, but I don't want to corrupt the data.

Comment author: JonathanLivengood 08 November 2012 02:41:52AM 0 points [-]

Rather than consulting Wikipedia, the SEP article on consequentialism is probably the best place to start for an overview.

Comment author: aspera 08 November 2012 10:57:35PM 0 points [-]

Thanks, I'll check it out.

Comment author: aspera 07 November 2012 10:41:19PM *  0 points [-]

Bravo, Eliezer. Anyone who says the answer to this is obvious is either WAY smarter than I am, or isn't thinking through the implications.

Suppose we want to define Utility as a function of pain/discomfort on the continuum of [dust speck, torture] and including the number of people afflicted. We can choose whatever desiderata we want (e.g. positive real valued, monotonic, commutative under addition).

But what if we choose as one desideratum, "There is no number n large enough such that Utility(n dust specks) > Utility(50 yrs torture)." What does that imply about the function? It can't be analytic in n (even if n were continuous). That rules out multiplicative functions trivially.

Would it have singularities? If so, how would we combine utility functions at singular values? Take limits? How, exactly?

Or must dust specks and torture live in different spaces, and is there no basis that can be used to map one to the other?

The bottom line: is it possible to consistently define utility using the above desideratum? It seems like it must be so, since the answer is obvious. It seems like it must not be so, because of the implications for the utility function as the arguments change.

Edit: After discussing with my local meetup, this is somewhat resolved. The above desiderata require the utility to be bounded in the number of people, n. For example, it could be a saturating exponential function. This is self-consistent, but inconsistent with the notion that because experiences are independent, utilities should add.

Interestingly, it puts strict mathematical rules on how utility can scale with n.
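The saturating-exponential resolution mentioned in the edit can be written out explicitly. All the constants below (U_max, n0, U_torture) are hypothetical values chosen for illustration; the only structural requirement is that the speck utility is bounded below the torture utility.

```python
import math

# Sketch of a bounded utility for n dust specks that saturates
# exponentially: U(n) = U_max * (1 - exp(-n / n0)). Since U(n) < U_max
# for all n, no number of specks ever exceeds the torture disutility,
# provided we choose U_torture > U_max.
def speck_utility(n, U_max=100.0, n0=1e6):
    return U_max * (1 - math.exp(-n / n0))

U_torture = 1000.0  # chosen larger than the bound U_max

# No n, however large, pushes speck disutility past the torture bound.
print(speck_utility(3**27) < U_torture)     # True
print(speck_utility(10**100) < U_torture)   # True

# The trade-off noted above: this utility is not additive across people.
n = 10**6
print(speck_utility(2 * n) < 2 * speck_utility(n))  # True: sub-additive
```

The last line shows the inconsistency the edit points to: boundedness forces sub-additivity, so the utility of two independent groups of sufferers is less than the sum of their individual utilities.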

Comment author: aspera 07 November 2012 09:48:06PM 0 points [-]

Also, I suggest you read Torture vs Dust Specks. I found it to be very troubling, and would love to talk about it at the meeting.
