Comment author: Usul 05 January 2016 08:07:09AM 0 points [-]

I was bringing the example into the presumed finite universe in which we live, where Maximum Utility = The Entire Universe. If we are discussing a finite-quantity problem, then infinite quantity is ipso facto ruled out.

Comment author: Nebu 24 January 2016 10:44:33PM *  0 points [-]

I guess I'm asking, "Why would a finite universe necessarily dictate a finite utility score?"

In other words, why can't my utility function be:

  • 0 if you give me the entire universe minus all the ice cream.
  • 1 if you give me the entire universe minus all the chocolate ice cream.
  • infinity if I get chocolate ice cream, regardless of how much chocolate ice cream I receive, and regardless of whether the rest of the universe is included with it.
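
For concreteness, here's a minimal sketch of that function in Python, with float("inf") standing in for "infinity" and a three-element set standing in for the universe (the names are mine, purely illustrative):

    UNIVERSE = {"chocolate ice cream", "vanilla ice cream", "everything else"}

    def utility(outcome):
        """The utility function described in the list above."""
        if "chocolate ice cream" in outcome:
            return float("inf")  # any amount of chocolate, with or without the rest
        if outcome == UNIVERSE - {"chocolate ice cream"}:
            return 1.0
        return 0.0  # e.g. the entire universe minus all the ice cream

    print(utility({"chocolate ice cream"}))             # inf
    print(utility(UNIVERSE - {"chocolate ice cream"}))  # 1.0

Nothing about the universe being finite stops us from writing this function down.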
Comment author: XiXiDu 07 October 2010 09:10:05AM 11 points [-]

I've already got that book; I have to read it soon :-)

Here is more from Greg Egan:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect that something broadly similar applies to minds and the class of things they can understand: other beings might think faster than us, or have easy access to a greater store of facts, but underlying both mental processes will be the same basic set of general-purpose tools. So if we ever did encounter those billion-year-old aliens, I’m sure they’d have plenty to tell us that we didn’t yet know — but given enough patience, and a very large notebook, I believe we’d still be able to come to grips with whatever they had to say.

What's really cool about all this is that I just have to wait and see.

Comment author: Nebu 24 January 2016 10:28:00PM 1 point [-]

I suspect that if we're willing to say human minds are Turing Complete[1], then we should also be willing to say that an ant's mind is Turing Complete. So when imagining a human with a lot of patience and a very large notebook interacting with a billion year old alien, consider an ant with a lot of patience and a very large surface area to record ant-pheromones upon, interacting with a human. Consider how likely it is that the human would be interested in telling the ant things it didn't yet know. Consider what topics the human would focus on telling the ant, and whether it might decide to hold back on some topics because it figures the ant isn't ready to understand those concepts yet. Consider whether it's more important for the patience to lie within the ant or within the human.

1: I generally consider human minds to NOT be Turing Complete, because Turing Machines have infinite memory (via their infinite tape), whereas human minds have finite memory (being composed of a finite amount of matter). I guess Egan is working around this via the "very large notebook", which is why I'll let this particular nitpick slide for now.

Comment author: Gvaerg 04 February 2014 08:04:26AM 0 points [-]

Marker is the closest to the state of the art. Hodges is a bit verbose and for beginners. Poizat is a little idiosyncratic (just look at the Introduction!).

I am also interested in the basis of MIRI's recommendation. Perhaps they are not too connected to actual mathematicians studying it, as model theory is pretty much a fringe topic.

Comment author: Nebu 22 January 2016 03:58:04AM 0 points [-]

Why not link to the books or give their ISBNs or something?

There are at least two books on model theory by Hodges: ISBN:9780521587136 and ISBN:9780511551574.

Comment author: Wei_Dai 07 February 2013 11:09:07PM 0 points [-]

I still don't see how it's relevant, since I don't see a reason why we would want to create an AI with a utility function like that. The problem goes away if we remove the "and then turning yourself off" part, right? Why would we give the AI a utility function that assigns 0 utility to an outcome where we get everything we want but it never turns itself off?

Comment author: Nebu 05 January 2016 08:50:07AM 0 points [-]

> Why would we give the AI a utility function that assigns 0 utility to an outcome where we get everything we want but it never turns itself off?

The designer of that AI might have (naively?) thought this was a clever way of solving the friendliness problem: "Do the thing I want, and then make sure to never do anything again. Surely that won't lead to the whole universe being tiled with paperclips," etc.
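
A minimal sketch of what such a (hypothetical) designer might have had in mind:

    def utility(task_completed, ai_shut_down):
        # Hypothetical "naive friendliness" design: reward only outcomes in
        # which the task is done AND the AI has turned itself off afterwards.
        # "Task done but still running" scores zero, same as total failure.
        return 1 if (task_completed and ai_shut_down) else 0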

Comment author: Usul 05 January 2016 05:28:57AM *  3 points [-]

Let's taboo "perfect", and "utility" as well. As I see it, you are looking for an agent who is capable of choosing The Highest Number. This number does not exist, so it cannot be chosen, and so this agent cannot exist, because numbers are infinite. An infinity paradox is all I see.

Alternately, letting "utility" back in: in a universe of finite time, matter, and energy, there does exist a maximum finite utility, which is the sum total of the time, matter, and energy in the universe. There will be a number which corresponds to this. Your opponent can choose a number higher than this, but he will find the utility he seeks does not exist.

Comment author: Nebu 05 January 2016 07:56:46AM 1 point [-]

> Alternately, letting "utility" back in, in a universe of finite time, matter, and energy, there does exist a maximum finite utility which is the sum total of the time, matter, and energy in the universe.

Why can't my utility function be:

  • 0 if I don't get ice cream
  • 1 if I get vanilla ice cream
  • infinity if I get chocolate ice cream

?

I.e. why should we forbid a utility function that returns infinity for certain scenarios, except insofar as it may lead to the types of problems that the OP is worrying about?
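
To illustrate the sort of problem I mean, a quick Python sketch: once a utility function can return infinity, expected utility can no longer rank any two gambles that both carry some chance of chocolate:

    import math

    def expected_utility(p_chocolate):
        # Utility is infinite for chocolate and 1.0 (vanilla) otherwise.
        return p_chocolate * math.inf + (1 - p_chocolate) * 1.0

    print(expected_utility(0.99))   # inf
    print(expected_utility(1e-12))  # inf: a near-hopeless gamble looks just as good
    print(expected_utility(0.99) > expected_utility(1e-12))  # False: no way to rank them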

Comment author: handoflixue 30 April 2012 08:13:44PM 3 points [-]

Lesson learned: If you want useful feedback, avoid making it a bet/competition.

Comment author: Nebu 20 December 2015 06:55:09AM 0 points [-]

But what about prediction markets?

In response to That Alien Message
Comment author: Thomas_Ryan 22 May 2008 09:50:30AM 12 points [-]

Okay, I'm a few days fresh from reading your Bayesian Reasoning explanation. So I'm new.

Is the point that the Earth people are collectively the AI?

Comment author: Nebu 18 December 2015 06:23:45AM 0 points [-]

Yes, this is a parable about AI safety research, with the humans in the story acting as the AI, and the aliens acting as us.

Comment author: Lumifer 16 December 2015 03:55:35PM 3 points [-]

Hm, OK. So you are saying that the degree of rationalism is an unobservable (hidden) variable and what we can observe (winning or losing) is contaminated by noise (luck). That's a fair way of framing it.

The interesting question then becomes what kind of accuracy you can achieve in the real world, given that noise levels are high, the information available to you is limited, and your perception is imperfect (e.g. it's not uncommon to interpret non-obvious high skill as luck).

Comment author: Nebu 18 December 2015 06:10:51AM 1 point [-]

Right, I suspect just having heard about someone's accomplishments would be an extremely noisy indicator. You'd want to know what they were thinking, for example by reading their blog posts.

Eliezer seems pretty rational, given his writings. But if he repeatedly lost in situations where other people tend to win, I'd update accordingly.

Comment author: Lumifer 14 December 2015 03:42:38PM 2 points [-]

Even ignoring the issue that "rationalist" is not a binary variable, I don't know how, in practice, you will be able to tell whether someone is a rationalist or not. Your definition depends on counterfactuals, and without them you can't disentangle rationalism and luck.

Comment author: Nebu 16 December 2015 08:19:37AM 0 points [-]

I assume that you accept the claim that it is possible to define what a fair coin is, and thus what an unfair coin is.

If we observe some coin, at first, it may be difficult to tell if it's a fair coin or not. Perhaps the coin comes from a very trustworthy friend who assures you that it's fair. Maybe it's specifically being sold in a novelty store and labelled as an "unfair coin" and you've made many purchases from this store in the past and have never been disappointed. In other words, you have some "prior" probability belief that the coin is fair (or not fair).

As you watch the coin being flipped, you can keep track of the outcomes and adjust your belief. You can ask yourself, "Given the outcomes I've seen, is it more likely that the coin is fair or unfair?" and update accordingly.
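
As a toy sketch of that updating process (the two-hypothesis model, the 0.8 bias for the "unfair" hypothesis, and the priors below are all made-up simplifications):

    def p_fair_after(prior_fair, flips, p_heads_if_biased=0.8):
        """Posterior probability that the coin is fair, given flips like 'HHTH'."""
        p = prior_fair
        for flip in flips:
            like_fair = 0.5  # a fair coin gives H or T with equal probability
            like_biased = p_heads_if_biased if flip == "H" else 1 - p_heads_if_biased
            # Bayes' rule: P(fair | flip) is proportional to P(flip | fair) * P(fair)
            p = like_fair * p / (like_fair * p + like_biased * (1 - p))
        return p

    print(p_fair_after(0.95, "HHHHHHHH"))  # trustworthy friend's coin: heads erode the prior
    print(p_fair_after(0.05, "HTTHHTTH"))  # novelty-store coin: balanced flips raise P(fair)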

I think the same applies to rationalists here. I meet someone new. Eliezer vouches for her as being very rational. I observe her sometimes winning, sometimes not winning. I expend mental effort and try to judge how easy/difficult her situation was and how much effort/skill/rationality/luck/whatever it would have taken her to win in that situation. I try to analyze how it came about that she won when she won, or lost when she lost. I try to dismiss evidence where luck was a big factor. She bought a lottery ticket, and she won. Should I update towards her being a rationalist or not? She switched doors in Monty Hall, but she ended up with a goat. Should I update towards her being a rationalist or not? Etc.

Comment author: Lumifer 13 December 2015 11:18:01PM 1 point [-]

I'm using "should" to help define what the word "Rationalist" means.

There is a bit of a problem here in that the list of the greatest rationalists ever will be headed by people like Genghis Khan and Prophet Muhammad.

Comment author: Nebu 14 December 2015 05:41:16AM *  0 points [-]

People who win are not necessarily rationalists. A person who is a rationalist is more likely to win than a person who is not.

Consider someone who just happens to win the lottery vs someone who figures out what actions have the highest expected net profit.
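
With made-up numbers, that comparison looks something like:

    # Toy numbers, purely illustrative.
    ticket_cost = 2.00
    jackpot = 1_000_000
    p_win = 1e-7

    ev_ticket = p_win * jackpot - ticket_cost
    print(f"Expected net profit per ticket: {ev_ticket:+.2f}")  # -1.90

    # A lucky winner's actual outcome doesn't change the fact that buying
    # the ticket had negative expected net profit. The rationalist is the
    # one who compares expectations before acting, win or lose.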

Edit: That said, be careful not to succumb to the argument from consequences (http://rationalwiki.org/wiki/Argument_from_consequences): maybe Genghis Khan really was one of the greatest rationalists ever. I've never met the guy nor read any of his writings, so I wouldn't know.
