
Comment author: ChristianKl 29 November 2015 03:59:14PM *  1 point [-]

I ran into some research in which the rate of information transmission of various natural languages was compared.

I'm interested in that research. Can you link it?

Comment author: redding 29 November 2015 04:46:46PM 4 points [-]

Not sure if this is what KevinGrant was referring to, but this article discusses the same phenomenon:

http://rosettaproject.org/blog/02012/mar/1/language-speed-vs-density/

Comment author: Houshalter 24 September 2015 05:56:41AM *  -1 points [-]

I'm not sure where this comes from, when the VNM theorem gets so many mentions on LW.

I understand the VNM theorem. I'm objecting to it.

A utility function is, by definition, that which the corresponding rational agent maximizes the expectation of

If you want to argue "by definition", then yes, according to your definition utility functions can't be used for anything other than expected utility maximization. I'm saying that's silly.

simply an encoding of the actions which a rational agent would take in hypothetical scenarios

Not all rational agents, as my post demonstrates. An agent following median maximization would not be describable by any utility function whose expectation it maximizes. I showed how to generalize this to describe more kinds of rational agents. Regular expected utility becomes a special case of this system. I think generalizing existing ideas and mathematics is a desirable thing sometimes.

It is not "optimal as the number of bets you take approaches infinity"

Yes, it is. If you assign some subjective "value" to different outcomes and different things, then maximizing expected u̶t̶i̶l̶i̶t̶y̶ value will maximize that value as the number of decisions approaches infinity. For every bet I lose at certain odds, I will gain more from others some predictable percent of the time. On average it cancels out.

This might not be the standard way of explaining expected utility, but it's very simple and intuitive, and shows exactly where the problem is. It's certainly sufficient for the explanation in my post.
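To make the contrast concrete, here is a minimal sketch, entirely my own illustration rather than anything from the post, of how an expected-value maximizer and a median maximizer can pick different options; the lotteries and payoffs are made up for the example:

```python
# A lottery is a list of (probability, outcome) pairs summing to 1.

def expected_value(lottery):
    return sum(p * x for p, x in lottery)

def median_value(lottery):
    # Sort by outcome and return the first outcome at which the CDF reaches 0.5.
    cumulative = 0.0
    for p, x in sorted(lottery, key=lambda px: px[1]):
        cumulative += p
        if cumulative >= 0.5:
            return x

# Hypothetical example: a sure $1 versus a 10% shot at $1000.
safe   = [(1.0, 1)]
gamble = [(0.9, 0), (0.1, 1000)]

print(expected_value(safe), expected_value(gamble))  # 1.0 vs 100.0 -> EV maximizer picks the gamble
print(median_value(safe), median_value(gamble))      # 1   vs 0     -> median maximizer picks the sure $1
```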

Humans do not have utility functions. We do not exhibit the level of counterfactual self-consistency that is required by a utility function.

That's quite irrelevant. Sure, humans are irrational and make errors and inconsistencies in counterfactual situations. We should strive to be more consistent, though. We should strive to figure out the utility function that best represents what we want. And if we program an AI, we certainly want it to behave consistently.

Yes, it is common, especially on LW and in discussions of utilitarianism, to use the term "utility" loosely, but don't conflate that with utility functions by creating a chimera with properties from each. If the "utility" that you want to talk about is vaguely-defined (e.g., if it depends on some account of subjective preferences, rather than on definite actions under counterfactual scenarios), then it probably lacks all of the useful mathematical properties of utility functions, and its expectation is no longer meaningful.

Again, back to arguing by definition. I don't care what the definition of "utility" is. If it would please you to use a different word, then we can do so. Maybe "value function" or something. I'm trying to come up with a system that will tell us what decisions we should make, or program an AI to make. One that fits our behavior and preferences the best. One that is consistent and converges to some answer given a reasonable prior.

You haven't made any arguments against my idea or my criticisms of expected utility. It's just pedantry about the definition of a word, when its meaning in this context is pretty clear.

Comment author: redding 24 September 2015 04:29:34PM 0 points [-]

You say you are rejecting Von Neumann utility theory. Which axiom are you rejecting?

https://en.wikipedia.org/wiki/Von_Neumann–Morgenstern_utility_theorem#The_axioms

Comment author: redding 14 September 2015 07:52:17PM 7 points [-]

I think this is pretty cool and interesting, but I feel compelled to point out that all is not as it seems:

It's worth noting, though, that only the evaluation function is a neural network. The search, while no longer iteratively deepening, is still recursive. Also, the evaluation function is not a pure neural network; it includes a static exchange evaluation.

It's also worth noting that doubling the amount of computing time usually increases a chess engine's rating by about 60 Elo points. International masters usually have a rating below 2500. Though this is sketchy, the top chess engines are rated at around 3300. Thus, you could make a top-notch engine approximately 10,000 times slower and it would still play at roughly international-master level.
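For what it's worth, here's the back-of-the-envelope arithmetic behind that 10,000 figure, as a quick sketch using the rough numbers above (they are estimates, not measurements):

```python
# Rough figures quoted above: ~60 Elo per doubling of compute,
# ~3300 Elo for a top engine, ~2500 Elo for an international master.
elo_per_doubling = 60
engine_rating = 3300
im_rating = 2500

doublings = (engine_rating - im_rating) / elo_per_doubling  # ~13.3 doublings of compute to give up
slowdown = 2 ** doublings                                    # ~10,000x
print(round(doublings, 1), round(slowdown))
```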

Now, that 3300 figure is probably fairly inaccurate. Also, it's quite possible that if the developer tweaked their recursive search algorithm, they could improve it. Thus the 10,000 figure I arrived at above is probably fairly inaccurate as well. Regardless, it is not clear to me that the neural network itself is proving terribly useful.

Comment author: redding 14 September 2015 12:49:20AM 0 points [-]

Just to clarify, I feel that what you're basically saying is that often what is called the base-rate fallacy is actually the result of P(E|!H) being too high.

I believe this is why Bayesians usually talk not in terms of P(H|E) but instead in terms of Bayes factors.

Basically, to determine how strongly ufo-sightings imply ufos, don't look at P(ufos | ufo-sightings) on its own. Instead, look at the likelihood ratio P(ufo-sightings | ufos) / P(ufo-sightings | no-ufos).

This ratio is the Bayes factor; multiplying your prior odds by it gives your posterior odds P(ufos | ufo-sightings) / P(no-ufos | ufo-sightings).
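A toy calculation, with purely made-up likelihoods and prior, may help keep the two ratios apart:

```python
# Hypothetical numbers, chosen only to illustrate the bookkeeping.
p_sightings_given_ufos = 0.75     # P(E | H)
p_sightings_given_no_ufos = 0.25  # P(E | !H)
prior_odds = 1e-6                 # P(H) / P(!H)

bayes_factor = p_sightings_given_ufos / p_sightings_given_no_ufos  # likelihood ratio
posterior_odds = prior_odds * bayes_factor                         # P(H|E) / P(!H|E)

print(bayes_factor, posterior_odds)  # 3.0, 3e-06: the evidence favors H, but the prior still dominates
```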

Comment author: redding 10 September 2015 09:00:09PM 2 points [-]

I'm currently in debate, and this is one of the (minor) things that annoy me about it. The reason I can still enjoy debate (as a competitive endeavor) is that I treat it more like a game than an actual pursuit of truth.

I am curious, though, whether you think this actively harms people's ability to reason or whether it just provides more numerous examples of how most people reason - i.e., is this primarily a sampling problem?

Comment author: redding 03 September 2015 01:03:18PM 2 points [-]

Could we ever get evidence of a "read-only" soul? I'm imagining something that translates biochemical reactions associated with emotions into "actual" emotions. Don't get me wrong, I still consider myself an atheist, but it seems to me that how strongly one believes in a soul that is only affected by physical reality is based purely on one's prior probability.

Comment author: redding 31 August 2015 02:03:40PM 2 points [-]

Thanks for taking the time to contribute!

I'm particularly interested in "Goals interrogation + Goal levels".

Out of curiosity, could you go a little more in-depth regarding what "How to human" would entail? Is it about social functioning? first aid? psychology?

I'd also be interested in "Memory and Notepads", as I don't really take notes outside of classes.

With "List of Effective Behaviors", would that be behaviors that have scientific evidence for achieving certain outcomes ( happiness, longevity, money, etc.), or would that primarily be anecdotal?

That last one "Strike to the heart of question" reminds me very much of the "void" from the 12 virtues, which always struck me as very important, but frustratingly vaguely described. I think you really hit the nail on the head with "am I giving the best answer to the best question I can give". I'm not really sure where you could go with this, but I'm eager to see.

Comment author: redding 24 August 2015 04:37:41PM 1 point [-]

Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind that is useful enough to allow an AI to usefully modify itself? For instance, if you gave me complete access to my source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.

Comment author: pragmatist 30 July 2015 09:56:54AM *  2 points [-]

From a decision-theory perspective, I should essentially just ignore the possibility that I'm in the first 100 rooms - right?

Well, what do you mean by "essentially ignore"? If you're asking whether I should assign only negligible credence to the possibility, then yeah, I'd agree. If you're asking whether I should assign literally zero credence to the possibility, so that there are no possible odds -- no matter how ridiculously skewed -- at which I would accept a bet that I am in one of those rooms... well, now I'm no longer sure. I don't exactly know how to go about setting my credences in the world you describe, but I'm pretty sure assigning 0 probability to every single room isn't it.

Consider this: Let's say you're born in this universe. A short while after you're born, you discover a note in your room saying, "This is room number 37". Do you believe you should update your belief set to favor the hypothesis that you're in room 37 over any other number? If you do, it implies that your prior for the belief that you're in one of the first 100 rooms could not have been 0.

(But, on the other hand, if you think you should update in favor of being in room x when you encounter a note saying "You are in room x", no matter what the value of x, then you aren't probabilistically coherent. So ultimately, I don't think intuition-mongering is very helpful in these exotic scenarios. Consider my room 37 example as an attempt to deconstruct your initial intuition, rather than as an attempt to replace it with some other intuition.)
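To spell out the updating step the room-37 example relies on, here is the standard Bayes-rule identity (nothing specific to this thread):

```latex
P(\text{room } 37 \mid \text{note})
  = \frac{P(\text{note} \mid \text{room } 37)\,P(\text{room } 37)}{P(\text{note})}
```

If the prior P(room 37) were exactly 0, the right-hand side would be 0 no matter what note you found, so any genuine update requires a nonzero prior.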

Theoretically there are as many multiples of 10 as not (both being equinumerous to the integers), but if we define rationality as the "art of winning", then shouldn't I guess "not in a multiple of 10"?

Perhaps, but reproducing this result doesn't require that we consider every room equally likely. For instance, a distribution that attaches a probability of 2^(-n) to being in room n will also tell you to guess that you're not in a multiple of 10. And it has the added advantage of being a possible distribution. It has the apparent disadvantage of arbitrarily privileging smaller numbered rooms, but in the kind of situation you describe, some such arbitrary privileging is unavoidable if you want your beliefs to respect the Kolmogorov axioms.
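A quick numerical check of that claim, as a sketch of my own (truncating the infinite sum at an arbitrary cutoff, which gives far more precision than needed):

```python
# Prior that puts probability 2^(-n) on being in room n, for n = 1, 2, 3, ...
cutoff = 200  # the tail beyond this point is astronomically small

total = sum(2.0 ** -n for n in range(1, cutoff))                  # ~1.0, so it's a genuine distribution
p_multiple_of_10 = sum(2.0 ** -n for n in range(10, cutoff, 10))  # = 1/1023 up to rounding
print(total, p_multiple_of_10, 1 - p_multiple_of_10)              # "not a multiple of 10" gets ~0.999
```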

Comment author: redding 30 July 2015 11:53:31AM 0 points [-]

What I mean by "essentially ignore" is that if you are (for instance) offered the following bet you would probably accept: "If you are in the first 100 rooms, I kill you. Otherwise, I give you a penny."

I see your point regarding the fact that updating using Bayes' theorem implies your prior wasn't 0 to begin with.

I guess my question is now whether there are any extended versions of probability theory. For instance, Kolmogorov probability reverts to Aristotelian logic for the extremes P=1 and P=0. Is there a system of thought that reverts to probability theory for finite worlds but is able to handle infinite worlds without privileging certain (small) numbers?

I will admit that I'm not even sure that guessing "not a multiple of 10" follows the art of winning, as you can't sample from an infinite set of rooms in traditional probability/statistics either, without some kind of sampling function that biases certain numbers. At best we can say that for any finite number of rooms N (and in the limit as N goes to infinity), the best strategy is to guess "not a multiple of 10". By induction we can prove that guessing "not a multiple of 10" is the best strategy for any finite number of rooms, but alas, infinity remains beyond this.

Comment author: MrMind 29 July 2015 07:53:13AM 0 points [-]

This is an old problem in probability theory, and there are different solutions.

Probability theory is developed first for finite models, so it's natural that its extension to infinite models can be done in a few different ways.

Comment author: redding 29 July 2015 10:29:16PM 0 points [-]

Could you point me to some solutions?
