And yet again I am reminded why I do not frequent this supposedly rational forum more. Rationality swishes by over most people's heads here, except for a few really smart ones. You people make it too complicated. You write too much. Lots of these supposedly deep intellectual problems have quite simple answers, such as this Ellsberg paradox. You just have to look and think a little outside their boxes to solve them, or see that they are unsolvable, or that they are the wrong questions.
I will yet again go away, to solve more useful and interesting problems on my own.
Oh, and Orthonormal, here is my correct final answer to you: You do not understand me, and this is your fault.
Bayesian reasoning is for maximizing the probability of being right. Kelly's criterion is for maximizing aggregated value.
And yet again, the distributions of the probabilities are different, because they have different variance, and differences in variance give different aggregated values, which is what people tend to try to optimize.
Aggregating value in this case means getting more pies, and fewer boots to the head. Pies are of no value to you when you are dead from boots to the head, and this is the root cause of preferring lower variance.
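To make the variance point concrete (standard Kelly-style reasoning, my own spelling-out rather than anything from the thread): betting a fraction $f$ of your wealth on a return $R$ with mean $\mu$ and small variance $\sigma^2$, the expected log-growth is approximately

$$\mathbb{E}[\log(1+fR)] \approx f\mu - \frac{f^2\sigma^2}{2},$$

so of two bets with the same mean, the one with the higher variance has the lower aggregated (log) value.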
This isn't much of a discussion when you just ignore and deny my argument instead of trying to understand it.
No, because expected value is not the same thing as variance.
Betting on red wins 1/3 of the time, exactly.
Betting on green wins 1/3 ± x of the time, and that spread is variance, which is bad.
Preferring red is rational, because it is a known amount of risk, while each of the other two colours has unknown risks.
This is according to Kelly's criterion and Darwinian evolution. Negative outcomes outweigh positive ones because negative ones lead to sickness and death through starvation, poverty, and kicks in the head.
This is only valid in the beginning, because when the experiment is repeated, the probabilities of blue and green become clearer.
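To illustrate the variance point, here is a toy simulation (my own assumptions: even-money payoffs, 1000 bets per urn, and a uniform prior on the green fraction; only the known 1/3 comes from the original problem):

```python
# Toy simulation of the red vs green bets: known 1/3 vs ambiguous 1/3 +/- x.
import random

def run(p_fn, n_bets=1000, n_runs=2000):
    """Total winnings per urn: win +1, lose -1, with win chance p_fn()."""
    totals = []
    for _ in range(n_runs):
        p = p_fn()  # winning probability, drawn once per urn
        wins = sum(random.random() < p for _ in range(n_bets))
        totals.append(wins - (n_bets - wins))
    mean = sum(totals) / n_runs
    var = sum((t - mean) ** 2 for t in totals) / n_runs
    return mean, var

red = run(lambda: 1 / 3)                       # known probability
green = run(lambda: random.uniform(0, 2 / 3))  # ambiguous probability
print("red:   mean %8.1f  var %10.0f" % red)
print("green: mean %8.1f  var %10.0f" % green)
# Both means are about -333, but the green variance is far larger,
# because the uncertainty about the urn never averages out within a run.
```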
There is often no difference at all between flirting and friendliness. People vary very much in their ways. And yet we are supposed to easily tell the difference, with threat of imprisonment for failing.
The main differences I have seen and experienced are that flirting typically involves more eye contact, and that a lot of people flirt while denying they do it, refusing to tell what they would do if they really flirted, and disparaging others for not knowing the difference.
My experience is also that ordinary people are much more direct and clear about the difference between flirting and friendship, while academic people muddle it.
yet we are supposed to easily tell the difference, with threat of imprisonment for failing.
It can be hard to tell the difference, and it can be easy to mess up when trying to flirt back, but it takes rather more than simply not being able to tell the difference between flirtation and friendliness to get imprisoned. There have to be actual unwelcome steps taken that cross significant lines.
The way the mating dance typically goes is as a series of small escalations. One of the purposes this serves is to let parties make advances without as much risk of everyo...
and that a lot of people flirt while denying they do it
Or without even realising. Several years ago an acquaintance on whom I was developing a crush told me she was aware of this; this puzzled me since I thought I hadn't yet initiated anything like flirting, so I asked how she knew. Then she took my hand and replicated the way in which, a few days before, I had passed her some small object (probably a pen). I didn't realise I was doing it at the time, but in that casual gesture I was prolonging the physical contact a lot more than necessary, and once put on the receiving side it was bloody obvious what was going on.
Most places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having a M.Sc. in physics and lots of programming experience made it possible for me to have more different kinds of engineering jobs, and thus more varied experience.
My conclusion is that the anthropic principle holds for me in the workplace, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical.
My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.
I am, and I am planning to leave it to get a higher, more average pay. From my viewpoint, it is terribly overrated and undervalued.
That was a damn good article!
It was short, to the point, and based on real data, and useful as well. So unlike the polite verbiage of karma whores. Even William of Ockham would have been proud of you.
Kim0+
I wondered how humans are grouped, so I got some genes from around the world, did an eigenvalue analysis, and this is what I found:
http://kim.oyhus.no/EigenGenes.html
As you can see, humans are indeed clustered into subspecies.
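For anyone wanting to reproduce the idea, a minimal sketch of that kind of eigenvalue analysis (toy random genotypes standing in for the real data on the linked page, so no clusters will appear here):

```python
# Minimal PCA-style eigenvalue analysis of a genotype matrix.
# Toy data: rows are individuals, columns are markers coded 0/1/2.
import numpy as np

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(100, 500)).astype(float)

centered = genotypes - genotypes.mean(axis=0)  # center each marker
cov = np.cov(centered, rowvar=False)           # marker covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order

top2 = eigvecs[:, -2:]                         # two largest components
projection = centered @ top2                   # coordinates per individual
print(eigvals[-2:])    # leading eigenvalues
print(projection[:5])  # clusters would show up as groups of points here
```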
Many-Worlds explained, with pretty pictures.
http://kim.oyhus.no/QM_explaining_many-worlds.html
The story about how I deduced the Many-Worlds interpretation, with pictures instead of formulas.
Enjoy!
Yes. Quadratic regression is often better. The problem is that the number of coefficients to adjust in the model grows with the square of the number of inputs, which goes against Ockham's razor. This is precisely the problem I am working on these days, though in the context of the oil industry.
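A quick illustration of that coefficient blow-up (generic counting, not the oil-industry models themselves): a full quadratic model in d inputs needs 1 + d + d(d+1)/2 coefficients.

```python
# Count coefficients in a full quadratic regression model with d inputs:
# one intercept, d linear terms, and d*(d+1)/2 quadratic and cross terms.
def quadratic_coefficients(d):
    return 1 + d + d * (d + 1) // 2

for d in (2, 10, 100):
    print(d, quadratic_coefficients(d))
# 2 6
# 10 66
# 100 5151  -- roughly d**2 / 2, which is the Ockham's-razor problem.
```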
Thank you for a very nice article.
It frustrates me to read this comment. There are some important insights in there that are being sullied by such a low-status presentation. The comment is needlessly confrontational, in need of proofreading, and uses hyperbole where it does not help. It also misses the details of the situation, suggesting an "all I have is a hammer" understanding.
The social dynamics mentioned in the parent do occur, and there is potential for detrimental psychological consequences for both parties when the status game becomes so unbalanced. Th...
I guess you who down-voted me felt quite rational when doing so.
And this is precisely the reason I seldom post here, and only read a few posters that I know are rational from their own work on the net, not from what they write here:
There are too many fake rationalists here. The absence of any real arguments either way about my article above is evidence of this.
My Othello/Reversi example above was easy to understand, and concerns a very central problem in AI systems, so it should be of interest to real rationalists interested in AI, but there is only negative reaction...
You got voted down because you were rational. You went over some people's heads.
These are popularity points, not rationality points.
That is something we worry about from time to time, but in this case I think the downvotes are justified. Tim Tyler has been repeating a particular form of techno-optimism for quite a while, which is fine; it's good to have contrarians around.
However, in the current thread, I don't think he's taking the critique seriously enough. It's been pointed out that he's essentially searching for reasons that even a Paperclipper would preserve everything of value to us, rather than just putting himself in Clippy's place and really asking for the most efficient way...
I have an Othello/Reversi playing program.
I tried making it better by applying probabilistic statistics to the game tree, quite like anthropic reasoning. It then became quite bad at playing.
Ordinary minimax with alpha-beta pruning did very well.
Game algorithms that ignore the density of states in the game tree, and only focus on minimaxing, do much better. This is a close analogy to the experience trees of Eliezer, and therefore a hint that anthropic reasoning here has some kind of error.
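For reference, a minimal sketch of the minimax-with-alpha-beta approach that worked (a generic game-tree version, not my actual Othello engine; `children` and `evaluate` are assumed helpers):

```python
# Generic minimax with alpha-beta pruning. Assumptions: children(state)
# yields successor states (empty when the game is over), and
# evaluate(state) scores a position from the maximizer's viewpoint.
def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = list(children(state))
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizer will never allow this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximizer already has a better line
        return value

# Called as: alphabeta(start, depth, float("-inf"), float("inf"),
#                      True, children, evaluate)
```

Note that it propagates only the extreme value at each node and ignores how many leaves share that value, i.e. exactly the density of states mentioned above.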
Kim0
What exactly makes it difficult to use Russian? I know Russian, so I will understand the explanation.
I find my native Norwegian better to express concepts in than English. If I program something especially difficult, or do some difficult math, physics, or logic, I also find Norwegian better.
However, if I do some easier task, where I have studied it in English, I find it easy to write in English, due to a "cut and paste" effect. I just remember stuff, combine it, and write it down.
Interesting, but too verbose.
The author is clearly not aware of the value of the K.I.S.S. principle, or Ockham's razor, in this context.
You are wrong. Here are some links showing that Go is not perfectly clear:
Giving it up is rational thinking, because there is no "it" there when the label is too broad.
In Bayesian inference, it is equivalent to P(A | B ∨ C ∨ D ∨ ...), which is somewhat like underfitting. The space of possibilities becomes too large for it to be possible to find a good move. In games, it is precisely the unclear parts of the game space that are interesting to the losing party, because better moves are most likely to be found there. But when it is not even possible to analyze those parts, then true optimal play regresses to quarreling...
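To spell out the underfitting analogy (elementary probability, my own expansion, assuming the labels B, C, D, ... are mutually exclusive): the conditional on a disjunction is a weighted average,

$$P(A \mid B \lor C \lor \cdots) = \frac{P(A\mid B)\,P(B) + P(A\mid C)\,P(C) + \cdots}{P(B) + P(C) + \cdots},$$

so the broader the disjunction, the more the distinct conditionals get washed out.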
I agree. We seem to have the same goal, so my first advice stands, not my second.
I am currently trying to develop a language that is both simple and expressive, and making some progress. The overall design is finished, and I am now down to what instructions it should have. It is a general bi-graph, but with a sequential program structure, and no separation of program and data.
It is somewhat different from what you want, as I also need something that has measurable use of time and memory, and is provably able to run fast.
Then I would go for Turing machines, Lambda calculus, or similar. These languages are very simple, and can easily handle input and output.
Even simpler languages, like cellular automaton rule 110 or combinatory logic, might be better, but those are quite difficult to get to handle input and output correctly.
The reason simple languages, or universal machines, should be better, is that the upper bound on the error in estimating probabilities is 2 to the power of the complexity of the program simulating one language in another, according to algorithmic information theory.
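That is the invariance theorem; in standard notation (my formalization, not a quote): if a program of length $c$ in language $L_1$ simulates $L_2$, then for every string $x$,

$$K_{L_1}(x) \le K_{L_2}(x) + c, \qquad \text{so} \qquad \frac{2^{-K_{L_1}(x)}}{2^{-K_{L_2}(x)}} \ge 2^{-c},$$

i.e. the algorithmic probability $L_1$ assigns is never more than a factor $2^c$ smaller than the one $L_2$ assigns.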
That depends on what you mean by "best".
Is speed of calculation important? What about suitability for humans? I guess you want one where complexities are as small as possible.
Given 2 languages, L1 & L2, and their complexity measures, K1 & K2.
If K1(L2) < K2(L1), then I take that as a sign that L1 is better for use in the context of Ockham's razor. It is also a sign that L1 is more complex than L2, but that effect can be removed by doing lots of comparisons like that, so the unnecessarily complex languages lose against those that are actually simpler.
It is universal, because every possible sequence is generated.
It is universal, because it is based on universally recursive functions.
It is universal, because it uses a universal computer.
People who know algorithmic complexity know that it is about probability measures, spaces, universality, etc. You apparently did not, and nitpicked instead.
You are wrong because I did specify a probability space.
The probability space I specified was one where the sample space was the set of all outputs of all programs for some universal computer, and the measure was one from the book I mentioned. One could for instance choose the Solomonoff measure, from 4.4.3.
From your writings I conclude that it is quite likely that you are neither aware of the concept nor understanding what I write, while believing you do.
I guess the point is to model artificial intelligences, of which we know almost nothing, so the models and problems need the robustness of logic and simplicity.
That's why they are brittle when used for modeling people.
O.K.
One wants a universal probability space where one can find the probability of any event. This is possible:
One way of making such a space is to take all recursive functions of some universal computer, run them, and store the output, resulting in a universal probability space, because every possible set of events will be there, as the results of infinitely many recursive functions, or programs as they are called. The probabilities correspond to the density of these outputs, these events.
A counterargument is that it is too dependent on the actual universal computer chosen.
The technically precise reference was this part:
"This is algorithmic information theory,.."
But if you claim my first line was too obfuscated, I can agree.
Kim Øyhus
All recursive probability spaces converge to the same probabilities, as the information increases.
Not that those people making up probabilities know anything about that.
If you want a universal probability space, just take some universal computer, run all programs on it, and keep those that output event A. Then you can see how many of those also output event B, and thus you can get p(B|A), whatever A and B are.
This is algorithmic information theory, and should be known by any black-belt Bayesian.
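A toy sketch of that recipe (a made-up four-instruction stack language of my own stands in for a real universal computer, so this only illustrates the counting, not true algorithmic probability with a length-weighted measure such as Solomonoff's):

```python
# Enumerate all programs up to a length bound, keep the ones whose
# output satisfies A, and among those count the ones satisfying B.
from itertools import product

OPS = "01+d"  # push 0, push 1, add top two, duplicate top

def run(prog):
    """Interpret a program; the output event is the final stack."""
    stack = []
    for op in prog:
        if op == "0":
            stack.append(0)
        elif op == "1":
            stack.append(1)
        elif op == "+" and len(stack) >= 2:
            stack.append(stack.pop() + stack.pop())
        elif op == "d" and stack:
            stack.append(stack[-1])
    return tuple(stack)

def p_b_given_a(event_a, event_b, max_len=8):
    """p(B|A) as the fraction of A-outputting programs that also output B."""
    n_a = n_ab = 0
    for length in range(1, max_len + 1):
        for prog in product(OPS, repeat=length):
            out = run(prog)
            if event_a(out):
                n_a += 1
                n_ab += event_b(out)
    return n_ab / n_a

# Example: p(some stack value >= 2 | stack is nonempty)
print(p_b_given_a(lambda s: len(s) > 0, lambda s: max(s) >= 2))
```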
Kim Øyhus
Very interesting article that.
However, evolution is able to test and spread many genes at the same time, thus achieving higher efficiency than the article suggests. Sort of like spread spectrum radio.
I am quite certain its speed is lower than that of some statistical methods, but not by much. I would guess something like a constant factor slower, comparing the time to double a gene's concentration against the time for Gaussian statistics to reach one standard deviation of certainty about the goodness of the gene.
Random binary natural testing of a gene is less accurate than statistics, but it avoids putti...
What is your evidence for this assertion?
In my analysis, evolution by sexual reproduction can be very good at rationality, collecting about 1 bit of information per generation per individual, because an individual can be naturally selected, or die, only once.
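The 1-bit figure follows from the entropy of a binary outcome (standard information theory, my own spelling-out of the argument): an individual either survives selection or does not, and a binary event with survival probability $p$ carries at most

$$H(p) = -p\log_2 p - (1-p)\log_2(1-p) \le 1 \text{ bit},$$

with the maximum reached at $p = 1/2$.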
The factors limiting the learning speed of evolution are the high cost of this information, namely death, and the fact that this is the only kind of data going into the system. And the value to be optimized is avoidance of death, which also avoids data gathering. And this optimization function is almost impossible ...
All control systems DO have models of what they are controlling. However, the models are typically VERY simple.
A good principle for constructing control systems is: given that I have a very simple model, how do I optimize it?
The models I learned about in cybernetics were all linear, implemented as matrices, resistors and capacitors, or discrete time step filters. The most important thing was to show that the models and reality together did not result in amplification of oscillations. Then one made sure that the system actually did some controlling...
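A minimal sketch of the kind of simple linear model meant here (toy numbers of my own, not from any real plant): a first-order plant under proportional control, with the stability check that oscillations are not amplified.

```python
# First-order linear plant x[k+1] = a*x[k] + b*u[k] under proportional
# control u = kp * (setpoint - x). All constants are assumed toy values.
a, b = 0.9, 0.5   # plant pole and input gain
kp = 1.2          # proportional feedback gain
setpoint = 1.0

# Closed loop: x[k+1] = (a - b*kp) * x[k] + b*kp*setpoint.
pole = a - b * kp
assert abs(pole) < 1, "oscillations would be amplified"

x = 0.0
for k in range(15):
    u = kp * (setpoint - x)  # controller
    x = a * x + b * u        # plant (model and "reality" coincide here)
    print(k, round(x, 4))
```

The single assert is the discrete-time version of checking that model plus reality do not amplify oscillations; note the steady-state error typical of pure proportional control.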
Verbal probabilities are typically impossible to assign, because the priors are unknown and important.
However, relative probabilities and similar can often be given useful estimates, or limits.
For instance: Seeing a cat is more likely than seeing a black cat because black cats are a subset of cats.
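In symbols (elementary probability, spelled out for completeness): since black cats are a subset of cats,

$$P(\text{black cat}) = P(\text{cat})\,P(\text{black} \mid \text{cat}) \le P(\text{cat}).$$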
Stuff like this is the reason that pure probability calculations are not sufficient for general intelligence.
Probability distributions, however, seem to me to be sufficient. This cat example cuts the distribution in two.
Kim Øyhus