A thing already known to computer scientists, but still useful to remember: as per Kleene's normal form theorem, a universal Turing machine is a primitive recursive function.
Meaning that if an angel gives you the encoding of a program, you only need recursion, and not unbounded search, to run it.
The claim as stated is false. The standard notion of a UTM takes a representation of a program, and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite what is usually meant by a universal machine.
I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.
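The shape of the theorem being argued about can be sketched in a few lines. This is a toy illustration, not a real UTM: the "program" is a hypothetical step function on states, `step_bounded` plays the role of the primitive recursive part (only bounded loops), and `run` is the single unbounded mu-search over the number of steps.

```python
def step_bounded(program, x, n_steps):
    """Run `program` for exactly n_steps steps on input x.
    The loop is bounded, so this corresponds to the primitive
    recursive part (the T/U side of the normal form theorem)."""
    state = ("running", x)
    for _ in range(n_steps):
        if state[0] == "halted":
            return state
        state = program(state)
    return state

def run(program, x):
    """The universal interpreter: ONE unbounded search (the mu
    operator) over step counts; everything inside is bounded."""
    n = 0
    while True:
        state = step_bounded(program, x, n)
        if state[0] == "halted":
            return state[1]
        n += 1

# Toy program: add 3 to the input, one increment per step, then halt.
def add_three(state):
    tag, v = state
    if isinstance(v, int):
        v = (v, 0)          # (current value, increments done so far)
    val, k = v
    if k == 3:
        return ("halted", val)
    return ("running", (val + 1, k + 1))

print(run(add_three, 5))    # -> 8
```

The point of contention is visible here: `step_bounded` is harmless, but if the angel only hands you `add_three` and not a step count, you still need the `while True` in `run`.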
But in general, the proof is in the pudding: the theory works in many practical cases.
Show me. We are talking about real life ("works", "practical"), right?
Note that in finance where miscalculating risk can be a really expensive mistake that you pay for with real money, no one treats risk as a trivial consequence of a concave utility function.
But at that point, you have to start thinking whether the theory is wrong, or the humans are.
You might. It should take you about half a second to decide that the theory is wrong. If it takes you longer, you need to fix your thinking :-P
I'm not sure what you have in mind for treatment of risk in finance. People will be concerned about risk in the sense that they compute a probability distribution of the possible future outcomes of their portfolio, and try to optimize it to limit possible losses. Some institutional actors, like banks, have to compute a "value at risk" measure (the loss of value in the portfolio in the bottom 5th percentile), and have to put up collateral based on that.
But those are all things that happen before a utility computation; they are all consistent with valuing a portfolio based on the average of some utility function of its monetary value. Finance textbooks do not talk much about this, they just assume that investors have some preference about expected returns and variance in returns.
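The value-at-risk measure mentioned above is just a percentile of the profit-and-loss distribution. Here is a minimal sketch on simulated data; the portfolio size and the normal return parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical portfolio: simulate 100,000 one-day returns on $1M.
portfolio_value = 1_000_000
returns = rng.normal(loc=0.0005, scale=0.01, size=100_000)  # made-up mu/sigma
pnl = portfolio_value * returns

# 95% value at risk: the loss at the bottom 5th percentile of outcomes.
var_95 = -np.percentile(pnl, 5)
print(f"1-day 95% VaR: ${var_95:,.0f}")
```

Note this happens entirely on the distribution of monetary outcomes, before any utility function enters the picture, which is the point being made.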
Why do you think so?
It is very standard in economics, game theory, etc, to model risk aversion as a concave utility function. If you want some motivation for why, then e.g. the Von Neumann–Morgenstern utility theorem shows that a suitably idealized agent will maximize utility. But in general, the proof is in the pudding: the theory works in many practical cases.
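The concave-utility model of risk aversion fits in a two-line computation: compare a sure $50 against a 50/50 gamble between $0 and $100. Any concave utility works; log utility and a starting wealth of $1000 are arbitrary choices here.

```python
import math

u = lambda w: math.log(w)   # a concave utility function; log is the classic choice

wealth = 1000                # hypothetical starting wealth
eu_sure   = u(wealth + 50)
eu_gamble = 0.5 * u(wealth) + 0.5 * u(wealth + 100)

# By Jensen's inequality the sure thing has higher expected utility,
# even though both options have the same expected monetary value.
print(eu_sure > eu_gamble)   # True
```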
Of course, if you want to study exactly how humans make decisions, then at some point this will break down. E.g. the decision process predicted by Prospect Theory is different from maximizing utility. So in general, the exact flavour of risk averseness exhibited by humans seems different from what Neumann-Morgenstern would predict.
But at that point, you have to start thinking whether the theory is wrong, or the humans are. :)
Not to mention all that tax evasion never actually got resolved.
She eventually gives him the carrot pen so he can delete the recording, no?
[Survey Taken Thread]
By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.
Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.
How many of you guys keep a journal? How many of you would like to? What do you specifically write down?
I feel like it should help, but I have trouble coming up with a structure that would: opening up a journal with separate sections for work done (and TODOs for the future, and how these two diverged), exercise, and others seems more useful than one with a massed 'Dear Diary' format.
I write down one line (about 80 characters) about what things I did each day. Originally I intended to write down "accomplishments" in order to incentivise myself into being more accomplished, but it has since morphed into also being a record of notable things that happened, and a lot of free-form whining over how bad certain days are. It's kind of nice to be able to go back and figure out when exactly something in the past happened, or generally reminisce about what was going on some years ago.
(the following isn't off-topic, I promise:)
Attention, people who have a lot of free time and want to found the next reddit:
When a site user upvotes and downvotes things, you use that data to categorize that user's preferences (you'll be doing a very sparse SVD sort of operation under the hood). Their subsequent votes can be decomposed into expressions of the most common preference vectors, and their browsing can then be sorted by decomposed-votes-with-personalized-weightings.
This will make you a lot of friends (people who want to read ramblings about philosophy won't be inundated with cute kitten pictures and vice versa, even if they use the same site), make you a lot of money (better-targeted advertising pays better), solve the problem above (people who like and people who hate trollish jokes won't need to come to a consensus), and solve the problem way above ("predisposition towards rationalism" will probably be one of the top ten or twenty principal components to fall out of your SVD).
It will also create new problems (how much easier will it be to hide in a bubble of people who share your political opinions? how do you filter out redundancy?) but those can be fixed in subsequent steps.
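The scheme above can be prototyped in a few lines. This toy version uses a tiny dense vote matrix and plain SVD; the vote data and the choice of two components are illustrative (a real site would use a sparse/truncated decomposition such as `scipy.sparse.linalg.svds` over millions of votes).

```python
import numpy as np

# Rows are users, columns are posts; entries are +1/-1/0 votes.
votes = np.array([
    [ 1,  1, -1, -1,  0],   # likes philosophy posts, dislikes kitten posts
    [ 1,  0, -1,  0,  1],
    [-1, -1,  1,  1,  0],   # the opposite taste
    [ 0, -1,  1,  1, -1],
])

U, s, Vt = np.linalg.svd(votes, full_matrices=False)
k = 2                               # keep the top-k "preference components"
user_prefs = U[:, :k] * s[:k]       # each user's weights on the components
item_coords = Vt[:k, :]             # each post's position in preference space

# Personalized score for each post, for user 0:
scores = user_prefs[0] @ item_coords
ranking = np.argsort(-scores)       # posts sorted best-first for this user
```

The decomposition does the "common preference vectors" part automatically: the dominant component here is roughly the philosophy-vs-kittens axis, and each user's browsing gets weighted by where they sit on it.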
For now it's just embarrassing that modern forums don't have either the same level of fine-grained preferences that you could find on Slashdot 15 years ago ("Funny" vs "Informative" etc) or the killfile capabilities you could find in Usenet readers 25 years ago.
There is Omnilibrium, which does the vote SVD-ing thing.
Carey's list of publications doesn't look particularly bullshitty.
I looked at a random paper called "The History of Ice: How Glaciers Became an Endangered Species" and I was like: well, at least he studies something about glaciers per se, i.e. how they became endangered.
Then I clicked at the abstract and saw this:
to understand why glaciers are so inexorably tied to global warming and why people lament the loss of ice, it is necessary to look beyond climate science and glacier melting—to turn additionally to culture, history, and power relations. Probing historical views of glaciers demonstrates that the recent emergence of an “endangered glacier” narrative stemmed from various glacier perspectives dating to the eighteenth and nineteenth centuries: glaciers as menace, scientific laboratories, sublime scenery, recreation sites, places to explore and conquer, and symbols of wilderness. By encompassing so many diverse meanings, glacier and global warming discourse can thus offer a platform to implement historical ideologies about nature, science, imperialism, race, recreation, wilderness, and global power dynamics.
So again, it's not about glaciers per se, but about, uhm, the cultural symbolism of glaciers.
So it's still the same thing. When talking about "glaciology", I expect something like "here are the physical processes by which glaciers form, and how they melt", but instead the guy produces something like "here is what glaciers mean in fairy tales, and here is how glaciers are compared to penises by feminists". The difference is that to write the former, you actually have to study the glaciers, while to write the latter, you only have to collect stuff people said about glaciers.
Technically, "collecting stuff people said about something" could be called science, but then it's not a subset of glaciology but rather a subset of culturology or whatever. And even in that case it should be done more scientifically, i.e. include some numbers. For example, if we are really collecting "stuff people said about glaciers", I would like to see data about how many people believe that glaciers symbolize penises, et cetera. Without those data, the research is worthless even as a subset of culturology.
He is a historian, studying history of science. That subject is exactly about studying what people (scientists) are saying.
The Kolmogorov complexity of AGI is really low. You just specify a measure of intelligence, like the universal intelligence test. Then you specify a program which runs this test, on every possible program, testing them one at a time for some number of steps. Then it returns the best program found after some huge number of steps.
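The recipe being described can be rendered as a toy program. Everything here is a stand-in: the real universal intelligence test is not computable, and real enumeration would range over a UTM's program encodings, whereas this hypothetical version enumerates a tiny family of linear rules and scores them against a fixed environment.

```python
def enumerate_programs(max_code):
    # Stand-in program space: integer `code` encodes the rule x -> a*x + b.
    for code in range(max_code):
        a, b = divmod(code, 10)
        yield code, (lambda x, a=a, b=b: a * x + b)

def intelligence_test(program):
    # Stand-in "intelligence test": how well the program predicts a fixed
    # environment, here the sequence f(x) = 3x + 2. Higher is better.
    return -sum(abs(program(x) - (3 * x + 2)) for x in range(5))

def best_program(max_code):
    # The short "AGI" specification: run the test on every program
    # in the (bounded) space and return the best one found.
    return max(enumerate_programs(max_code), key=lambda cf: intelligence_test(cf[1]))

code, prog = best_program(100)
# code 32 decodes to a=3, b=2: the rule that matches the environment exactly
```

The short description length comes from the search loop being trivial; all the difficulty is hidden inside specifying `intelligence_test`, which is exactly where the replies below push back.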
I think Shane Legg's universal intelligence measure itself involves Kolmogorov complexity, so it's not computable and will not work here. (Also, it involves a function V, encoding our values; if human values are irreducibly complex, that should add a bunch of bits.)
In general, I think this approach seems too good to be true? An intelligent agent is one which performs well in its environment. But don't the "no free lunch" theorems show that you need to know what the environment is like in order to do that? Intuitively, that's what should cause the Kolmogorov complexity to go up.
Nope. The standard notion of a UTM takes the representation of a program and an input, and interprets it. With the caveat that the computation on those representations terminates!
What you say, that the number given to the UTM is the number of steps for which the machine must run, is not what is asserted by Kleene's theorem, which is about functions of natural numbers: the T relation checks, primitive recursively, the encoding of a program and of an input, which is then fed to the universal interpreter.
You do not tell a Turing machine for how many steps it must run, because once a function is defined on an input, it will run and then stop. The fact that some partial recursive function is undefined for some input is accounted for by the unbounded search, but this term is not part of the U or the T function.
The Kleene equivalence needs, as you say, unbounded search, but if T holds, it means that x is the encoding of e and n (a program and its input), and that the function will terminate on that input. No need to say for how many steps to run the function.
Indeed, this is true of, and evident in, any programming language: you give the interpreter the program and the input, not the number of steps.
See wikipedia. The point is that T does not just take the input n to the program to be run, it takes an argument x which encodes the entire list of steps the program e would execute on that input. In particular, the length of the list x is the number of steps. That's why T can be primitive recursive.
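The distinction can be made concrete: T does not *run* anything, it is handed a proposed complete computation history x and merely *checks* it, in loops bounded by the length of x, which is what makes it primitive recursive. A toy sketch, with a hypothetical step function standing in for the machine and Python lists standing in for the history encoding:

```python
def T(step, input_state, trace):
    """True iff `trace` is a complete halting computation of `step`
    on `input_state`: it starts correctly, each state follows from
    the previous one, and the final state is halted. Note that every
    loop here is bounded by len(trace) -- nothing is simulated open-endedly."""
    if not trace or trace[0] != input_state:
        return False
    for prev, nxt in zip(trace, trace[1:]):   # bounded by len(trace)
        if step(prev) != nxt:
            return False
    return trace[-1][0] == "halted"

def U(trace):
    # U just reads the output off the end of a valid trace.
    return trace[-1][1]

# Toy program: count down to zero, then halt with output 42.
def step(state):
    tag, n = state
    return ("halted", 42) if n == 0 else ("running", n - 1)

good_trace = [("running", 2), ("running", 1), ("running", 0), ("halted", 42)]
print(T(step, ("running", 2), good_trace), U(good_trace))
```

The length of `good_trace` *is* the step count, smuggled in as part of the argument x; finding a trace for which T holds is where the single unbounded search comes back in.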