Right, but the problem with this counterexample is that it isn't actually possible. A counterexample that could actually occur would be much more convincing.
Personally, if a GLUT could cure cancer, cure aging, prove mind-blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe... I'd be happy considering it to be extremely intelligent.
Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn't know what to put in the table. But if we're in infinite theory land, then why not just run AIXI on your infinite computer?
Back in reality, the lookup table approach isn't going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1000000 entries. The observable universe only has about 10^80 particles.
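To make the scale concrete, here's a quick back-of-the-envelope check (assuming a 1-megapixel frame with 256 intensity levels per pixel):

```python
import math

# One frame: 1,000,000 pixels, each taking one of 256 values,
# so the table needs an entry for every possible frame.
log10_entries = 1_000_000 * math.log10(256)
print(f"Table size: ~10^{log10_entries:.0f} entries")  # ~10^2408240
print("Particles in observable universe: ~10^80")
```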
Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.
This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.
If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, my improved AI would actually be less intelligent? What if I repeat this process a number of times? I could end up with an AI that has enough optimisation power to take over the world, and yet its intelligence would be extremely low.
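As a toy illustration (my numbers, purely hypothetical) of what happens if intelligence is defined as optimisation power divided by resources:

```python
# Each upgrade doubles optimisation power but triples resources.
power, resources = 1, 1
for step in range(1, 11):
    power *= 2
    resources *= 3
    # "intelligence" under the power-per-resource definition
    print(f"step {step}: power={power}, intelligence={power / resources:.5f}")

# After 10 upgrades, power has grown ~1000x, yet measured "intelligence"
# has fallen to under 2% of its starting value.
```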
It's not clear to me from this description whether the SI predictor is also conditioned. Anyway, if the universal prior is not conditioned, then the convergence is easy, as the uniform distribution has very low complexity. If it is conditioned, then you will have no doubt observed many processes well modelled by a uniform distribution during your life -- flipping a coin is a good example. So the estimated probability of encountering a uniform distribution in a new situation won't be all that low.
Indeed, with so much data SI will have built a model of langu...
This is a tad confused.
A very simple measure on the binary strings is the uniform measure, and so Solomonoff Induction will converge on it with high probability. This is easiest to think about from the Solomonoff-Levin definition of the universal prior, where you take a mixture distribution of the measures weighted according to their complexity -- thus a simple thing like the uniform measure gets a very high prior probability under the universal distribution. This is different from the sequence of bits itself being complex due to the bits being random. The confusing t...
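The mixture idea can be sketched with a toy example (an assumption on my part: a finite class of Bernoulli measures with made-up complexity values, standing in for the full class of computable measures):

```python
import random

# Model class: Bernoulli(p) measures, each with an assigned "complexity" K.
# Prior weight is 2^-K, so the simple uniform measure (p=0.5) starts high.
complexity = {0.5: 1, 0.1: 5, 0.9: 5, 0.25: 7, 0.75: 7}
posterior = {p: 2.0 ** -k for p, k in complexity.items()}

random.seed(0)
data = [random.randint(0, 1) for _ in range(200)]  # fair-coin bits

for bit in data:  # Bayesian update of the mixture weights
    for p in posterior:
        posterior[p] *= p if bit else 1 - p
total = sum(posterior.values())
posterior = {p: w / total for p, w in posterior.items()}

print(posterior)  # nearly all mass ends up on the uniform measure p=0.5
```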
The way it works for me is this:
First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result.
The next day or so, fear starts to creep in and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What motivates me is knowing that if I show somebody this half-baked proof, it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus...
Glial cells are actually about 1:1. A few years ago a researcher wanted to cite something to back up the usual 9:1 figure, but after asking everybody for several months, nobody knew where the figure came from. So they did a study themselves, counted, and found it to be 1:1. I don't have the reference on me; it was a talk I went to about a year ago (I work at a neuroscience research institute).
I have asked a number of neuroscientists about the importance of glia and have always received the same answer: the evidence that they are functionally import...
Here's my method: (+8 for me)
I have a 45 minute sand glass timer and a simple abacus on my desk. Each row on the abacus corresponds to one type of activity that I could be doing, e.g. writing, studying, coding, emails and surfing, ... First, I decide what type of activity I'd like to do and then start the 45 minute sand glass. I then do that kind of activity until it ends, at which point I count it on my abacus and take at least a 5 minute break. There are no rules about what I have to do; I do whatever I want. But I always do it in focused 45 minut...
A whole community of rationalists and nobody has noticed that his elementary math is wrong?
1.9 gigaFLOPS doubled 8 times is around 500 gigaFLOPS, not 500 teraFLOPS.
Big difference, and one that trashes his conclusion.
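The arithmetic, for anyone who wants to check it themselves:

```python
base_gflops = 1.9
result = base_gflops * 2 ** 8  # doubled 8 times, i.e. multiplied by 256
print(f"{result} gigaFLOPS")   # 486.4 GFLOPS, roughly 0.5 teraFLOPS --
                               # three orders of magnitude short of 500 TFLOPS
```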
There is nothing about being a rationalist that says that you can't believe in God. I think the key point of rationality is to believe in the world as it is rather than as you might imagine it to be, which is to say that you believe in the existence of things due to the weight of evidence.
Ask yourself: do you want to believe in things due to evidence?
If the answer is no, then you have no right calling yourself a "wannabe rationalist" because, quite simply, you don't want to hold rational beliefs.
If the answer is yes, then put this into practice....
The last Society for Neuroscience conference had 35,000 people attend. There must be at least 100 research papers coming out per week. Neuroscience is full of "known things" that most neuroscientists don't know about unless it's their particular area.
This professor, who has no doubt debated game theory with many other professors and countless students making all kinds of objections, gets three paragraphs in this article to make a point. Based on this, you figure that the very simple objection that you're making is news to him?
One thing that concerns me about LW is that it often seems to operate in a vacuum, disconnected from mainstream discourse.
Or how about Ray Solomonoff? He doesn't live that far away (Boston I believe) and still gives talks from time to time.
One of my favourite posts here in a while. When talking with theists I find it helpful to clarify that I'm not so much against their God; rather, my core problem is that I have different epistemological standards from theirs. Not only does this take some of the emotive heat out of the conversation, but I also think it's the point where science/rationalism/atheism etc. is at its strongest and their system is at its weakest.
With respect to untheistic society, I remember when a guy I knew shifted to New Zealand from the US and was disappointed to find that relatively...
Sure, there will be a great many factors at work here in the real world that our model does not include. The challenge is to come up with a manageable collection of principles that can be observed and measured across a wide range of situations and that appear to explain the observed behaviour. For this purpose "can't be bothered" isn't a very useful principle. What we really want to know is why they can't be bothered.
For example, I know people who can be bothered going to a specific shop and queueing in line every week to get a lottery tick...
In case it hasn't already been posted by somebody, here's a nice talk about irrational behaviour and loss aversion in particular.
Yeah... I know :-\ There are various political forces at work within this community that I try to stay clear of.
One group within the community calls it "Algorithmic Information Theory" or AIT, and another "Kolmogorov complexity". I talked to Hutter when he was writing that article for Scholarpedia that you cite. He decided to use the more neutral term "algorithmic complexity" so as not to take sides on this issue. Unfortunately, "algorithmic complexity" is more typically taken to mean "computational complexity theory". For example, if you search for it on Wikipedia you will get redirected. I know, it's all kind of ridiculous and confusing...
I agree that it would be dangerous.
What I'm arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It's not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as definitions given by psychologists (as Tim Tyler pointed out above).