All of mathemajician's Comments + Replies

I agree that it would be dangerous.

What I'm arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It's not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as definitions given by psychologists (as Tim Tyler pointed out above).

Right, but the problem with this counterexample is that it isn't actually possible. A counterexample that could occur would be much more convincing.

Personally, if a GLUT could cure cancer, cure aging, prove mind-blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe... I'd be happy considering it to be extremely intelligent.

3orthonormal
It's infeasible within our physics, but it's possible for (say) our world to be a simulation within a universe of vaster computing power, and to have a GLUT from that world interact with our simulation. I'd say that such a GLUT was extremely powerful, but (once I found out what it really was) I wouldn't call it intelligent, though I'd expect whatever process produced it (e.g. coded in all of the theorem-proof and problem-solution pairs) to be a different and more intelligent sort of process. That is, a GLUT is the optimizer equivalent of a tortoise with the world on its back: it needs to be supported on something, and it would be highly unlikely to be tortoises all the way down.

Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn't know what to put in the table. But if we're in infinite theory land, then why not just run AIXI on your infinite computer?

Back in reality, the lookup table approach isn't going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1000000 entries. The observable universe only has about 10^80 particles.
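
(A rough sanity check of that figure, as a minimal Python sketch; the one-million-pixel, 256-values-per-pixel greyscale frame is just an illustrative assumption.)

```python
import math

# Assumed illustrative numbers: one greyscale frame of 1,000,000 pixels,
# each pixel taking one of 256 values.
pixels_per_frame = 1_000_000
values_per_pixel = 256

# A giant lookup table keyed on a single frame needs one entry per possible frame.
log10_entries = pixels_per_frame * math.log10(values_per_pixel)
print(f"table entries ~ 10^{log10_entries:.0f}")   # ~ 10^2408240

# Compare with the roughly 10^80 particles in the observable universe.
print(f"orders of magnitude beyond the particle count: {log10_entries - 80:.0f}")
```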

2orthonormal
You misunderstand me. I'm pointing out that a GLUT is an example of something with (potentially) immense optimization power, but whose use of computational resources is ridiculously prodigal, and which we might hesitate to call truly intelligent. This is evidence that our concept of intelligence does in fact include some notion of efficiency, even if people don't think of this aspect without prompting.

Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.

This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.

0faul_sname
If intelligence=optimization power/resources used, this might well be the case. Nonetheless, this "intelligence implosion" would still involve entities with increasing resources and thus increasing optimization power. A stupid agent with a lot of optimization power (Clippy) is still dangerous.

If I had a moderately powerful AI and figured out that I could double its optimisation power by tripling its resources, my improved AI would actually be less intelligent? What if I repeat this process a number of times? I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
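
(A toy sketch of this scenario under the intelligence = optimisation power / resources definition; the starting values and the 2x-power-for-3x-resources trade are just the hypothetical numbers above.)

```python
# Toy numbers: start with 1 unit of optimisation power per 1 unit of resources.
power, resources = 1.0, 1.0

for step in range(10):
    # Each upgrade doubles optimisation power but triples resource use,
    # as in the hypothetical above.
    power *= 2
    resources *= 3
    intelligence = power / resources  # the definition being questioned
    print(f"step {step + 1}: power={power:.0f}, "
          f"resources={resources:.0f}, intelligence={intelligence:.4f}")

# Optimisation power grows without bound (2^n) while "intelligence"
# shrinks towards zero as (2/3)^n.
```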

0benelliott
We don't actually have units of 'resources' or optimization power, but I think the idea would be that any non-stupid agent should at least triple its optimization power when you triple its resources, and possibly more. As a general rule, if I have three times as much stuff as I used to have, I can at the very least do what I was already doing but three times simultaneously, and hopefully pool my resources and do something even better.

It's not clear to me from this description whether the SI predictor is also conditioned. Anyway, if the universal prior is not conditioned, then the convergence is easy, as the uniform distribution has very low complexity. If it is conditioned, then you will no doubt have observed many processes over your life that are well modelled by a uniform distribution; flipping a coin is a good example. So the estimated probability of encountering a uniform distribution in a new situation won't be all that low.

Indeed, with so much data SI will have built a model of langu... (read more)

This is a tad confused.

A very simple measure on the binary strings is the uniform measure, and so Solomonoff Induction will converge on it with high probability. This is easiest to think about from the Solomonoff-Levin definition of the universal prior, where you take a mixture distribution of the measures weighted according to their complexity; thus a simple thing like the uniform measure gets a very high prior probability under the universal distribution. This is different from the sequence of bits itself being complex due to the bits being random. The confusing t... (read more)
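
(For reference, the mixture being described can be written roughly as follows; this is the standard Solomonoff-Levin form, with notation not taken from the comment itself.)

```latex
% Solomonoff-Levin universal prior: a mixture over a class M of
% (semi)computable semimeasures, each weighted by its complexity K(nu).
\xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x)

% The uniform measure nu(x) = 2^{-\ell(x)} has a very short description,
% so K(nu) is small and its weight 2^{-K(nu)} in the mixture is large.
```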

0cousin_it
Can you explain why? What's the result saying the Solomonoff distribution "as a whole" often converges on uniform?

The way it works for me is this:

First I come up with a sketch of the proof and try to formalise it and find holes in it. This is fairly creative and free and fun. After a while I go away feeling great that I might have proven the result.

The next day or so, fear starts to creep in and I go back to the proof with a fresh mind and try to break it in as many ways as possible. What is motivating me is that I know that if I show somebody this half-baked proof it's quite likely that they will point out a major flaw in it. That would be really embarrassing. Thus... (read more)

Glial cells are actually about 1:1 with neurons. A few years ago a researcher wanted to cite something to back up the usual 9:1 figure, but after asking everybody for several months nobody knew where the figure came from. So they did a study themselves, did a count, and found it to be about 1:1. I don't have the reference on me; it was a talk I went to about a year ago (I work at a neuroscience research institute).

I have asked a number of neuroscientists about the importance of glia and have always received the same answer: the evidence that they are functionally import... (read more)

1Alan
This new finding may be correct, but the old dictum about "nullius in verba" still makes sense.

Here's my method: (+8 for me)

I have a 45-minute sandglass timer and a simple abacus on my desk. Each row on the abacus corresponds to one type of activity that I could be doing, e.g. writing, studying, coding, emails and surfing,... First, I decide what type of activity I'd like to do and then start the 45-minute sandglass. I then do that kind of activity until it ends, at which point I count it on my abacus and have at least a 5-minute break. There are no rules about what I have to do; I do whatever I want. But I always do it in focused 45 minut... (read more)

A whole community of rationalists and nobody has noticed that his elementary math is wrong?

1.9 gigaFLOPS doubled 8 times is around 500 gigaFLOPS, not 500 teraFLOPS.

Big difference, and one that trashes his conclusion.
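
(Spelling the arithmetic out, using the figures quoted above.)

```python
start_gflops = 1.9   # starting figure quoted in the comment above
doublings = 8

result_gflops = start_gflops * 2 ** doublings
print(result_gflops)          # 486.4 -> roughly 500 gigaFLOPS
print(result_gflops / 1000)   # 0.4864 teraFLOPS, nowhere near 500 teraFLOPS
```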

0PhilGoetz
Whoa! You are right! I originally excluded the modern supercomputers, because they are a different category of beast - not really "computers", but networks of computers. Then I included Ranger when I saw its hardware costs were only $30 million.
2RobinZ
*headdesk* Yet another reason to favor exponential notation, I think.

There is nothing about being a rationalist that says that you can't believe in God. I think the key point of rationality is to believe in the world as it is rather than as you might imagine it to be, which is to say that you believe in the existence of things due to the weight of evidence.

Ask yourself: do you want to believe in things due to evidence?

If the answer is no, then you have no right calling yourself a "wannabe rationalist" because, quite simply, you don't want to hold rational beliefs.

If the answer is yes, then put this into practice.... (read more)

5gelisam
Uh-oh. I... I don't think I do want to believe in things due to evidence. Not deep down inside. When choosing my beliefs, I use a more important criterion than mere truth. I'd rather believe, quite simply, in whatever I need to believe in order to be happiest. I maximize utility, not truth. I am a huge fan of lesswrong, quoting it almost every day to increasingly annoyed friends and relatives, but I am not putting much of what I read there into practice, I must admit. I read it more for entertainment than enlightenment. And I take notes, for those rare cases in my life where truth actually is more important to my happiness than social conventions: when I encounter a real-world problem that I actually want to solve. This happens less often than you might think.

The last Society for Neuroscience conference had 35,000 people attend. There must be at least 100 research papers coming out per week. Neuroscience is full of "known things" that most neuroscientists don't know about unless it's their particular area.

This professor, who has no doubt debated game theory with many other professors and countless students making all kinds of objections, gets three paragraphs in this article to make a point. Based on this, you figure that the very simple objection that you're making is news to him?

One thing that concerns me about LW is that it often seems to operate in a vacuum, disconnected from mainstream discourse.

3Technologos
If he's really on top of the situation, why did he say the equilibrium was $17.50? Obviously this isn't an equilibrium, since anybody wins by defecting to $17.51. The equilibria are $19.99 and $20.00.
4billswift
This is one of the purest examples I have seen in a while of argument from authority, congratulations!

Yes, like I said, given Hamermesh's credentials, I didn't want to jump to any hasty conclusion.

However, professional game theorists do in fact get deceived by the supposed textbook correctness of their conclusions. That's why I linked the previous Regret of Rationality, which goes over why being "reasonable" and winning so sharply diverge. It's also part of why no one ever wins the "guess a third of the average guess" by guessing zero, despite its correctness proof.
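
(A minimal simulation sketch of that point; the distribution of human guesses here is purely an assumption for illustration, not data from any experiment.)

```python
import random

def winner(guesses):
    """Return the guess closest to one third of the average guess."""
    target = sum(guesses) / len(guesses) / 3
    return min(guesses, key=lambda g: abs(g - target))

random.seed(0)
zero_wins = 0
trials = 1000
for _ in range(trials):
    # Assumed population: most people guess somewhere in 10-60,
    # plus one game theorist who plays the Nash equilibrium of 0.
    crowd = [random.uniform(10, 60) for _ in range(30)]
    if winner(crowd + [0.0]) == 0.0:
        zero_wins += 1

print(f"the zero guess won {zero_wins}/{trials} games")  # typically 0
```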

If Hamermesh did have some understanding of the issues I raised, it w... (read more)

1Vladimir_Nesov
Seconded.

Or how about Ray Solomonoff? He doesn't live that far away (Boston I believe) and still gives talks from time to time.

One of my favourite posts here in a while. When talking with theists I find it helpful to clarify that I'm not so much against their God; rather, my core problem is that I have different epistemological standards to theirs. Not only does this take some of the emotive heat out of the conversation, but I also think it's the point where science/rationalism/atheism etc. is at its strongest and their system is very weak.

With respect to untheistic society, I remember when a guy I knew shifted to New Zealand from the US and was disappointed to find that relatively... (read more)

4curiousgeorge
I enjoyed this post very much, as I am interested in this topic. I am not a mathematician and only had entry-level college philosophy, so 80% of the discussion is over my head. I wanted to say that your comment that "most people aren't sufficiently interested in religion to be bothered with atheism" in New Zealand was very helpful. This may make no logical sense, but the meaning I took away is that if an individual is not sufficiently interested in religion (or feels he has sufficient reason to disbelieve Christianity, in my case) then that individual should not be bothered that he is an atheist. I know the point of these discussions is to discuss and not compliment, but I wanted to say that your comment helped me tremendously.

Sure, there will be a great many factors at work here in the real world that our model does not include. The challenge is to come up with a manageable collection of principles that can be observed and measured across a wide range of situations and that appears to explain the observed behaviour. For this purpose "can't be bothered" isn't a very useful principle. What we really want to know is why they can't be bothered.

For example, I know people who can be bothered going to a specific shop and queueing in line every week to get a lottery tick... (read more)

In case it hasn't already been posted by somebody, here's a nice talk about irrational behaviour and loss aversion in particular.

Yeah... I know :-\ There are various political forces at work within this community that I try to stay clear of.

One group within the community calls it "Algorithmic Information Theory" or AIT, and another "Kolmogorov complexity". I talked to Hutter when he was writing that article for Scholarpedia that you cite. He decided to use the more neutral term "algorithmic complexity" so as not to take sides on this issue. Unfortunately, "algorithmic complexity" is more typically taken as meaning "computational complexity theory". For example, if you search for it on Wikipedia you will get redirected. I know, it's all kind of ridiculous and confusing...

1steven0461
Changed title. To make things more confusing, this says algorithmic/Kolmogorov complexity is a subfield of AIT.

The title is a bit misleading. "Algorithmic complexity" is about the time and space resources required for computations (P != NP? etc...), whereas this web site seems to be more about "Algorithmic Information Theory", also known as "Kolmogorov Complexity Theory".

1steven0461
Are you sure? I always thought that was "computational complexity". This seems to agree with my usage, as does this.

I've never heard of this guy before, but yes, that's the same idea at work.

1PhilGoetz
He posts here all the time.

The following is the way I've approached the problem, and it seems to have worked for me. I've never tried to see if it would work with somebody else before, indeed I don't think I've ever explained this to anybody else before.

As I see it, these problems arise when what I think I should do and what I feel like doing are in conflict with each other. Going with what you feel is easy; it's sort of like the automatic mode of operation. Overriding this and acting on what you think takes effort, and the stronger your feelings are wanting to do something else... (read more)

3CrimeThinker
If you look into the abyss and see turtles all the way down, perhaps it's all just the same turtle and you're not thinking with portals, turtlebro.
3Matt_Simpson
When I was younger and started thinking about something I wanted to do, like, say, asking out a girl, I never performed the necessary backward induction. I would always envision what the future would be like when the girl said yes and how great it would be. It was so bad that I would try to plan, but seemed to be incapable of actually doing it because I spent so much time thinking about how great the future was going to be. In reality, I didn't know what to say and the girl said no, so I concluded that envisioning the future was a bug, and tried to fight it. I guess it's lucky that I never completely got rid of this tendency, and in the meantime I've become much better at planning (though doing any planning is an improvement, so this means little). Now it's time to make a conscious effort at cultivating these positive emotions and see what happens.
2Vladimir_Nesov
For me, it works for the things that I believe to continue being important. I can motivate myself to start intrinsically liking doing what I believe to be important to be obsessed over. This doesn't lock me in on specific subgoals, as when a subgoal is done, it's transformed into new opportunities for the continuation of a bigger project. But I'm afraid of starting to like things in which I don't intrinsically believe, which I only need to get out of my way before a deadline. From reading the above comment, I explicitly recognized that it's circular: on one hand, I have a power to control the low-level emotional response, to channel it where I deliberatively believe it should go. On the other hand, I'm afraid of the emotional response taking over, leading me away from the things I deliberatively prefer in the long term. Both effects must be real, but I expect one of them is stronger, if used with sufficient cunning. Polarizing the activities on those which I identify with, and those I apply only instrumentally creates segregated zones, in one of which deliberation channels emotion, and in the other of which deliberation is afraid of channeling emotion, as it's expected that emotion will win there. So, on a surface level, it looks like what I identify with is the area of activities where the emotion is channeled. But one step deeper, it turns out that it's actually an area deliberatively marked as being safe to channel emotions into. Emotional acceptance is the effect of the Escher-brained justification to emotionally segregate the activity, not the defining signature of the self. And thus, I resolve to try allowing motivation where I didn't before.
8Cyan
Here's a link to a YouTube video by pjeby describing a very similar technique.

Imagine a world where the only way to become really rich is to win the lottery (and everybody is either risk averse or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.

In other words, it is possible to have situations where being rational increases your expected performance, but at the same time reduces your chances of being a super achiever. Thus, the claim that "... (read more)
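
(A toy sketch of such a world; the ticket price, jackpot, odds, and population sizes are all made-up illustrative assumptions.)

```python
import random

random.seed(1)

# Assumed toy parameters: $1 tickets, a 1-in-200,000 chance of a $100,000
# jackpot, so the expected return per $1 ticket is $0.50.
TICKET_PRICE = 1.0
JACKPOT = 100_000
P_WIN = 1 / 200_000
TICKETS_PER_BUYER = 100

def lifetime_result(buys_tickets: bool) -> float:
    """Net lifetime lottery outcome for one person."""
    if not buys_tickets:
        return 0.0                              # rational people abstain
    wins = sum(random.random() < P_WIN for _ in range(TICKETS_PER_BUYER))
    return wins * JACKPOT - TICKETS_PER_BUYER * TICKET_PRICE

# Half the toy population is "irrational" and buys tickets, half abstains.
outcomes = [(i % 2 == 0, lifetime_result(i % 2 == 0)) for i in range(20_000)]

print("expected return per $1 ticket:", P_WIN * JACKPOT)   # 0.5 < 1
richest = max(outcomes, key=lambda t: t[1])
print("richest person bought tickets:", richest[0], "net:", richest[1])
```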

8NicoleTedesco
It can be rational to accept the responsibility of high risk/high reward behavior, on specific occasions and under specific circumstances. The trick is recognizing those occasions and circumstances and also recognizing when your mind is fooling you into believing "THIS TIME IS DIFFERENT". A rational agent is Warren Buffet. An irrational agent is Ralph Cramden. Both accept high risk/high reward situations. One is rational about that responsibility. The other is not. Also, in a world of both rational and irrational agents, in a world where the rational agent must depend upon the irrational, it is sometimes rational to think irrationally!

There's an extent to which we live in such a world. Many people believe you can achieve your wildest dreams if you only try hard enough, because by golly, all those people on the TV did it!

The most effective way for you to internally understand the world and make good decisions is to be super rational. However, the most effective way to get other people to aid you on your quest for success is to practice the dark arts. The degree to which the latter matters is determined by the mean rationality of the people you need to draw support from, and how important this support is for your particular ambitions.

I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like, I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self-described "rationalists" do this habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a... (read more)

I know about the birthday effect and similar. (I do math and stats for a living.) The problem is that when I try to estimate the probability of having these events happen I get probabilities that are too small.

Well, I'm getting my karma eaten so I'll return to being quiet about these events. :-)

1Strange7
http://www.unicornjelly.com/oldforums/viewtopic.php?p=135082

No, but if those thousand people don't know whether they are part of the thousand or not (after all, in any normal situation I wouldn't tell these stories to anybody), shouldn't they assume that they probably aren't part of the 1 in 1000 and thus adjust their posterior distribution accordingly?

I am an atheist who does not believe in the supernatural. Great. Tons of evidence and well-thought-out reasoning on my side.

But... well... a few things have happened in my life that I find rather difficult to explain. I feel like a statistician looking at a data set with a nice normal distribution... and a few very low probability outliers. Did I just get a weird sample, or is something going on here? I figure that they are most likely to be just weird data points, but they are weird enough to bother me.

Let me give you one example. A few years ago I... (read more)

I'll answer with a koan.

Of all the people who live in the world, should the lucky thousand who witness events that are a million times too unlikely for any single individual to witness start believing in the supernatural, while the rest shouldn't?

8Scott Alexander
http://en.wikipedia.org/wiki/Littlewood%27s_Law_of_Miracles

I grew up knowing that Santa didn't exist. My parents had to then explain to me that I couldn't tell certain kids about this because their parents wanted them to still think Santa was real until they were a bit older. I still remember being quite shocked that these parents were lying to their kids, along with grandparents and other family members, and then expecting even me to join in. I was further shocked by the fact that most of these kids never worked it out themselves and had to eventually be told by their parents or a group of their friends (being... (read more)

3Court_Merrigan
That is a good point.
thomblake120

So kids can just look at other people's deceptions. Good point!