In recent years I've become more appreciative of classical statistics. I still consider the Bayesian solution to be the correct one; however, a full Bayesian treatment often turns into a total mess. Sometimes, by using a few tricks from classical statistics, you can achieve nearly the same performance with a fraction of the complexity.
Vladimir,
Firstly, "maximizing chances" is an expression of your creation: it's not something I said, nor is it quite the same in meaning. Secondly, can you stop talking about things like "wasting hope", concentrating on metaphorical walls or nature's feelings?
To quote my position again: "maximise the safety of the first powerful AGI, because that's likely to be the one that matters."
Now, to help me understand why you object to the above, can you give me a concrete example where you would not want to work to maximise the safety of the first powerful AGI?
Vladimir,
Nature doesn't care if you "maximized your chances" or leapt into the abyss blindly; it kills you just the same.
When did I ever say that nature cared about what I thought or did? Or the thoughts or actions of anybody else for that matter? You're regurgitating slogans.
Try this one, "Nature doesn't care if you're totally committed to FAI theory, if somebody else launches the first AGI, it kills you just the same."
Eli,
FAI problems are AGI problems; they are simply a particular kind and style of AGI problem in which large sections of the solution space have been crossed out as unstable.
Ok, but this doesn't change my point: you're just one small group out of many around the world doing AI research, and you're trying to solve an even harder version of the problem while using fewer of the available methods. These factors alone make it unlikely that you'll be the ones to get there first. If that's correct, then your work is unlikely to affect the future of humanity.
Vladimir,
Outcompeting other risks only becomes relevant when you can provide a better outcome.
Yes, but that might not be all that hard. Most AI researchers I talk to about AGI safety think the idea is nuts -- even the ones who believe that super intelligent machines will exist within a few decades. If somebody is going to set off a super intelligent machine, I'd rather it was one that will only probably kill us than one that will almost certainly kill us because issues of safety haven't even been considered.
If I had to sum up my position it would be: maximise the safety of the first powerful AGI, because that's likely to be the one that matters. Provably safe theoretical AGI designs aren't going to matter much to us if we're already dead.
Eli, sometimes I find it hard to understand what your position actually is. As far as I can tell, it is:
1) Work out an extremely robust solution to the Friendly AI problem
Only once this has been done do we move on to:
2) Build a powerful AGI
Practically, I think this strategy is risky. Firstly, if you try to solve Friendliness without having a concrete AGI design, you will probably miss some important things. Secondly, I think that solving Friendliness will take longer than building the first powerful AGI. Thus, if you do 1 before getting into 2, it's unlikely that you'll be first.
Roko: Well, my thesis would be a start :-) Indeed, pick up any textbook or research paper on reinforcement learning to see examples of utility being defined over histories.
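To make that concrete, here's a minimal sketch (in Python, with a made-up discount factor and reward sequence) of the standard discounted return, which assigns a utility to a whole history of rewards rather than to any single state:

  def discounted_return(rewards, gamma=0.9):
      # Utility of an entire history of rewards: r_0 + gamma*r_1 + gamma^2*r_2 + ...
      return sum((gamma ** t) * r for t, r in enumerate(rewards))

  discounted_return([1.0, 0.0, 1.0])  # 1.0 + 0.0 + 0.81 = 1.81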
Roko, why not:
U( alternating A and B states ) = 1
U( everything else ) = 0
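Spelled out as code (a rough sketch only; the state labels "A" and "B" and the representation of a history as a list of states are just illustrative assumptions):

  def utility(history):
      # 1 if the states strictly alternate between "A" and "B", 0 otherwise.
      if len(history) < 2:
          return 0
      for prev, curr in zip(history, history[1:]):
          if {prev, curr} != {"A", "B"}:
              return 0
      return 1

  utility(["A", "B", "A", "B"])  # 1
  utility(["A", "A", "B"])       # 0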
Roko:
So allow me to object: not all configurations of matter worthy of the name "mind" are optimization processes. For example, my mind doesn't implement an optimization process as you have described it here.
I would actually say the opposite: not all optimisation processes are worthy of the name "mind". Furthermore, your mind (I hope!) does indeed try to direct the future into certain limited subsets which you prefer. Unfortunately, you haven't actually said why you object to these things.
My problem with this post is simply that, well... I don't see what the big deal is. Maybe this is because I've always thought about AI problems in terms of equations and algorithms.
And with the Singularity at stake, I thought I just had to proceed at all speed using the best concepts I could wield at the time, not pause and shut down everything while I looked for a perfect definition that so many others had screwed up...
In 1997, did you think there was a reasonable chance of the Singularity occurring within 10 years? From my vague recollection of a talk you gave in New York circa 2000, I got the impression that you thought this really could happen. In which case, I can understand you not wanting to spend the next 10 years trying to accurately define the meaning of "right" etc. and likely failing.
My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.
I once asked one of the robotics guys at IDSIA about subsumption architecture (he ran the German team that won the robo-soccer world cup a few years back) and his reply was that people like it because it works really well and is the simplest way to program many things. At the time, all of the top teams used it, as far as he knew.
(P.S. Don't expect follow-up replies on this topic from me, as I'm currently in the middle of nowhere using semi-functional dial-up...)