Juno_Watt
Juno_Watt has not written any posts yet.

Not so! An AGI need not think like a human, need not know much of anything about humans, and need not, for that matter, be as intelligent as a human.
Is that a fact? No, it's a matter of definition. It's scarcely credible that you are unaware that a lot of people think the TT is critical to AGI.
The problem I'm pointing to here is that a lot of people treat 'what I mean' as a magical category.
I can't see any evidence of anyone involved in these discussions doing that. It looks like a straw man to me.
Ok. NL is hard. Everyone knows that. But it's got to be solved anyway.
Nope!
An AI you...
Let G1="Figure out the right goal to have"
If an agent has goal G1 and sufficient introspective access to know its own goal, how would avoiding arbitrariness in its goals help it achieve goal G1 better than keeping goal G1 as its goal?
Avoiding arbitrariness is useful to epistemic rationality and therefore to instrumental rationality. If an AI has rationality as a goal it will avoid arbitrariness, whether or not that assists with G1.
And you are confusing self-improving AIs with conventional programs.
Those are only 'mistakes' if you value human intentions. A grammatical error is only an error because we value the specific rules of grammar we do; it's not the same sort of thing as a false belief (though it may stem from, or result in, false beliefs).
You will see a grammatical error as a mistake if you value grammar in general, or if you value being right in general.
A self-improving AI needs a goal. A goal of self-improvement alone would work. A goal of getting things right in general would work too, and be much safer, as it would include getting our intentions right as a sub-goal.
GAI is a program. It always does what it's programmed to do. That's the problem—a program that was written incorrectly will generally never do what it was intended to do.
So self-correcting software is impossible. Is self-improving software possible?
You've still not given any reason for the future software to care about "what you mean" over all those other calculations either.
Software that cares what you mean will be selected for by market forces.
Present-day software may not have got far with regard to the evaluative side of doing what you want, but XiXiDu's point seems to be that it is getting better at the semantic side. Who was it who said the value problem is part of the semantic problem?
A. Solve the Problem of Meaning-in-General in advance, and program it to follow our instructions' real meaning. Then just instruct it 'Satisfy my preferences', and wait for it to become smart enough to figure out my preferences.
That problem has got to be solved somehow at some stage, because something that couldn't pass a Turing Test is no AGI.
But there are a host of problems with treating the mere revelation that A is an option as a solution to the Friendliness problem.
- You have to actually code the seed AI to understand what we mean.
Why is that a problem? Is anyone suggesting AGI can be had for free?
...
- The Problem of Meaning-in-General may really...
I can think of only one example of someone who actually did this, and that was someone generally classed as a mystic.