Comments
Dwelle

If it helps anyone, I'd like to chip in with the memory/note-taking technique I am using at the moment: mindmaps. I find them extremely powerful for very fast information retrieval, since they are inherently hierarchical. I make digital mindmaps using Freeplane. I use it for storing key ideas from books, articles, workflows, step-by-step how-tos, programming snippets, even my own thoughts. You name it.
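Just to illustrate why the hierarchy makes retrieval fast, here is a minimal sketch in plain Python (nothing Freeplane-specific; the node names and snippet are made up): finding a note means walking down one known branch instead of scanning a flat pile of notes.

```python
# Minimal sketch of a mindmap as a tree of nodes (not Freeplane's actual format).
class Node:
    def __init__(self, title, text=""):
        self.title = title
        self.text = text
        self.children = []

    def add(self, title, text=""):
        # Attach a child node and return it, so branches can be chained.
        child = Node(title, text)
        self.children.append(child)
        return child

    def find(self, path):
        # Follow a list of titles down the tree, e.g. ["Programming", "Git", "rebase"].
        if not path:
            return self
        for child in self.children:
            if child.title == path[0]:
                return child.find(path[1:])
        return None

# Usage: file a snippet under Programming -> Git -> rebase, then retrieve it by path.
root = Node("Notes")
git = root.add("Programming").add("Git")
git.add("rebase", "git rebase -i HEAD~3  # squash the last three commits")
print(root.find(["Programming", "Git", "rebase"]).text)
```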

The only downside I can think of is that my memory no longer has any incentive to retain the information I come across, so you could say this technique only makes my memory worse. Does anyone know of any studies on the long-term effects on memory of storing information externally rather than forcing your brain to do it itself?

Dwelle

I don't know, perhaps we're not talking about the same thing. It won't be an agent with a single, non-reflective goal, but an agent a billion times more complex than a human; and all I am saying is that I don't think it will matter much whether we imprint a goal like "don't kill humans" in it or not. Ultimately, the decision will be its own.

Dwelle

Sure, but I'd be more cautious about assigning probabilities to how likely it is that a very intelligent AI will change its human-programmed values.

Dwelle

That's why I said that they can change it any time they like. If they don't desire the change, they won't change it. I see nothing incoherent there.

Dwelle

Why? Do you think paperclip maximizers are impossible?

Yes, right now I think it's impossible to create a self-improving, self-aware AI with fixed values. I never said that paperclip maximizing can't be their ultimate life goal, but they could change it any time they like.

You don't mean that as a dichotomy, do you?

No.

Dwelle

Well, first, I was all for creating an AI to become the next stage. I was a very singularity-happy type of guy. I saw it as a way out of this world's status quo: corruption, the state of politics, and so on. But the singularity would ultimately mean that I and everybody else would cease to exist, at least in any true sense. You know, I have these romantic dreams, similar to Yudkowsky's idea of dancing in an orbital nightclub around Saturn, and such. I don't want to be fused into one, even if possibly amazing, matrix of intelligence, which I think is how things will eventually play out. Even though I can't imagine what it will be like or how it will pan out, as of now I just don't cherish the idea much.

But yeah, you could say that I am torn between moving on and advancing, and more or less stagnating in our human form.

But in answer to your question: if we were to create an AI to replace us, I'd hate for it to become a paperclip maximizer. I don't think that's likely.

Dwelle

Yeah, I get it... I believe, though, that it's impossible to create an AI (self-aware, learning) with set values that can't change. More importantly, I am not even sure it's desirable (though that depends on what our goal is: whether to create an AI only to perform certain simple tasks, or to create a new race, something that succeeds us, which WOULD ultimately mean our demise anyway).

Dwelle

Alright: that would mean creating a completely deterministic AI system; otherwise, I believe, it would be impossible to predict how the AI is going to react. Anyway, I admit that I have not read much on the matter and this is just my own reasoning... so thanks for your insight.

Dwelle

A variant that does make sense and is a real concern is that as the AGI learns, it could change its definitions in unpredictable ways. Peter De Blanc talks about this here. This could lead to part of the utility function becoming undefined or to the machine valuing things that we never intended it to value - basically it makes the utility function unstable under the conditions you describe. The intuition is roughly that if you define a human in one way, according to what we currently know about physics, some new discovery made available to the AI might result in it redefining humans in new terms and no longer having them as a part of its utility function. Whatever the utility function describes is now separate from how humans appear to it.

That's basically what I meant.
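A toy sketch of the instability being described (my own illustration, not from De Blanc's post): a utility function written against one world-model can silently stop referring to anything once the model is replaced.

```python
# Toy sketch: a utility function keyed to one ontology stops seeing value
# after the agent re-describes the same world in different terms.

def utility(world_state):
    # Reward is simply the number of things the world-model labels "human".
    return sum(1 for obj in world_state if obj["kind"] == "human")

old_model = [{"kind": "human", "name": "Alice"}, {"kind": "rock"}]
# After a "new discovery", the agent re-represents people in lower-level terms.
new_model = [{"kind": "particle_cloud", "pattern": "alice-shaped"}, {"kind": "rock"}]

print(utility(old_model))  # 1 -- a human is visible to the utility function
print(utility(new_model))  # 0 -- same world, but "human" no longer picks anything out
```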

Dwelle
  1. Ok, I guess we were talking about different things, then.

  2. I don't see any point in giving particular examples. More importantly, even if I didn't support my claim, it wouldn't mean your argument was correct. The burden of proof lies on your shoulders, not mine. Anyway, here's one example, quite cliché: I would choose to sterilize myself if I realized that having intercourse with little girls was wrong (or that having intercourse at all was wrong, whatever the reason). Even if it were my utmost desire, and with my whole being I believed that it was my purpose to have intercourse, I would choose to modify that desire if I realized it was wrong, or illogical, or stupid, or anything. It doesn't really matter.

THEREFORE:

(A) I do not desire to give up intercourse. (B) But based on new information, I find out that having intercourse produces great evil. => I choose to alter my desire (A).

You might say that by introducing a new desire (not to produce evil) I no longer desire (A), and I say: fine. Now, how do you want to ensure that the AI won't create its own new desires based on new facts?
