Comment author: Jayson_Virissimo 02 August 2015 05:33:18PM 7 points [-]

So we can conclude that although David was smart enough to escape his sandbox, he isn't yet at the level of understanding human-style humor.

Comment author: Liso 04 August 2015 05:18:28AM 0 points [-]

"Human"-style humor could be a sandbox too :)

Comment author: Liso 04 August 2015 04:37:42AM *  0 points [-]

I think we need a better definition of the problem we would like to study here. Beliefs and values are probably not so indistinguishable.

From this page:

Human values are, for example:

  • civility, respect, consideration;
  • honesty, fairness, loyalty, sharing, solidarity;
  • openness, listening, welcoming, acceptance, recognition, appreciation;
  • brotherhood, friendship, empathy, compassion, love.

  1. I think none of these could be called a belief.

  2. If these define the axes of a virtual space of moral values, then I am not sure an AI could occupy a much bigger space than humans do. (How much more selfish, unwelcoming, or dishonest could an AI be than a human?)

  3. On the contrary, because we are selfish (is that one of the moral values we are trying to analyze?), we want the AI to be more open, more listening, more honest, more friendly (etc.) than we want, or plan, to be ourselves. Or at least than we are now. (So do we really want the AI to be like us?)

  4. There is also the question of the optimal level of these values. For example, would we like to see an agent that is maximally honest, welcoming, and sharing toward anybody? (An AI in your house that welcomes thieves, tells them whatever they ask, and shares everything?)

And last but not least: if we have many AI agents, then some kind of selfishness and laziness could help, for example by preventing the creation of a singleton or a fanatical mob of these agents. In the evolution of humankind, selfishness and laziness may have helped human groups survive. And a lazy paperclip maximizer could spare humankind.

We need a good mathematical model of laziness, selfishness, openness, brotherhood, friendship, etc. We have hard philosophical tasks with a deadline. (The singularity is coming, and the "dead" in "deadline" could be very real.)

Comment author: Liso 04 August 2015 05:04:04AM 0 points [-]

I would like to add some values which I see as not so static, and which are probably not so much a question of morality:

Privacy and freedom vs. security and power.

Family, society, tradition.

Individual equality. (disparities of wealth, the right to work, ...)

Intellectual property. (the right to own?)

Comment author: Stuart_Armstrong 27 July 2015 09:59:55AM 0 points [-]

In the space of all possible values, human values have occupied a very small space, with the main change being who gets counted as moral agent (the consequences of small moral changes can be huge, but the changes themselves don't seem large in an absolute sense).

Or, if you prefer, I think it's possible that AI moral value changes will range so widely that human values can essentially be seen as static in comparison.

Comment author: Stuart_Armstrong 23 July 2015 08:27:45AM 0 points [-]

Corrected, thanks!

Comment author: Liso 24 July 2015 04:26:20AM 1 point [-]

Stuart, is it really your implicit axiom that human values are static and fixed?

(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)

Comment author: Dagon 22 July 2015 02:59:08PM *  1 point [-]

I think your last line was meant to be "beliefs and values" rather than "preferences and values".

And I don't know it's a question of distinguishing beliefs from values, more of a question of whether values are stable. I personally don't think most individuals have a CEV, and even if many do, there's no reason to suspect that any group has one. This is especially true for the undefined group "humanity", which usually includes some projections of not-yet-existent members.

Comment author: Liso 22 July 2015 07:57:42PM 0 points [-]

more of a question of whether values are stable.

or a question of whether human values are (objective and) independent of humans (as subjects who could develop),

or a question of whether we are brave enough to ask questions whose answers could change us,

or (for example) a question of whether it is necessarily good for us to ask questions whose answers will give us more freedom.

Comment author: Eitan_Zohar 13 July 2015 05:47:42AM *  0 points [-]

What sort of therapy would work for me? Ruminating is probably the main cause of it. Now that I've refuted my current fears, I find that I can't wrench the quantum world out of my head. Everything I feel is now tainted by DT.

Comment author: Liso 17 July 2015 05:40:30AM 0 points [-]

I am not an expert, and it has to be based on facts about your nervous system. So you could start with several tests (blood tests, etc.). You could change your diet, sleep more, etc.

About rationality and LessWrong: could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you utilize the power you have in your brain?

Comment author: Liso 13 July 2015 04:43:13AM 0 points [-]

You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is very probable that fear has an impact on your ideas, at least on their contents). So you could analyze the rational questions on LessWrong and independently address your irrational part (fear, etc.) with therapists. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerous ideas needs not only discussion but also a way to handle your emotional responses. If you want to sleep well, that could depend more on your emotional stability than on rational knowledge.

Comment author: roystgnr 10 March 2015 02:58:34PM *  4 points [-]

Wheelbarrows are useful even if all you have is short mostly-level paths, even if you don't have paths much longer than the width of a construction site. Then once those are in use, the incentive (and the ability) to lengthen and flatten other paths is greatly increased.

Wooded parts of the Americas did have some famously long paths, though I don't know how passable they would be for carts.

Comment author: Liso 12 March 2015 08:56:51AM *  0 points [-]

Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a massive scale.

Comment author: Liso 19 February 2015 05:46:39AM 1 point [-]

@Nozick: we are plugged into a machine (the Internet) and virtual realities (movies, games). Do we think that is wrong? Probably it is a question about the level of connection to reality?

@Häggström: there is a contradiction in the definition of what is better: F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for.

@CEV: time is only one dimension in the space of conditions that could affect our decisions. Human cultures have chosen cannibalism in some situations. An SAI could see several possible future decisions depending on the surroundings, and we have to think very carefully about which conditions are acceptable and which are not. Otherwise we could end up choosing what we choose in some special scene prepared for humanity by the SAI.

Comment author: TRIZ-Ingenieur 10 December 2014 01:47:16AM *  2 points [-]

The high dimensionality of stunting options makes it easier to find the "right amounts", because we can apply digital stunting measures without the need for fine-tuning based on context. For some contexts stunting applies, for others not.

Bostrom lists several stunting means, each of which can include a multitude of inner dimensions:

  • limit intellectual faculties (per capability/skill)
  • limit access to information (per capability/skill)
  • limit processing speed (per capability/skill)
  • limit memory (per capability/skill)
  • limit sensory input channels (stunting/boxing)
  • limit output channels (boxing)

Further stunting means (please suggest more):

  • stunting memorization process (blocking reinforcement cycles)
  • forgetting (formatting/partial backup restore)
  • memory manipulation
  • distraction

Stunting level adjustment is highly complex and has many input dimensions. The core of the stunting system (e.g. a hard-wired narrow AI) is a blacklist of taboo topics, skills, and knowledge segments. Touching a taboo triggers stunting measures.
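The blacklist core could be sketched, very roughly, as a lookup that maps taboo topics to stunting measures. This is a minimal illustrative sketch; the taboo strings, measure names, and matching logic are my assumptions, not part of the proposal above:

```python
# Hypothetical blacklist core: taboo topics mapped to stunting measures.
# All names here are illustrative assumptions.
TABOO_BLACKLIST = {
    "weapons": "block_channel",          # recognizable channel blocking
    "chip design": "flag_for_amnesia",   # delayed stunting via meta information
    "genetics": "throttle_processing",   # throttle processing close to zero
}

def check_input(text):
    """Return the stunting measure for the first taboo found, or None."""
    lowered = text.lower()
    for taboo, measure in TABOO_BLACKLIST.items():
        if taboo in lowered:
            return measure
    return None
```

A real system would of course need semantic context analysis rather than substring matching, which is exactly why the comment proposes a dedicated narrow AI for this role.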

The effectors of the stunting system can be divided into three categories:

1. Input related stunting:

Monitoring of all input channels, with context analysis for taboo issues. Upon a taboo interrupt, the following stunting options can be applied (examples for the auditory channel, visual channel, and data stream):

a) Immediate stunting actions:

  • Recognizable channel blocking: beeping out, blurring out, zeroing out.
  • Unrecognizable channel censoring: silencing, hiding, removing.
  • Channel manipulation: other sound, other image, other data.
  • Throttle processing speed close to zero for a limited time (like unconsciousness)
  • Generate high priority distracting information.

b) Delayed stunting actions by adding meta information:

  • Flag this information chunk as not important.
  • Flag this information chunk for special treatment. This could be: no allowance to allocate memory, no allowance to allocate processing power, no allowance to trigger related memories, no allowance for reinforcement memorization, flagged for self-deletion.
  • Flag all information of an entire event for amnesia.

2. Processing related stunting:

Monitoring of thoughts by checking taboo relevance. This concept is similar to tripwires for content monitoring. In the education phase, taboos are presented one by one in a simplified manner so that the AI can recognize them. After each learning step, the taboo representation is permanently linked to fear and negative reward expectation. If a thought triggers a taboo representation more strongly than a certain threshold, stunting is initiated and brings all processing steps in the connected areas to a temporary halt. A conscious AGI would not feel this stunting, because all other thought processes continue unimpaired. One thought out of thousands is lost.
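The threshold mechanism above could be sketched as a similarity test between a thought and the learned taboo representations, both treated as feature vectors. The vector representation, similarity measure, and threshold value are illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def should_stunt(thought, taboo_reprs, threshold=0.8):
    """True if any taboo representation is triggered above the threshold."""
    return any(cosine(thought, t) >= threshold for t in taboo_reprs)
```

Tuning the threshold is the hard part: too low and harmless thoughts are halted, too high and taboo thoughts slip through unnoticed.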

During nighttime reinforcement and tidy-up phases, all information chunks flagged for deletion will be eradicated or replaced by backup versions or false memories.

3. Output related stunting:

These measures are only the last resort if all others have failed. Output stunting measures are triggered by thought monitoring (examples for the auditory channel, actuator channel, and data stream):

  • Channel throttling: force pauses, slow down movements, throttle bandwidth.
  • Channel blocking: muting, immobility, blocking.
  • Channel manipulation: change words, change movements, change data.

Only in extreme contexts will full stunting apply. Whenever the AI is confronted with taboo information, it will get only a faint notion of its existence. It fears acquiring taboo skills.

To prevent an uncontrollable AGI explosion, it might be best to prohibit self-improvement. The above stunting scheme allows this with a blacklist like the following:

List of Taboos:

  • Killing and hurting humans.
  • Stealing and lying.
  • Perverse literature.
  • Fire, weapons, explosives, radioactivity, fusion.
  • Computers, IT, chip design, structured programming languages.
  • Genetics and nano engineering.

Bostrom is right that such a stunted AI is of limited use. But it can be a safe start along the AI path, with a later augmentation option. This stunted AGI is so ignorant of advanced technology that it poses no risk and can be tested in many environments. With a humble education and humanist values and motivations, it would excel as a service robot. Field testing in all conceivable situations will allow us to verify and improve the motivation and stunting systems. In case of a flaw, a lot of learning would be needed before dangerous skill levels are reached.

Tripwires must terminate the AI in case the stunting system is bypassed.

Although the stunting system is quite complex, it allows easy adjustment: the shorter the taboo list, the more capabilities the AGI can acquire.

Comment author: Liso 10 December 2014 04:16:50AM 1 point [-]

This could be a bad mix:

Our action 1a) "Channel manipulation: other sound, other image, other data" & the taboo for the AI: lying.

The taboo on "structured programming languages" could be impossible, because understanding and analyzing structure is probably an integral part of general intelligence.

She could not reprogram herself in a lower-level programming language, but she could emulate and improve herself in her "memory". (She might not have access to her code segment, but she could create a stronger intelligence in her data segment.)
