All of Liso's Comments + Replies

Liso10

One child could have two parents (and both could answer), so 598 is a questionable number.

Liso00

"human"-style humor could be sandbox too :)

Liso00

I would like to add some values which I see as not so static and which are probably not so much a question of morality:

Privacy and freedom (vs) security and power.

Family, society, tradition.

Individual equality (disparities of wealth, the right to work, ...).

Intellectual property (the right to own?).

Liso00

I think we need a better definition of the problem we would like to study here. Beliefs and values are probably not so indistinguishable.

From this page ->

Human values are, for example:

  • civility, respect, consideration;
  • honesty, fairness, loyalty, sharing, solidarity;
  • openness, listening, welcoming, acceptance, recognition, appreciation;
  • brotherhood, friendship, empathy, compassion, love.

  1. I think none of them could be called a belief.

  2. If these define the vectors of a virtual space of moral values, then I am not sure an AI could occupy a much bigger space than humans

... (read more)
Liso10

Stuart, is it really your implicit axiom that human values are static and fixed?

(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)

0Stuart_Armstrong
In the space of all possible values, human values have occupied a very small space, with the main change being who gets counted as a moral agent (the consequences of small moral changes can be huge, but the changes themselves don't seem large in an absolute sense). Or, if you prefer, I think it's possible that AI moral value changes will range so widely that human values can essentially be seen as static in comparison.
Liso00

more of a question of whether values are stable.

or a question of whether human values are (objective and) independent of humans (as subjects who could develop),

or a question of whether we are brave enough to ask questions whose answers could change us,

or (for example) a question of whether it is necessarily good for us to ask questions whose answers would give us more freedom.

Liso00

I am not an expert. And it has to be based on facts about your nervous system. So you could start with several experiments (blood tests, etc.). You could change your diet, sleep more, etc.

About rationality and LessWrong -> could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you utilize the power you have in your brain?

0Fivehundred
Heh, no. I can't direct it.
Liso00

You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is definitely very probable that fear has an impact on your ideas, at least on their content). So you could analyze rational questions on LessWrong and independently address your irrational part (fear, etc.) with therapists. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerou... (read more)

0Fivehundred
What sort of therapy would work for me? Ruminating is probably the main cause of it. Now that I've refuted my current fears, I find that I can't wrench the quantum world out of my head. Everything I feel is now tainted by DT.
Liso00

Jared Diamond wrote that North America did not have good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a massive scale.

0[anonymous]
Wheelbarrows and hand carts are still massively useful. I used to help out with construction; it is hard enough with wheelbarrows. We did not use them on roads, just around the site.
Liso10

@Nozick: we are plugged into a machine (the Internet) and virtual realities (movies, games). Do we think that is wrong? Probably it is a question of the level of connection to reality?

@Häggström: there is a contradiction in the definition of what is better. F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for.

@CEV: time is only one dimension in the space of conditions which could affect our decisions. Human cultures choose cannibalism in some situations. An SAI could see several possible future decisions depending on sur... (read more)

Liso10

This could be a bad mix ->

Our action: 1a) Channel manipulation: other sound, other image, other data & Taboo for AI: lying.

This taboo, "structured programming languages", could be impossible, because understanding and analysing structure is probably an integral part of general intelligence.

She could not reprogram herself in a lower-level programming language, but she could emulate and improve herself in her "memory". (She could not have access to her code segment but could still create a stronger intelligence in her data segment.)
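A minimal sketch of that "code segment vs. data segment" point (my own toy illustration, not any real AGI design; the tiny instruction set and the run() interpreter are invented for the example): the host code below stays fixed, yet the program it interprets lives in ordinary mutable data, so the data-level program can be rewritten and "improved" without ever touching the host code.

```python
# Toy illustration: the host interpreter (the "code segment") is fixed,
# but the program it runs (the "data segment") is ordinary mutable data,
# so it can be inspected and rewritten at runtime.

def run(program, x):
    """Interpret a tiny data-level program: a list of (op, value) pairs."""
    for op, value in program:
        if op == "add":
            x += value
        elif op == "mul":
            x *= value
    return x

# Initial data-level "self": doubles its input.
program = [("mul", 2)]

# The data-level program is improved without touching the host code:
# here it is rewritten to compute 3*x + 1 instead of 2*x.
program[:] = [("mul", 3), ("add", 1)]

print(run(program, 10))  # -> 31
```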

0TRIZ-Ingenieur
The taboo of lying is vital for thought monitoring. This taboo covers only the AGI's internal world representation. Based on its knowledge it never lies. By monitoring input and output channels the stunting system can detect lying and apply appropriate stunting measures. If the stunting system manipulates input channels, memories or output channels, the result to an outside observer will look like lying. The AGI is not capable of telling the truth when the stunting system has removed or manipulated information for safety reasons. The outside observer can check logs of the stunting system if it is vital to him to determine the truth.

Safe path to superintelligence with slow take-off by prohibiting self-improvement: The taboo against learning structured programming languages is totally independent of the capability to understand and analyse structured programs. In principle the AGI could learn this. The stunting system censors all input channels so that the AGI never observes a single line of program code. The AGI could develop a new type of structured programming language from scratch. As soon as the stunting system observes structured code fragments in memory, the section responsible for this will be stunted. The generated code can be deleted or manipulated. Either way the AGI experiences that these frightful structured programming languages are complicated and come with set-back after set-back. If this AGI breaks out with free internet access it has no idea of our IT infrastructure and how to infiltrate it. If the AGI is moderately superintelligent it will need several weeks to learn hacking skills.

Self-improvement with safeguards: If AGI architecture design and optimization uses new forms of representation that are totally unlike structured programming languages, we could teach self-improvement without removing the programming taboo. Code and architecture generated in this self-improvement process is incompatible with existing IT systems. If several AGIs evolve from simil
Liso10

Is "transcendence" third possibility? I mean if we realize that human values are not best and we retire and resign to control.

(I am not sure if it is not motivation selection path - difference is subtle)

BTW. if you are thinking about partnership - are you thinking how to control your partner?

Liso00

Sorry for a question outside this particular topic.

When we started to discuss, I liked and proposed the idea of making a wiki page with results from our discussion. Do you think we have any ideas worth collecting on a collaborative wiki page?

I think we have at least one - paulfchristiano's "cheated evolution": http://lesswrong.com/r/discussion/lw/l10/superintelligence_reading_group_3_ai_and_uploads/bea7

Could you add more?

Liso10

It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.

There are two sorts of unsafe AI: one which cares and one which doesn't care.

The ignorant one is fastest - it only calculates the answer and doesn't care about anything else.

Friend and enemy both have to analyse additional things...

0wedrifid
Just don't accidentally give it a problem that is more complex than you expect. Only caring about solving such a problem means tiling the universe with computronium.
Liso00

The other question is: what happens once you know the AI has desire for subversion?

There are plenty of people cooperating with and supporting dictators and mafias for selfish reasons. We could expect the same in this area.

"I will support a dangerous AI because I will be richer, more powerful or safer then."

Liso00

I don't understand the downvote. Could it be explained, please?

Is it because his position equals what he declares and not what he really thinks?

Or because Bostrom's mimicry (= preferring, for some reasons, a negative, catastrophic and alarmist position) is not assumed?

Liso-20

  • Expose it to limited data about the world, or no data about the real world
  • Provide it little or no information about its operators or the society they live in
  • Provide it little or no information about its own hardware or software architecture

When I read "Oracle AI" I realized one possibility which is probably still overlooked.

"Dreams"

Erasing data on physical media is not perfect. A zero bit which was previously nonzero could be read slightly more slowly or more quickly. That could allow an SAI to see shadows of past data, which could lead to a phenomenon similar to dreams.

Liso10

I am afraid that we have not precisely defined the term 'goal'. And I think we need to.

I am trying to analyse this term.

Do you think that today's computers have goals? I don't think so (but we probably have different understandings of this term). Are they useless? Do cars have goals? Are they without action and reaction?

I could probably describe my idea more precisely in another way: in Bostrom's book there are goals and subgoals. Goals are ultimate, petrified and strengthened; subgoals are particular, flexible and temporary.

Could we think of an AI without goals but with ... (read more)

Liso10

Could an AI be without any goals?

Would that AI be dangerous in the default doom way?

Could we create an AI which won't be a utility maximizer?

Would that AI need to maximize resources for itself?

1SteveG
People have complex sets of goals, tendencies, and instincts. There has never been any entity brought into existence so far which is a utility maximizer. That renders us dangerous if we become too powerful, but we are not useless if our powers are checked. We really might not wish an AI to be an explicit utility maximizer. Oddly, starting with that design actually might not generate the most utility.
1nkh
Seems to me an AI without goals wouldn't do anything, so I don't see it as being particularly dangerous. It would take no actions and have no reactions, which would render it perfectly safe. However, it would also render the AI perfectly useless--and it might even be nonsensical to consider such an entity "intelligent". Even if it possessed some kind of untapped intelligence, without goals that would manifest as behavior, we'd never have any way to even know it was intelligent.

The question about utility maximization is harder to answer. But I think all agents that accomplish goals can be described as utility maximizers regardless of their internal workings; if so, that (together with what I said in the last paragraph) implies that an AI that doesn't maximize utility would be useless and (for all intents and purposes) unintelligent. It would simply do nothing.
Liso00

Positive emotions are useful too. :)

Liso00

I think that if SAIs have a social part, we need to think altruistically about them.

It could be wrong (and dangerous too) to think that they will be just slaves.

We need to start thinking positively about our children. :)

Liso00

Just a little idea:

In one advertisement I saw an interesting pyramid with these levels (from top to bottom): vision -> mission -> goals -> strategy -> tactics -> daily planning.

I think if we want to analyse cooperation between SAI and humanity, then we need interdisciplinary work (philosophy, psychology, mathematics, computer science, ...) on the (vision -> mission -> goals) part. (If humanity defines the vision and mission and the SAI derives the goals, that could be good.)

I am afraid that humanity has not properly defined/analysed either its vision or its mi... (read more)

Liso00

I am suggesting that the metastasis method of growth could be good for the first multicell organisms, but unstable, not very successful in evolution, and probably rejected by every superintelligence as malign.

Liso00

One mode could have the goal of being something like the graphite moderator in a nuclear reactor: to prevent an unmanaged explosion.

At this moment I just wanted to improve our view of the probability of there being only one SI in the starting period.

Liso10

Think prisoner's dilemma!

What would aliens do?

Is a selfish (self-centered) reaction really the best possibility?

What would a superintelligence constructed by aliens do?

(No dispute that human history is brutal and selfish.)

0Sebastian_Hagen
You're suggesting a counterfactual trade with them? Perhaps that could be made to work; I don't understand those well. It doesn't matter to my main point: even if you do make something like that work, it only changes what you'd do once you run into aliens with which the trade works (you'd be more likely to help them out and grant them part of your infrastructure or the resources it produces). Leaving all those stars on to burn through resources without doing anything useful is just wasteful; you'd turn them off, regardless of how exactly you deal with aliens. In addition, the aliens may still have birthing problems that they could really use help with; you wouldn't leave them to face those alone if you made it through that phase first.
Liso00

Let us try to free our mind from associating AGIs with machines.

Very good!

But be honest! Aren't we (sometimes?) more machines serving genes/instincts than spiritual beings with free will?

Liso00

When I was thinking about past discussions, I realized something like:

(selfish) gene -> meme -> goal.

When Bostrom thinks about the singleton's probability, I am afraid he overlooks the possibility of running more 'personalities' on one substrate. (We could suppose more teams have the possibility of running their projects on one piece of hardware, just as more teams could use the Hubble telescope to observe different objects.)

And not only a possibility but probably also a necessity.

If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.

We need to analyze how slightly different goals could control each other.

0diegocaleiro
I'll coin the term Monolithic Multipolar for what I think you mean here: one stable structure that has different modes activated at different times, and these modes don't share goals - like a human, especially like a schizophrenic one. The problem with Monolithic Multipolarity is that it is fragile. In humans, what causes us to behave differently and want different things at different times is not accessible for revision; otherwise, each party would have an incentive to steal the other's time. An AI would not need to deal with such triviality, since, by definition of explosive recursive self-improvement, it can rewrite itself. We need other people, but Bostrom doesn't let simple things be left out easily.
Liso00

moral, humour and spiritual analyzer/emulator. I would like to know more about these phenomena.

Liso-10

When we discussed evil AI I was thinking (and still count it as plausible) about the possibility that self-destruction might not be an evil act. That the Fermi paradox could be explained as a natural law = the best moral answer for a superintelligence at some level.

Now I am thankful, because your comment enlarges the possibilities for thinking about Fermi.

We need not think only of self-destruction - we could also think of modesty and self-sustainability.

Sauron's ring could be superpowerful, but clever Gandalf could (and did!) resist the offer to use it. (And use another ring to destroy str... (read more)

Liso-20

The market is more or less stabilized. There are powers and superpowers in some balance. (Gaining money could sometimes be an illusion, like betting (and getting) more and more in a casino.)

If you are thinking about money-making, you have to count the sum of all money in society - whether investments mean a bigger sum of value, or just exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right? :)

In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference.

How?

  1. P

... (read more)
Liso00

Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.

Ants are probably a good example of how organisational intelligence (?) could be an advantage.

According to the wiki, "Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass." See also the google answer, the wiki table or stackexchange.

Although we have to think carefully - apex predators do not usually form a large biomass. So it could be more complicated to define the success of a life form.

P... (read more)

Liso00

It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.

We could test it in a thought experiment.

A chess game: human grandmaster against AI.

  1. It is not rapid (no checkmate at the beginning).
    We could also suppose one move per year to slow it down. That gives the AI a further advantage, because of its ability to concentrate for such a long time.

  2. Capabilities:
    a) intellectual capabilities we could suppose stay at the same level during the game (if it is played in one day; otherwise we have to consider Moore's law)
    b) the human loses (step by step) positional an

... (read more)
Liso00

One possibility to prevent a smaller group from gaining a strategic advantage is something like Operation Opera.

And that was only about nukes (see Elon Musk's statement)...

Liso30

Lemma 1: A superintelligence could be slow. (Imagine, for example, an IQ test between Earth and Mars where the delay between question and answer is about half an hour. Or imagine a big, clever tortoise which could understand only one sentence per hour but could then solve the Riemann hypothesis.)

Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)

The next theorem is obvious :)

Liso20

This is similar to the question about a 10-times-quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".

One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Twice as much money for the education process is more likely to give 2*N graduates after X years than N graduates after X/2 years.

Some parts of scientific acceleration have to wait years for new scientists. And 2 times more scientists doesn't mean 2 times more discoveries. Etc.

But also 1.5x more discoveries could bring 10x bigger profit!

We cannot assume only linear dependencies in such complex problems.
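A toy sketch of these non-linear dependencies (my own illustration; the 4-year education lag and the square-root relation between scientists and discoveries are assumptions chosen only for the example): doubling the budget roughly doubles the eventual number of graduates rather than halving the waiting time, and doubling the scientists gives only about 1.4 times the discoveries.

```python
# Toy model: doubling inputs does not halve delays, and returns are sublinear.
# The sqrt relation and the 4-year education lag are illustrative assumptions.

import math

EDUCATION_YEARS = 4  # a graduate still takes ~4 years, no matter the funding

def graduates(students_per_year, budget_multiplier, years):
    """More budget admits more students, but the pipeline delay stays fixed."""
    admitted = students_per_year * budget_multiplier
    productive_years = max(0, years - EDUCATION_YEARS)
    return admitted * productive_years

def discoveries(scientists):
    """Assume sublinear returns: discoveries ~ sqrt(scientists)."""
    return math.sqrt(scientists)

print(graduates(100, 1, 8))   # baseline: 400 graduates after 8 years
print(graduates(100, 2, 8))   # doubled budget: 800 graduates, not 400 after 4 years
print(discoveries(1000), discoveries(2000))  # 2x scientists -> ~1.41x discoveries
```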

Liso10

A difficult question. Do you mean also ten times faster to burn out? 10x more time to rest? Or, due to simulation, no rest, just a reboot?

Or a permanent reboot to a drug-boosted level of brain emulation on a ten-times-quicker substrate? (I am afraid of a drugged society here.)

And I am also afraid that a ten-times-quicker farmer could not have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster.

What about ten times faster philosophical growth?

Liso20

Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.

We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the actions we could take are very slow.

Humanity could also be overtaken by a slow (and alien) superintelligence.

It does not matter if you would quickly see that things are going the wrong way. You could still slowly lose, step by step, your rights and your power to act..... (read more)

Liso10

This probably needs more explanation. You could say that my reaction is not in the appropriate place; that is probably true. BCI we could define as a physical interconnection between brain and computer.

But I think at this moment we could (and already do) also analyse trained "horses" with trained "riders". And also trained "pairs" (or groups?).

A better interface between computer and human could also be achieved along a non-invasive path = a better visual+sound+touch interface (horse-human analogy).

So yes = I expect they could be substantially useful also in the case that a direct physical interface would be too difficult in the next decade(s).

Liso00

This is also one of the points where I don't agree with Bostrom's (fantastic!) book.

We could use an analogy from history: the human-animal pair = soldier + horse didn't need a physical interface (like in the Avatar movie) and still gave an awesome military advantage.

We could get something similar from better weak AI tools (probably with a better GUI - but it is not only about the GUI).

"Tools" don't need to have big general intelligence. They could be at horse level:

  • their incredible power to analyse big structures (a big memory buffer)
  • the speed of the "rider", using quick "computation" with the "tether" in your hands
Liso-10

What we have in history is hackable minds which were misused to make the Holocaust. This could probably be one way to improve the writings about AI danger.

But to answer question 1) - it is too wide a topic! (Social hackability is only one possible AI-superpower takeoff path.)

For example, still missing (and probably going to stay missing) from the book:

a) How to prepare psychological training for human-AI communication (or for reading this book :P ).

b) AI's impact on religion

etc.

Liso00

But why would the evil AI collapse after the apocalypse?

0TRIZ-Ingenieur
It would collapse within the apocalypse. It might trigger aggressive actions knowing it will be eradicated itself. It wants to see the other lose. Dying is not connected with fear. If it can prevent the galaxy from being colonised by a good AI, it prefers a perfect apocalypse. Debating the aftermath of the apocalypse gets too speculative for me. I wanted to point out that current projects do not have the intention to create a balanced, good AI character. Projects are looking for fast success, and an evil, paranoid AI might result in the end.
Liso30

Katja, please interconnect the discussion parts with links (or something like a TOC).

1KatjaGrace
I have been making a list of posts so far on the [initial posting](http://lesswrong.com/lw/kw4/superintelligence_reading_group/), and linking it from the top of the post. Should I make this more salient somehow, or do it differently?
Liso10

Have you played this type of game?

[pollid:777]

I think that if you played on a big map (Freeciv supports really huge ones), then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round.

You could meditate on more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, etc., and for sure cooperation with some type of AI)...

0cameroncowan
I think it would be easy to create a Civilization AI that would choose to grow on a certain path with a certain win-style in mind. So if the AI picks military win then it will focus on building troops and acquiring territory and maintaining states of war with other players. What might be hard is other win states like diplomatic or cultural because those require much more intuitive and nuanced decision making without a totally clear course of action.
Liso30

This is a good point, which I would like to have analysed more precisely. (And I miss a deeper analysis in The Book :) )

Could we count the will (motivation) of today's superpowers = megacorporations as human or not? (And at what level could they control the economy?)

In other words: is Searle's Chinese room intelligent? (In the definition which The Book uses for (super)intelligence.)

And if it is, is it a human or an alien mind?

And could it be superintelligent?

What arguments could we use to prove that none of today's corporations (or states or their secret services) is superinte... (read more)

Liso30

First of all, thanks for your work on this discussion! :)

My proposals:

  • a wiki page for collaborative work

There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do that.

  • a better time for Europe and the world?

But this is probably not a problem. If it is a problem, then it is probably not solvable. We will see :)

1KatjaGrace
Thanks for your suggestions. Regarding time, it is alas too hard to fit into everyone's non-work hours. Since the discussion continues for several days, I hope it isn't too bad to get there a bit late. If people would like to coordinate to be here at the same time though, I suggest Europeans pick a more convenient 'European start time', and coordinate to meet each other then. Regarding a wiki page for collaborative work, I'm afraid MIRI won't be organizing anything like this in the near future. If anyone here is enthusiastic for such a thing, you are most welcome to begin it (though remember that such things are work to organize and maintain!) The LessWrong wiki might also be a good place for some such research. If you want a low maintenance collaborative work space to do some research together, you could also link to a google doc or something for investigating a particular question.
1TRIZ-Ingenieur
I strongly support your idea to establish a collaborative work platform. Nick Bostrom's book brings so many not-yet-debated aspects into public debate that we should support him with input and feedback for the next edition of the book. He threw his hat into the ring, and our debate will push sales of his book. I suspect he would prefer to get comments and suggestions for better explanations in a structured manner.