"human"-style humor could be sandbox too :)
I would like to add some values which I see as not so static and which are probably not so much a question of morality:
Privacy and freedom vs. security and power.
Family, society, tradition.
Individual equality (disparities of wealth, the right to work, ...).
Intellectual property (the right to own?).
I think we need a better definition of the problem we want to study here. Probably beliefs and values are not so indistinguishable.
From this page ->
Human values are, for example:
I think none of them could be called a belief.
If these define the vectors of a virtual space of moral values, then I am not sure an AI could occupy a much bigger space than humans.
Stuart, is it really your implicit axiom that human values are static and fixed?
(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)
more of a question of whether values are stable.
or a question of whether human values are (objective and) independent of humans (as subjects who could develop).
or a question of whether we are brave enough to ask questions whose answers could change us.
or (for example) a question of whether it is necessarily good for us to ask questions whose answers will give us more freedom.
I am not an expert, and it has to be based on facts about your nervous system. So you could start with several experiments (blood tests etc.). You could change your diet, sleep more, etc.
About rationality and LessWrong -> could you focus your fears on one thing? For example, forget the quantum world and focus on superintelligence? I mean, could you utilize the power you have in your brain?
You are talking about rationality and about fear. Your protocol could have several independent layers. You seem to think that your ideas produce your fear, but it could also be the opposite: your fear could produce your ideas (and it is definitely very probable that fear has an impact on your ideas, at least on their content). So you could analyze the rational questions on LessWrong and independently work on the irrational part (= fear etc.) with therapists. There could be physical or chemical reasons why you worry more than other people. Your protocol for dangerou...
Jared Diamond wrote that North America had no good animals for domestication (sorry, I don't remember in which book). That could be a showstopper for using the wheel on a large scale.
@Nozick: we are plugged into a machine (the Internet) and virtual realities (movies, games). Do we think that is wrong? Probably it is a question about the level of connection to reality?
@Häggström: there is a contradiction in the definition of what is better. F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for.
@CEV: time is only one dimension in the space of conditions which could affect our decisions. Human cultures choose cannibalism in some situations. An SAI could see several possible future decisions depending on sur...
This could be a bad mix ->
Our action: 1a) Channel manipulation: other sound, other image, other data & Taboo for AI: lying.
This taboo: "structured programming languages.", could be impossible, because structure understanding and analysing is probably integral part of general intelligence.
She could not reprogram itself in lower level programming language but emulate and improve self in her "memory". (She could not have access to her code segment but could create stronger intelligence in data segment)
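A minimal, purely illustrative sketch of that idea (the function names and the toy "improvement" are invented for this example): a program that never touches its own source can still assemble and run new code entirely in memory.

```python
# Toy illustration: the program does not modify its own file (its "code segment"),
# yet it builds and executes new, "improved" code purely in memory (its "data segment").
original_solver = lambda xs: sorted(xs)          # the fixed, shipped behaviour

# New behaviour is assembled as data (a string) and only then turned into code.
improved_source = """
def improved_solver(xs):
    # pretend this is a smarter algorithm the system discovered by itself
    return sorted(set(xs))
"""
namespace = {}
exec(improved_source, namespace)                 # nothing on disk changes
improved_solver = namespace["improved_solver"]

print(original_solver([3, 1, 3, 2]))             # [1, 2, 3, 3]
print(improved_solver([3, 1, 3, 2]))             # [1, 2, 3]
```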
Is "transcendence" third possibility? I mean if we realize that human values are not best and we retire and resign to control.
(I am not sure if it is not motivation selection path - difference is subtle)
BTW. if you are thinking about partnership - are you thinking how to control your partner?
Sorry for a question outside this particular topic.
When we started to discuss, I liked and proposed the idea of making a wiki page with the results of our discussion. Do you think we have any ideas worth collecting in a collaborative wiki page?
I think we have at least one: paulfchristiano's "cheated evolution": http://lesswrong.com/r/discussion/lw/l10/superintelligence_reading_group_3_ai_and_uploads/bea7
Could you add more?
It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.
There are two sorts of unsafe AI: one which cares and one which doesn't care.
The ignorant one is fastest: it only calculates the answer and doesn't care about anything else.
Friend and enemy both have to analyse additional things...
The other question is: what happens once you know the AI has a desire for subversion?
There are plenty of people cooperating with and supporting dictators and mafias for selfish reasons. We could expect the same in this area:
"I will support a dangerous AI because I will be richer, more powerful, or safer then."
I don't understand the downvote. Could it be explained, please?
Is it because his position equals what he declares and not what he really thinks?
Or because Bostrom's mimicry (= preferring, for some reasons, a negative, catastrophic, and alarmist position) is not assumed?
- Expose it to limited data about the world, or no data about the real world
- Provide it little or no information about its operators or the society they live in
- Provide it little or no information about its own hardware or software architecture
When I read "Oracle AI" I realized one possibility which is probably still overlooked.
"Dreams"
Erasing data on physical media is not perfect. Zero bit which was before nonzero could be read more slowly or quicker. It could allow SAI to see shadows of past data. Which could lead to phenomenon similar to dreams.
I am afraid that we have not precisely defined the term "goal", and I think we need to.
I am trying to analyse this term.
Do you think that today's computers have goals? I don't think so (but probably we have different understandings of this term). Are they useless? Do cars have goals? Are they without action and reaction?
Probably I could describe my idea more precisely in another way: in Bostrom's book there are goals and subgoals. Goals are ultimate, petrified, and strengthened; subgoals are particular, flexible, and temporary.
Could we think of an AI without goals but with ...
Could an AI be without any goals?
Would that AI be dangerous in the default doom way?
Could we create an AI which won't be a utility maximizer?
Would that AI need to maximize resources for itself?
Positive emotions are useful too. :)
I think that if SAIs have a social part, we need to think altruistically about them.
It could be wrong (and dangerous too) to think that they will be just slaves.
We need to start thinking positively about our children. :)
Just a little idea:
In one advertisement I saw an interesting pyramid with these levels (from top to bottom): vision -> mission -> goals -> strategy -> tactics -> daily planning.
I think if we want to analyse cooperation between an SAI and humanity, then we need interdisciplinary work (philosophy, psychology, mathematics, computer science, ...) on the (vision -> mission -> goals) part. (If humanity defines the vision and mission and the SAI derives the goals, that could be good.)
I am afraid that humanity has not properly defined/analysed either its vision or its mi...
This is what I meant: https://neurokernel.github.io/faq.html
But it is probably a bit more unfinished than I expected.
More sources of info: http://www.cell.com/current-biology/abstract/S0960-9822%2810%2901522-8 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3704784/ http://www.flycircuit.tw/ https://en.wikipedia.org/wiki/Drosophila_connectome
I am suggesting that the metastasis method of growth could be good for the first multicellular organisms, but unstable, not very successful in evolution, and probably rejected by every superintelligence as malign.
One mode could have the goal of being something like the graphite moderator in a nuclear reactor: to prevent an unmanaged explosion.
At this moment I just wanted to improve our view of the probability of there being only one SI in the starting period.
Think of the prisoner's dilemma!
What would aliens do?
Is a selfish (self-centered) reaction really the best possibility?
What will a superintelligence constructed by aliens do?
(No argument that human history is brutal and selfish.)
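A minimal sketch of the standard prisoner's dilemma payoffs (the numbers 5/3/1/0 are the usual illustrative values, not anything from the book):

```python
# Standard prisoner's dilemma with the usual illustrative payoffs:
# T=5 (temptation), R=3 (mutual cooperation), P=1 (mutual defection), S=0 (sucker).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

for mine in ("C", "D"):
    for theirs in ("C", "D"):
        my_payoff, their_payoff = PAYOFFS[(mine, theirs)]
        print(f"I play {mine}, they play {theirs}: I get {my_payoff}, they get {their_payoff}")

# Defection dominates each single round (5 > 3 and 1 > 0), yet mutual cooperation (3, 3)
# beats mutual defection (1, 1) -- which is why a purely selfish reaction
# need not be the best possibility when two superintelligences meet.
```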
Let us try to free our mind from associating AGIs with machines.
Very good!
But be honest! Aren't we (sometimes?) more machines serving genes/instincts than spiritual beings with free will?
When I was thinking about past discussions, I realized something like:
(selfish) gene -> meme -> goal.
When Bostrom is thinking about the probability of a singleton, I am afraid he overlooks the possibility of running multiple 'personalities' on one substrate. (We could suppose multiple teams having the possibility to run their projects on one piece of hardware, just as multiple teams could use the Hubble telescope to observe different objects.)
And not only the possibility, but probably also the necessity.
If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.
We need to analyze how slightly different goals could control each other.
A moral, humour, and spiritual analyzer/emulator. I would like to know more about these phenomena.
When we discussed evil AI, I was thinking (and still count it as plausible) about the possibility that self-destruction might not be an evil act. That the Fermi paradox could be explained as a natural law = the best moral answer for a superintelligence at some level.
Now I am thankful, because your comment enlarges the possibilities for thinking about Fermi.
We need not think only of self-destruction; we could think of modesty and self-sustainability.
Sauron's ring could be superpowerful, but clever Gandalf could (and did!) resist the offer to use it. (And used another ring to destroy str...
The market is more or less stabilized. There are powers and superpowers in some balance. (Gaining money could sometimes be an illusion, like betting (and winning) more and more in a casino.)
If you are thinking about money-making, you have to count the sum of all money in society: whether investments mean a bigger sum of value, or just an exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right? :)
In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference.
How?
Well, no life form has achieved what Bostrom calls a decisive strategic advantage. Instead, they live their separate lives in various environmental niches.
Ants are probably a good example of how organisational intelligence (?) could be an advantage.
According to the wiki, "Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass." See also the Google answer, the wiki table, or StackExchange.
Although we have to think carefully: apex predators do not usually form a large biomass. So it could be more complicated to define the success of a life form.
P...
It seems to again come down to the possibility of a rapid and unexpected jump in capabilities.
We could test it in a thought experiment.
A chess game: human grandmaster against an AI.
It is not rapid (no checkmate at the beginning).
We could also suppose one move per year to slow it down. That gives the AI a further advantage, because of its ability to concentrate for such a long time.
Capabilities:
a) Intellectual capabilities we could suppose to stay at the same level during the game (if it is played in one day; otherwise we have to consider Moore's law).
b) The human loses (step by step) positional an
One possibility for preventing a smaller group from gaining a strategic advantage is something like Operation Opera.
And that was only about nukes (see Elon Musk's statement)...
Lemma 1: A superintelligence could be slow. (Imagine, for example, an IQ test between Earth and Mars where the delay between question and answer is about half an hour. Or imagine a big, clever tortoise which could understand one sentence per hour but could then solve the Riemann hypothesis.)
Lemma 2: A human organization could rise quickly. (It is imaginable that billions join an organization within several hours.)
The next theorem is obvious :)
This is similar to the question about a ten-times-quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".
One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Twice as much money for the education process is more likely to give 2*N graduates after X years than N graduates after X/2 years.
Some parts of scientific acceleration have to wait years for new scientists. And 2 times more scientists doesn't mean 2 times more discoveries. Etc.
But then again, 1.5x more discoveries could bring 10x bigger profit!
We cannot assume only linear dependencies in such complex problems.
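A toy way to see the "hard to cheat" point above (all numbers are invented for illustration): extra budget scales how many pipelines run in parallel, not how long one of them takes.

```python
# Toy illustration: doubling the budget doubles throughput (parallel cohorts),
# but does not halve latency (the years one degree takes).
YEARS_PER_DEGREE = 5        # assumed fixed "natural" process time
GRADS_PER_COHORT = 100      # invented: graduates per cohort per unit of budget

def graduates(budget_units: int, years_elapsed: float) -> int:
    """Graduates produced so far, assuming money only adds parallel cohorts."""
    if years_elapsed < YEARS_PER_DEGREE:
        return 0                      # nobody has finished yet, no matter the budget
    return budget_units * GRADS_PER_COHORT

print(graduates(budget_units=1, years_elapsed=5))    # 100
print(graduates(budget_units=2, years_elapsed=5))    # 200  (2*N after X years)
print(graduates(budget_units=2, years_elapsed=2.5))  # 0    (not N after X/2 years)
```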
A difficult question. Do you mean also ten times faster to burn out? 10x more time to rest? Or, because of the simulation, no rest, just a reboot?
Or a permanent reboot to a drug-boosted level of brain emulation on a ten-times-quicker substrate? (I am afraid of a drugged society here.)
And I am also afraid that a ten-times-quicker farmer could not have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster.
What about ten-times-faster philosophical growth?
Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the actions we could take are very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It does not matter if you would quickly see that things are going the wrong way. You could still slowly lose, step by step, your rights and your power to act...
This probably needs more explanation. You could say that my reaction is not in the appropriate place; that is probably true. BCI we could define as a physical interconnection between brain and computer.
But I think at this moment we could (and should) also analyse trained "horses" with trained "riders", and also trained "pairs" (or groups?).
A better interface between computer and human could also be achieved along a noninvasive path = a better visual+sound+touch interface. (horse-human analogy)
So yes = I expect they could be substantially useful even if a direct physical interface turns out to be too difficult in the next decade(s).
This is also one of the points where I don't agree with Bostrom's (fantastic!) book.
We could use an analogy from history: human-animal = soldier + horse didn't need a physical interface (like in the Avatar movie) and still provided an awesome military advantage.
We could get something similar from better weak AI tools (probably with a better GUI, but it is not only about the GUI).
"Tools" don't need to have big general intelligence. They could be at horse level:
What we have in history is hackable minds which were misused to carry out the Holocaust. This could probably be one way to improve writings about AI danger.
But to answer question 1): it is too wide a topic! (Social hackability is only one possibility among AI superpower takeoff paths.)
For example, still missing (and probably going to stay missing) in the book:
a) How to prepare psychological training for human-AI communication (or for reading this book :P ).
b) AI's impact on religion.
etc.
But why did the evil AI collapse after the apocalypse?
Have you played this type of game?
[pollid:777]
I think that if you play on a big map (Freeciv supports really huge ones), then your goals (like in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round.
You could meditate on more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, etc., and for sure cooperation with some type of AI)...
This is a good point, which I would like to have analysed more precisely. (And I miss a deeper analysis in The Book :) )
Could we count the will (motivation) of today's superpowers = megacorporations as human or not? (And at what level could they control the economy?)
In other words: is Searle's Chinese room intelligent? (In the definition which The Book uses for (super)intelligence.)
And if it is, is it a human or an alien mind?
And could it be superintelligent?
What arguments could we use to prove that none of today's corporations (or states or their secret services) is superinte...
First of all, thanks for the work on this discussion! :)
My proposals:
There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do that.
But this is probably not a problem. If it is a problem then it is probably not solvable. We will see :)
One child could have two parents (and both could answer), so 598 is a questionable number.