Comment author: Liso 26 November 2014 06:34:11AM 0 points [-]

Just a little idea:

In an advertisement I once saw an interesting pyramid with these levels (from top to bottom): vision -> mission -> goals -> strategy -> tactics -> daily planning.

I think that if we want to analyse cooperation between SAI and humanity, we need interdisciplinary work (philosophy, psychology, mathematics, computer science, ...) on the (vision -> mission -> goals) part. (If humanity defines the vision and mission and the SAI derives the goals from them, that could be good.)

I am afraid that humanity has not properly defined or analysed either its vision or its mission. And different groups and individuals have contradictory visions, missions and goals.

One big problem with SAI is not the SAI itself, but that we will have BIG POWER while we still don't know what we really want (and what we really want to want).

Bostrom's book seems to follow the paradigm that a goal is something at the top, rigid and stable, which cannot be dynamic and flexible like a vision. It could well be true that one stupidly defined goal (the paperclipper) would be unchangeable and ultimate. But we probably have more options for defining an SAI's personality.

Comment author: Sebastian_Hagen 09 November 2014 07:27:53PM *  0 points [-]

You're suggesting a counterfactual trade with them?

Perhaps that could be made to work; I don't understand those well. It doesn't matter to my main point: even if you do make something like that work, it only changes what you'd do once you run into aliens with which the trade works (you'd be more likely to help them out and grant them part of your infrastructure or the resources it produces). Leaving all those stars on to burn through resources without doing anything useful is just wasteful; you'd turn them off, regardless of how exactly you deal with aliens. In addition, the aliens may still have birthing problems that they could really use help with; you wouldn't leave them to face those alone if you made it through that phase first.

Comment author: Liso 10 November 2014 09:50:58AM *  0 points [-]

I am suggesting that a metastasis-like method of growth might be good for the first multicellular organisms, but is unstable, not very successful in evolution, and would probably be rejected by every superintelligence as malign.

Comment author: diegocaleiro 08 November 2014 05:48:57PM 0 points [-]

I'll coin the term Monolithic Multipolar for what I think you mean here: one stable structure that has different modes activated at different times, where these modes don't share goals - like a human, especially a schizophrenic one.

The problem with Monolithic Multipolarity is that it is fragile. In humans, what causes us to behave differently and want different things at different times is not accessible for revision; otherwise each party would have an incentive to steal the other's time. An AI would not be bound by such a triviality, since by definition an explosively recursively self-improving AI can rewrite itself.

We need other people, but Bostrom doesn't easily leave even simple things out.

Comment author: Liso 10 November 2014 09:26:08AM *  0 points [-]

One mode could have the goal of acting like the graphite moderator in a nuclear reactor: preventing an unmanaged explosion.

At the moment I just wanted to improve our view of the probability that there will be only one SI in the starting period.

Comment author: Sebastian_Hagen 07 November 2014 09:22:31PM *  1 point [-]

I fully agree with you. We are for sure not alone in our galaxy.

That is close to the exact opposite of what I wrote; please re-read.

AGI might help us to make our world a self-stabilizing, sustainable system.

There are at least three major issues with this approach, any one of which would make it a bad idea to attempt.

  1. Self-sustainability is very likely impossible under our physics. This could be incorrect - there's always a chance our models are missing something crucial - but right now, the laws of thermodynamics strongly point at a world where you need to increase entropy to compute, and so the total extent of your civilization will be limited by how much negentropy you can acquire (a rough bound is sketched after this list).

  2. If you can find a way to avoid 1., you still risk someone else (read: independently evolved aliens) with a less limited view gobbling up the resources, and then knocking on your door to get yours too. There's some risk of this anyway, but deliberately leaving all these resources lying around means you're not just exposed to greedy aliens in your past, you're also exposed to ones that evolve in the future. The only sufficient response to that would be if you could not only get unlimited computation and storage out of limited material resources, but also gain an insurmountable defense to let you keep it against a less restrained attacker. This is looking seriously unlikely!

  3. Let's say you get all of these, unlikely though they look right now. Ok, so what leaving the resources around does in that scenario is to relinquish any control over what newly evolved aliens get up to. Humanity's history is incredibly brutal and full of evil. The rest of our biosphere most likely has a lot of it, too. Any aliens with similar morals would have been incredibly negligent to simply let things go on naturally for this long. And as for us, with other aliens, it's worse; they're fairly likely to have entirely incompatible value systems, and may very well develop into civilizations that we would consider a blight on our universe ... oh, and also they'd have impenetrable shields to hide behind, since we postulated those in 2. So in this case we're likely to end up stuck with the babyeaters or their less nice siblings as neighbors. Augh!
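A rough back-of-envelope way to see the limit in point 1 (the figure of merit here is just standard thermodynamics, nothing specific to any AI scenario): by Landauer's principle, erasing one bit of information at temperature $T$ costs at least $k_B T \ln 2$ of free energy. So a civilization whose total acquirable negentropy corresponds to free energy $F$ can perform at most

    $$N_{\text{ops}} \;\le\; \frac{F}{k_B T \ln 2}$$

irreversible bit operations over its whole existence. Reversible computing can stretch this budget, but error correction, measurement and output still consume negentropy, so the qualitative limit stands.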

And beyond that, I don't think it even makes the FAI problem any easier. There's nothing inherently destabilizing about an endowment grab. You research some techs, you send out a wave of von Neumann probes, make some decisions about how to consolidate or distribute your civilization according to your values, and have the newly built intergalactic infrastructure implement your values. That part is unrelated to any of the hard parts of FAI, which would still be just as hard if you somehow wrote your AI to self-limit to a single solar system. The only thing that gets you is less usefulness.

Comment author: Liso 07 November 2014 09:47:55PM 1 point [-]

Think prisoner's dilemma!

What would aliens do?

Is a selfish (self-centered) reaction really the best possibility?

What will the superintelligence which the aliens construct do?

(there is no disputing that human history is brutal and selfish)

Comment author: TRIZ-Ingenieur 06 November 2014 09:29:03PM *  2 points [-]

Let us try to free our mind from associating AGIs with machines. They are totally different from automata. AGIs will be creative, will learn to understand sarcasm, will understand that women in some situations say no and mean yes.

On your command to add 10 to x, an AGI would reply: "I love to work for you! At least once a day you try to fool me - I am not asleep and I know that + 100 would be correct. Shall I add 100?"

Comment author: Liso 07 November 2014 09:17:50PM 0 points [-]

Let us try to free our mind from associating AGIs with machines.

Very good!

But be honest! Aren't we (sometimes?) more like machines serving our genes/instincts than spiritual beings with free will?

Comment author: diegocaleiro 06 November 2014 07:57:04AM *  2 points [-]

Though Bostrom seems right to talk about better transmission - which could have been parsed into more reliable, robust, faster, compact, nested, etc. - he stops short of looking deeply into what made cultural transmission better. To claim that a slight improvement in (general) mental faculties did it would be begging the question. Brilliant though he is, Bostrom is "just" a physicist, mathematical logician, philosopher, economist and computational neuroscientist who invented the field of existential risks and revolutionized anthropics, so his knowledge of cultural evolution and this transition is somewhat speculative. That's why we need other people :) In that literature we have three main contenders for what allowed human prowess to reshape the Earth:

Symbolic ability: the ability to decently process symbols - which have a technical definition hard to describe here - and understand them in a timely fashion is unique to humans and some other currently extinct anthropoids. Terrence Deacon argues for this being what matters in The Symbolic Species.

Iterative recursion processing: This has been argued in many styles.

  • Chomsky argued for the primacy of recursion as a requisite ability for human language in the late fifties

  • Pinker endorses this in his Language Instinct and in The Stuff of Thought

  • The Mind Is A Computer metaphor (Lakoff 1999) has been widely adopted and very successful memetically, and though it has other distinctions, the main distinction from "Mind Is A Machine" is that recursion is involved in computers, but not in all machines. The computational theory of mind thrived in the hands of Pinker, Koch, Dennett, Kahneman and more recently Tononi. Within LW and among programmers, Mind Is A Computer is frequently thought to be the fundamental metaphysics of mind, and a final shot at the ontological constituent of our selves - a perspective I considered naïve here.

Ability to share intentions: the ability to share goals and intentions with other conspecifics, and to parallelize work in virtue of doing so. Tomasello (2005)

Great books on evolutionary transmission are Not By Genes Alone, The Meme Machine and LWer Tim Tyler's Memetics.

Comment author: Liso 07 November 2014 09:05:05PM *  0 points [-]

When I was thinking about past discussions, I realized something like:

(selfish) gene -> meme -> goal.

When Bostrom thinks about the probability of a singleton, I am afraid he overlooks the possibility of running several 'personalities' on one substrate. (We could suppose several teams having the possibility to run their projects on one piece of hardware, just as several teams can use the Hubble telescope to observe different objects.)

And it is not only a possibility but probably also a necessity.

If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.

We need to analyze how slightly different goals could control each other.

Comment author: KatjaGrace 04 November 2014 02:08:28AM 2 points [-]

If you had a super-duper ability to design further cognitive abilities, which would you build first? (suppose that it's only super enough to let you build other super-duper abilities in around a year, so you can't just build a lot of them now) (p94)

Comment author: Liso 07 November 2014 08:36:55PM *  0 points [-]

A moral, humour and spiritual analyzer/emulator. I would like to know more about these phenomena.

Comment author: TRIZ-Ingenieur 06 November 2014 10:58:49PM 0 points [-]

I fully agree with you. We are for sure not alone in our galaxy. But I disagree with Bostrom's instability thesis of either extinction or cosmic endowment. This duopolar final outcome is reasonable if the world is modelled by differential equations, which I doubt. AGI might help us to make our world a self-stabilizing, sustainable system. An AGI that follows goals of sustainability is by far safer than an AGI striving for cosmic endowment.

Comment author: Liso 07 November 2014 08:34:07PM -1 points [-]

When we discussed evil AI, I was thinking (and still count it as plausible) about the possibility that self-destruction could be a non-evil act - that the Fermi paradox could be explained as a natural law, the best moral answer for a superintelligence at some level.

Now I am thankful, because your comment enlarges the possibilities for thinking about Fermi.

We need not think only of self-destruction - we could also think of modesty and self-sustainability.

Sauron's ring could be super-powerful, but clever Gandalf could (and did!) resist the offer to use it. (And used another ring to destroy the strongest one.)

We could think of hidden places in the universe (like Lothlorien or Rivendell) where clever owners use limited but non-destructive powers.

Comment author: KatjaGrace 28 October 2014 01:28:10AM 4 points [-]

If someone has some money, they can invest it to get more money. Do you know what the difference is between money and intelligence that makes it plausible to expect an abrupt intelligence explosion, but reasonable to expect steady exponential growth for financial investment returns?

Comment author: Liso 01 November 2014 11:52:14AM -1 points [-]

The market is more or less stabilized; there are powers and superpowers in some balance. (Gaining money can sometimes be an illusion, like betting (and winning) more and more in a casino.)

If you are thinking about making money, you have to count the sum of all money in society - whether investment means a bigger sum of value, or just an exchange in economic wars, or just inflation. (If foxes invest more in hunting and eat more rabbits, there could be more foxes, right? :)

In the AI sector there is a much higher probability of a phase transition (= explosion). I think that's the difference.

How?

  1. Possibility: there is probably already enough hardware, and we are just waiting for the spark of a new algorithm.

  2. Possibility: if we count the agricultural revolution as an explosion, we could also count the massive change in productivity from AI (which is probably obvious).
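A toy numerical sketch of that difference (the rate, the quadratic feedback and the cutoff are illustrative assumptions of mine, not anything from Bostrom or Katja's question): if returns are merely proportional to what you already have, you get steady exponential growth, like compound interest; if each gain also makes further gains easier, the trajectory blows up in finite time - a phase transition rather than steady compounding.

    # Toy comparison: compound interest vs. a self-reinforcing feedback.
    #   dx/dt = r * x     -> ordinary exponential growth (money)
    #   dx/dt = r * x**2  -> hyperbolic growth, finite-time blow-up (toy "explosion")
    # r = 0.07 and the quadratic exponent are illustrative assumptions.

    def simulate(rate_fn, x0=1.0, dt=0.01, t_max=20.0, cap=1e12):
        """Euler-integrate dx/dt = rate_fn(x); stop early if x exceeds cap."""
        x, t = x0, 0.0
        while t < t_max and x < cap:
            x += rate_fn(x) * dt
            t += dt
        return t, x

    t_money, money = simulate(lambda x: 0.07 * x)     # steady compounding
    t_ai, ai = simulate(lambda x: 0.07 * x ** 2)      # feedback on the growth rate itself

    print(f"money after {t_money:.1f} years: {money:.2f}")  # ~exp(0.07 * 20), about 4
    print(f"toy 'AI' hits the cap at t ~= {t_ai:.1f}")      # blows up near t = 1/0.07, about 14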
