Liso comments on Superintelligence 5: Forms of Superintelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
We must not underestimate slow superintelligences. Our judiciary is slow too, and so some of the actions we could take in response would also be very slow.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It does not matter if you quickly see that things are going the wrong way. You could still lose your rights and your power to act, step by step... (like slowly losing pieces in a chess game)
If powerful entities in our world will be (or already are?) driven by poorly designed goals - for example "maximize profit" - then they could be very dangerous to humanity.
I really don't want to spoil our discussion with politics; rather, I would like to see a rational discussion of all the existential threats that could arise from superintelligent beings/entities.
We must not underestimate any form, or any method, of our possible doom.
With big data coming, our society is more and more ruled by algorithms. And the algorithms are getting smarter and smarter.
Algorithms are not independent of the entities that have enough money or enough political power to use them.
BTW, Bostrom wrote (sorry - not in a chapter we have discussed yet) about possible perverse instantiation resulting from a goal poorly designed by a programmer. I am afraid that in our society it will be a manager or a politician who will design (or is already designing) the goal. (We have to find a way for a philosopher and a mathematician to be involved as well.)
In my opinion, the first superintelligence (if not a singleton) will most probably be (or already is) a 'mixed form': some group of well-organized people (don't forget the lawyers) with a big database and a supercomputer.
The stages after an intelligence explosion could take any other form.