GoteNoSente

It is not at all clear to me that most of the atoms in a planet could be harnessed for technological structures, or that doing so would be energy-efficient. Most of the mass of an Earth-like planet is iron, oxygen, silicon and magnesium, and while useful things can be made from these elements, I would strongly suspect that other elements also needed for those useful things would run out long before the planet had been disassembled. By historical precedent, I would expect that an AI civilization on Earth will ultimately be able to use only a tiny fraction of the material in the planet, much as only a very small fraction of a percent of the carbon in the planet is used by the biosphere, despite biological evolution having optimized organisms for billions of years towards using all resources available for life.

The scenario of a swarm of intelligent drones eating up a galaxy and blotting out its stars can, I think, be dismissed empirically as very unlikely, because it would be visible over intergalactic distances. Unless we are the only civilization in the observable universe in the present epoch, we would see galaxies with dark spots or strangely altered spectra somewhere. So this isn't happening anywhere.

There are probably some historical analogs for the scenario of a complete takeover, but they are very far in the past, and have had more complex outcomes than intelligent grey goo scenarios normally portray. One instance I can think of is the Great Oxygenation Event. I imagine an observer back then might have envisioned that the end result of the evolution of cyanobacteria doing oxygenic photosynthesis would be the oceans and lakes and rivers all being filled with green slime, with a toxic oxygen atmosphere killing off all other life. While indeed this prognosis would have been true to a first order approximation - green plants do dominate life on Earth today - the reality of what happened is infinitely more complex than this crude picture suggests. And even anaerobic organisms survive to this day in some niches.

The other historical precedent that comes to mind would be the evolution of organisms that use DNA to encode genetic information using the specific genetic code that is now universal to all life, in whatever pre-DNA world existed at the beginning of life. These seem to have indeed completely erased all other kinds of life (claims of a shadow biosphere of more primitive organisms are all dubious to my knowledge), but also have not resulted in a less complex world.

In chess, I think there are a few reasons why handicaps are not more broadly used:

  1. Chess in its modern form is a game of European origin, and my impression is that European cultures have always valued "equal starting conditions for everyone" more highly than "similar chances for everyone to reach their desired outcome". This may have made handicaps less appealing, because with a handicap, the game starts from a position that is essentially known to be lost for one side.
  2. There is no good way to combine handicaps in chess with Elo ratings, making it impossible to have rated handicap games. It is also not easy to use handicap results informally to predict the optimal handicap between players who haven't met (if John can give me a knight, and I can give James f7 pawn and move, it is not at all clear what the appropriate handicap for John against James would be). This is different in Go. 
  3. Material handicaps significantly change the flow of the game (the stronger side can try to just trade down into a winning endgame, and for larger handicaps, this becomes easy to execute), and completely invalidate opening theory. This is different in Go and also in more chess-like games such as Shogi, where I understand handicaps are more popular.
  4. Professional players (grandmasters and above) are probably strong enough to convert even a small material handicap like pawn and move fairly reliably into a win against any human (computers, at a few hundred Elo points above the best humans, can probably give about a pawn to top players and still win, at tournament time controls). This implies that any handicap system would need only a very small range of handicaps in games between players strong enough for their games to be of public interest (in Go, I understand there are perhaps 3-4 handicap stones between the weakest and the best professional, and maybe two stones between the best professionals and the best computers). This would have been different in the 19th century, when material handicaps in chess were more popular than today.


That said, chess does use handicaps in some settings, just not material ones. In informal blitz play, time handicaps are sometimes used, often in a format where both players start at five minutes for the game, and the winner of each game gives up a minute of starting time, until one of the players arrives at zero minutes. Simultaneous exhibitions and blindfold play are also handicaps that are practiced relatively widely. Judging just by the number of games played in each handicap mode, though, I'd say that time handicap is by far the most popular variant at the club player level.
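The informal blitz ladder described above can be sketched in a few lines. The player names and the game-result encoding here are illustrative assumptions, not any standard notation:

```python
# Sketch of the informal blitz time-handicap ladder: both players start
# at five minutes, and the winner of each game gives up one minute of
# starting time, until someone would start with zero minutes.

def handicap_ladder(results, start_minutes=5):
    """results: sequence of 'A' or 'B' naming each game's winner.
    Returns each player's starting time (in minutes) after the series,
    stopping once a player's starting time reaches zero."""
    times = {"A": start_minutes, "B": start_minutes}
    for winner in results:
        times[winner] -= 1          # the winner's starting time shrinks
        if times[winner] == 0:      # ladder ends at zero minutes
            break
    return times

# Example: player A wins five games in a row and runs out of ladder.
print(handicap_ladder("AAAAA"))  # {'A': 0, 'B': 5}
```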

Isn't the AI box game (or at least its logical core) played out a million times a day between prisoners and correctional staff, with the prisoners losing almost all the time? Real prison escapes (i.e., escapes other than failing to return from sanctioned time outside) are, to my understanding, extremely rare.

I think the most important things that are missing in the paper currently are these three points:

1. Comparison to the best Leela Zero networks

2. Testing against strong (maybe IM-level) humans at tournament time controls (or a clear claim that we are talking about blitz elo, since a player who does no explicit tree search does not get better if given more thinking time).

3. Games against traditional chess computers in the low-GM/strong-IM strength bracket would also be nice to have, although maybe not scientifically compelling. I sometimes play those for fun with LC0, and it is utterly fascinating to see how LC0 with current transformer networks, at one node per move, very often manages to crush this type of opponent by pure positional play, i.e. in a way that makes winning against these machines look extremely simple.

I do not see why any of these things will be devalued in a world with superhuman AI.

At most of the things I do, there are many other humans who are vastly better than me. For some intellectual activities, there are machines that are vastly better than any human. Neither of these stops humans from enjoying improving their own skills and showing them off to other humans.

For instance, I like to play chess. I consider myself a good player, and yet a grandmaster would beat me 90-95 percent of the time. They, in turn, would lose on average 8.5-1.5 in a ten-game match against a world-championship-level player. And a world champion will lose almost all of their games against Stockfish running on a smartphone. Stockfish running on a smartphone will, in turn, lose most of its games against Stockfish running on a powerful desktop computer, or against Leela Chess Zero running on anything with a decent GPU. And those opponents would probably, in turn, lose almost all of their games against an adversary with infinite retries, i.e. one that can target and exploit their weaknesses perfectly. That is how far away I am from playing chess perfectly.
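The rating gaps implied by the score rates above can be checked with the standard logistic Elo formula. The specific percentages are the ones from the paragraph; the functions are just a sketch of the textbook formula and its inverse:

```python
import math

def elo_expected(diff):
    """Expected score for the stronger player at a rating gap `diff`,
    using the standard logistic Elo formula."""
    return 1.0 / (1.0 + 10 ** (-diff / 400))

def gap_for_score(p):
    """Rating gap implied by an expected score p (inverse of the above)."""
    return 400 * math.log10(p / (1 - p))

# A 90-95% score suggests the grandmaster is roughly 380-510 Elo
# points stronger than me:
print(round(gap_for_score(0.90)))  # ~382
print(round(gap_for_score(0.95)))  # ~512
# An 8.5-1.5 match result (an 85% score) corresponds to a gap of ~300:
print(round(gap_for_score(0.85)))  # ~301
```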

And yet, the emergence of narrow superintelligence in chess has increased and not diminished my enjoyment of the game. It is nice to be able to play normally against a human, and to then be able to find out the truth about the game by interactively looking at candidate moves and lines that could have been played using Leela. It is nice to see a commented world championship game, try to understand the comments, make up one's own mind about them, and then explore using an engine why the alternatives that one comes up with (mostly) don't work.

If we get superintelligence, that same accessibility of tutoring at beyond the level of any human expert will be available in all intellectual fields. I think in the winning scenario, this will make people enjoy a wide range of human activities more, not less.

As an additional thought regarding computers, it seems to me that participant B could be replaced by a weak computer in order to provide a consistent experimental setting. For instance, Leela Zero running just the current T2 network (no look-ahead) would provide an opponent that is probably of master-level strength and should easily crush most unassisted human opponents, while being perfectly reproducible and still beatable.
 

I think having access to computer analysis would allow the advisors (both honest and malicious) to provide analysis far better than their normal level of play, and would allow the malicious advisors in particular to set very deep traps. The honest advisor, on the other hand, could use the computer analysis to find convincing refutations of any traps the dishonest advisors are likely to set, so I am not sure whether the task of the malicious side becomes harder or easier in that setup. I don't think reporting reasoning is much of a problem here, as a centaur (a chess player consulting an engine) can certainly give reasons for their moves (even if they sometimes won't fully understand the engine's advice and will be wrong about why a suggested move is good).

It does make the setup more akin to working with a superintelligence than working with an AGI, though, as the gulf between engine analysis and the analysis that most/all humans can do unassisted is vast.

Answer by GoteNoSente

I could be interested in trying this, in any configuration. Preferred time control would be one move per day. My lichess rating is about 2200.

Are the advisors allowed computer assistance, do the dishonest and the honest advisor know who is who in this experiment, and are the advisors allowed to coordinate? I think those parameters would make a large difference potentially in outcome for this type of experiment.

It is possible to play funny games against it, however, if one exploits the fact that it is at heart a storytelling, human-intent-predicting system. For instance, this works (human playing White):

1. e4 e5 2. Ke2 Ke7 3. Ke3 Ke6 4. Kf3 Kf6 5. Kg3 Kg6 6. Kh3 Kh6 7. Nf3 Nf6 8. d4+ Kg6 9. Nxe5# 1-0
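The game above can be replayed mechanically to confirm that every move is legal and the final position really is checkmate. This sketch assumes the third-party python-chess library (`pip install chess`) is available:

```python
import chess  # python-chess library, assumed installed (pip install chess)

# Replay the king-walk game move by move; push_san raises an
# exception if any move in the sequence is illegal.
moves = ("e4 e5 Ke2 Ke7 Ke3 Ke6 Kf3 Kf6 Kg3 Kg6 "
         "Kh3 Kh6 Nf3 Nf6 d4+ Kg6 Nxe5#").split()

board = chess.Board()
for san in moves:
    board.push_san(san)

print(board.is_checkmate())  # True
print(board.result())        # 1-0
```

The final move works because 8. d4+ was a discovered check from the c1-bishop, and after 8...Kg6 the knight check on e5 leaves the black king with no flight squares.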

A slight advantage in computer security research won't give an entity the ability to take over the internet, not by a long shot, especially without backing from nation-state actors. The NSA, for instance, has as an organisation been good at hacking for a long time, and while it certainly can do and has done many interesting things, it would not be able to take over the world, probably not even if it tried with the backing of the full force of the US military.

Indeed, for some computer security problems, even superintelligence might not confer any advantage at all! It is perfectly possible, say, that a superintelligence running on a Matrioshka brain a million years hence will have found only modest improvements on the current best attacks against full AES-128. Intelligence allows one to do math better and, occasionally, to find ways and means that side-step mathematical guarantees, but it does not render the adversary omnipotent; an ASI still has to accept (or negotiate around) physical, mathematical and organizational limits on what it can do. In that sense, I think much of the ASI safety debate runs on overpowered adversary models, which will in the long run be bad both for achieving ASI safety (because with an overpowered adversary model, real dangers risk remaining unidentified and unfixed) and for realizing the potential benefits of creating AGI/ASI.
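A back-of-envelope calculation illustrates why even a large attack improvement buys little against AES-128. The search rate of 10^18 keys per second is an arbitrary assumption for illustration, far beyond any current hardware, not a real benchmark:

```python
# Expected time to exhaust half of an n-bit key space at an assumed
# search rate. The rate is a deliberately generous made-up number.

SECONDS_PER_YEAR = 3600 * 24 * 365
RATE = 1e18  # assumed keys tested per second (illustrative only)

def years_to_search(key_bits):
    """Expected years to search half of a key space of 2**key_bits keys."""
    return (2 ** key_bits / 2) / RATE / SECONDS_PER_YEAR

print(f"{years_to_search(128):.3e}")  # full AES-128 key space
print(f"{years_to_search(126):.3e}")  # after a hypothetical 4x speedup
```

Both figures come out above a trillion years: shaving two bits off the effective key length, roughly the scale of improvement known attacks offer, changes an absurd number into a slightly smaller absurd number.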
