Comments

Brit10

An interesting article on emergent capabilities and AI-based weapons systems that doesn't seem to have been on any LessWrong radars:

https://tomdispatch.com/emergent-ai-behavior-and-human-destiny/

Brit22

Are people aware that the British government will be ejected in a year's time, barring a miracle?

A year could do a lot of good (for example, the summit and focusing it on not-kill-everyoneism).

But beyond a year it will depend on Labour not reverting to the mean and losing focus. They are probably very worried about AIs saying mean things to disadvantaged groups or displacing workers too quickly - a trap the taskforce hasn't fallen into. This lack of focus is not because Labour are useless, but because they are simply not as unusually open to rationalist-adjacent arguments as the Sunak (and Johnson) regimes.

Brit10

Will international AI alignment cooperation trump the rights of weaker countries?

TLDR - Real cooperation on international AI regulation may only be possible through a much more peaceful but unsentimental foreign policy.

In 1987 President Reagan said to the United Nations, "how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world." Isn't an unaligned Artificial General Intelligence that alien threat? It's easy - and perhaps overly obvious and comforting - to say that humanity would unite, but now that we have this threat, what would that unity look like?

Here's one not necessarily comforting thought: the weak (nations) will get trampled further by the strong (nations). If cooperation rather than competition among powers is vital, then wouldn't we need to prioritise keeping powerful and potentially powerful countries onside - at least in AI terms - over other ideological concerns? To see what this looks like, let's consider some of those powerful countries:

  • China - the obvious one; would we need to annoy the national security hawks over Taiwan, but also decent, humane liberals over Tibet and Sichuan?
  • Russia - Ukraine would annoy just about everybody
  • Israel - well, this happens already because of domestic considerations, but it might reverse domestic political calculations on:
  • UK - the British are a big player in AI (and seemingly more important than the EU) so would needling them about Northern Ireland really be worth ticking off the one reliable ally the US has with clout?

This is before looking at the role of countries that may be important in relation to AI, that the US wouldn't want going rogue on regulation, and that neighbour China - such as Japan, South Korea and the chip superpower Taiwan.

Brit51

Wikipedia is a trusted brand for introducing new topics and has great placement with search engines. There are three potential headaches, though.

(1) The Neutral Point of View (NPOV) rules mean, in theory, that one side of an argument can't dictate how a topic is dealt with, so even without a concerted effort, weasel words and strained "balance" may creep into various areas. 93% chance of happening. The impact on bias will be low, creating the odd headache but potentially improving the article. About a 30% chance of making some of the article unreadable to a newcomer and a 15% chance of the lead being unreadable.

(2) A determined and coordinated group of editors with an agenda (or even a determined individual, which won't apply to an article as closely watched as AI alignment but may to more specialised subsidiary articles) can seriously change an article, particularly over the long term. Another commenter has said that this process seems to have happened with the Effective Altruism article. So if (when) alignment becomes controversial it will attract detractors, and these may be a determined group. 70% chance of attracting at least one determined individual, and a further 70% chance of them making sustained efforts on less-watched articles. 30% chance of attracting a coordinated group of editors.

(3) Wikipedia culture skews to the American left. This will probably work in AI alignment's favour, as it seems to be on track to become a cultural marker for the blue side, but it may create a deeply hostile environment on Wikipedia if alignment becomes something that liberalism finds problematic - for example, as an obstacle to Democratic donors in tech or as a rival to greenhouse-warming worries (I don't think either will happen, just that there are still plausible routes for the American left to become hostile to AI alignment). 15% chance of this happening, but if it does, the article will over time become actively harmful to awareness of AI alignment.

There are two mitigations, other than edit warring, that I can see; there may be many others.

(1) Links to other AI alignment resources, particularly in citations (these tend to survive unless there's a particularly effective and malevolent editor). Embedding citations means the arguments can still be seen by more curious readers.

(2) Creating or reinforcing a recognised site which is acknowledged as the go-to introduction. Wikipedia only stays the first stop if there are no established alternatives.

I think this is a great achievement and I wish I'd had the sense to be part of it, so none of this detracts from the achievement or the recognition that it was much needed. And despite the implied criticism of Wikipedia, I think it's a wonderful resource - just one with its dangers.