Stephen Fowler

I'd like access to it. 

I agree that the negative outcomes from technological unemployment do not get enough attention, but my model of how the world will implement Transformative AI is quite different from yours.

Our current society doesn't say "humans should thrive", it says "professional humans should thrive"

Let us define workers to be the set of humans whose primary source of wealth comes from selling their labour. This is a very broad group that includes people colloquially called working class (manual labourers, baristas, office workers, teachers, etc.), but it also includes many people who are well remunerated for their time, such as surgeons, senior developers or lawyers.

Ceteris paribus, there is a trend that those who can perform more valuable, difficult and specialised work can sell their labour at a higher price. Among workers, those who earn more are usually "professionals". I believe this is essentially the same point you were making.

However, this is not a complete description of who society allows to "thrive". It neglects a small group of people with very high wealth: those who have moved beyond needing to sell their labour and are instead rewarded for owning capital. It is this group that society says should thrive, and one of the strongest predictors of whether you will be a member is the amount of wealth your parents give you.

The issue is that this small group owns a disproportionate share of frontier AI companies.

Assuming we develop techniques to reliably align AGIs to arbitrary goals, there is little reason to expect private entities to intentionally give up power (doing so would be acting contrary to the interests of their shareholders).

Workers unable to compete with artificial agents will find themselves relying on the charity and goodwill of a small group of elites. (And of course, as technology progresses, this group will eventually include all workers.)

Those lucky enough to own substantial equity in AI companies will thrive as the majority of wealth generated by AI workers flows to them.

In itself, this scenario isn't an existential threat. But I suspect many humans would consider their descendants being trapped in serfdom to be a very bad outcome.

I worry that a focus on preventing the complete extinction of the human race means we are moving towards AI Safety solutions that lead to rather bleak futures in the majority of timelines.[1]
 

  1. ^

    My personal utility function treats permanent techno-feudalism, which forever removes the agency of the majority of humans, as only slightly better than everyone dying.

    I suspect that some fraction of humans currently alive also consider a permanent loss of freedom to be only marginally better (or even worse) than death.

Assuming I blend in and speak the local language, within an order of magnitude of 5 million (edit: USD)

I don't feel your response meaningfully engaged with either of my objections.

I strongly downvoted this post.

1. The optics of actually implementing this idea would be awful. It would permanently damage EA's public image and be raised as a cudgel in every single exposé written about the movement. To the average person, concluding that years in the lives of the poorest are worth less than those of someone in a rich, first-world country is an abhorrent statement, regardless of how well crafted your argument is.

2.1 It would also be extremely difficult for rich foreigners to objectively assess the value of QALYs in the most globally impoverished nations, regardless of good intentions and attempts to overcome biases.

2.2 There is a fair amount of arbitrariness in the metrics chosen to value someone's life. You've mentioned women's rights, but we could alternatively look at the suicide rate as a lower bound on the number of women in a society who believe additional years of their life have negative value. By choosing this reasonable-sounding metric, we could conclude that a year of a woman's life in South Korea is worth much less than a year of a woman's life in Afghanistan. How confident are you that you'll be able to find metrics which accurately reflect the value of a year of someone's life?

The error in reasoning comes from making a utilitarian calculation without giving enough weight to the potential for flaws within the reasoning machine itself. 

what does it mean to keep a corporation "in check"
I'm referring to effective corporate governance: monitoring, anticipating and influencing decisions made by the corporation via a system of incentives and penalties, with the goal of ensuring that the corporation's actions are not harmful to broader society.

do you think those mechanisms will not be available for AIs
Hopefully they will be, but there are reasons to think that governing a corporation which is controlled (partially or wholly) by AGIs, or which directly controls one or more AGIs, may be very difficult. I will suggest one such reason below, but it isn't the only one.

Recently we've seen that national governments struggle to effectively tax multinational corporations. This is partly because the amount of money at stake is so great that multinationals are incentivised to invest heavily in teams of accountants to reduce their tax burden, or to pay money directly to politicians in the form of donations to shape the legal environment in their favour. It becomes harder to govern an entity as that entity invests more resources into finding flaws in your governance strategy.

Once you have the capability to harness general intelligence, you can invest a vast amount of intellectual "resources" into finding loopholes in governance strategies. So while many of the same mechanisms will be available for AIs, there's reason to think they might not be as effective.

I'm not sure I could give a meaningful number with any degree of confidence. I lack expertise in corporate governance, bio-safety and climate forecasting. Additionally, for the condition that corporations are left "unchecked" to be satisfied, there would need to be a dramatic Western political shift, which makes speculating extremely difficult.

I will outline my intuition for why (very large, global) human corporations could pose an existential risk (conditional on the existential risk from AI being negligible and global governance being effectively absent).


1.1 In the last hundred years, we've seen that (some) large corporations are willing to cause harm on a massive scale if it is profitable to do so, either intentionally or through neglect. Note that these decisions are mostly "rational" if your only concern is money.

Copying some of the examples I gave in No Summer Harvest:

1.2 Some corporations have also demonstrated they're willing to cut corners and take risks at the expense of human lives.

2. Without corporate governance, immoral decision-making and risk-taking behaviour could be expected to increase. If the net benefit of taking an action improves because there are fewer repercussions when things go wrong, such actions should reasonably be expected to increase in frequency.

3. In recent decades there has been a trend (at least in the US) towards greater stock market concentration. For large corporations to pose an existential risk, this trend would need to continue until individual decisions made by a small group of corporations could affect the entire world.

I am not able to describe the exact mechanism by which unchecked corporations would pose an existential risk, just as the exact mechanism for an AI takeover is still speculation.

You would have a small group of organisations responsible for deciding the production activities of large swaths of the globe. Possible mechanisms include:

  • Irreparable environmental damage.
  • A widespread public health crisis due to non-obvious negative externalities of production.
  • Premature widespread deployment of biotechnology with unintended harms.

I think if you're already sold on the idea that "corporations are risking global extinction through the development of AI" it isn't a giant leap to recognise that corporations could potentially threaten the world via other mechanisms. 

"This argument also appears to apply to human groups such as corporations, so we need an explanation of why those are not an existential risk"

I don't think this is necessary. It seems pretty obvious that (some) corporations could pose an existential risk if left unchecked.

Edit: And depending on your political leanings and concern over the climate, you might agree that they already are posing an existential risk.

I might be misunderstanding something crucial or am not expressing myself clearly.

I understand TurnTrout's original post to be an argument for a set of conditions which, if satisfied, prove the AI is (probably) safe. There are no restrictions on the capabilities of the system given in the argument.

You do constructively show "that it's possible to make an AI which very probably does not cause x-risk" using a system that cannot do anything coherent when deployed.

But TurnTrout's post is not merely arguing that it is "possible" to build a safe AI.

Your conclusion is trivially true and there are simpler examples of "safe" systems if you don't require them to do anything useful or coherent. For example, a fried, unpowered GPU is guaranteed to be "safe" but that isn't telling me anything useful.

I can see that the condition you've given, that a "curriculum be sampled uniformly at random" with no mutual information with the real world, is sufficient for a curriculum to satisfy Premise 1 of TurnTrout's argument.

But it isn't immediately obvious to me that it is also a necessary condition (and therefore equivalent to Premise 1).
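
To make the distinction explicit, here is a rough formalisation in my own notation (the symbols C and W, and the way I've written the implications, are mine rather than anything from TurnTrout's post or your comment):

```latex
% Rough sketch in my own notation (not TurnTrout's):
%   C = the randomly sampled training curriculum
%   W = facts about the deployment environment (the "real world")
% The condition you give is that the curriculum carries no information about the world:
\[
  I(C;\, W) = 0 \quad \Longrightarrow \quad \text{Premise 1 holds (sufficiency, which I grant).}
\]
% What is not obvious to me is the converse:
\[
  \text{Premise 1 holds} \quad \overset{?}{\Longrightarrow} \quad I(C;\, W) = 0 \quad \text{(necessity, needed for equivalence).}
\]
```

If only the first implication holds, your condition picks out a strict subset of the curricula Premise 1 allows, which is why I don't think it shows the two are equivalent.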

Right, but that isn't a good safety case because such an AI hasn't learnt about the world and isn't capable of doing anything useful. I don't see why anyone would dedicate resources to training such a machine.

I didn't understand TurnTrout's original argument to be limited to only "trivially safe" (i.e. non-functional) AI systems.
