Perplexed comments on Safety Culture and the Marginal Effect of a Dollar - Less Wrong
Self-modifying programs seem like a bit of a red herring. Most likely, groups of synthetic agents will become capable of improving the design of machine minds before individual machines can do that. So, you would then have a self-improving ecosystem of synthetic intelligent agents.
This probably helps with the wirehead problem, and with any Gödel-like problems associated with a machine trying to understand its entire mind.
Today, companies that build editing, refactoring, and lint tools are already using their own software to build the next generation of programming tools. There are still humans in the loop, but the march of automation is gradually working on that.
I agree that a multi-agent systems perspective is the most fruitful way of looking at the problem. And I agree that coalitions are far less susceptible to the pathologies that can arise with mono-maniacal goal systems. A coalition of agents is rational in a different, softer way than is a single unified agent. For example, it might split its charitable contributions among charities. Does that weaker kind of rationality mean that coalitions should be denigrated? I think not.
To answer Nancy's question, there is a huge and growing body of knowledge about controlling multi-agent systems. Unfortunately, so far as I know, little of it deals with the scenario in which the agents are busily constructing more agents.
That does happen quite a bit in genetic and memetic algorithms - and artificial life systems.
I checked with the Gates Foundation. 7549 grants and counting!
It seems as though relatively united agents can split their charitable contributions too.
A note, though... if I had a billion dollars and decided just to give it to the recommended top-rated international charities, then, because most charities have difficulty converting significant extra funds into a proportionate level of effect, I would end up giving 1+10+50+1+0.3+5 = 67.3 million dollars to 6 different charities and then become confused about what to do with my remaining 932.7 million dollars.
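The arithmetic above can be sketched as a small "room for more funding" calculation. This is a minimal illustration, assuming each charity can effectively absorb only a capped amount of extra money (the per-charity figures are the hypothetical ones from the comment, in millions of dollars):

```python
# Hypothetical per-charity funding caps from the comment, in millions:
# each charity can only absorb this much extra money effectively.
caps_millions = [1, 10, 50, 1, 0.3, 5]

budget_millions = 1000.0  # a billion dollars, in millions

# Total that can actually be granted at full effectiveness.
granted = sum(caps_millions)

# Funds left over once every top-rated charity is saturated.
leftover = budget_millions - granted

print(round(granted, 1), round(leftover, 1))  # 67.3 932.7
```

The point of the sketch is that the binding constraint is the charities' absorption capacity, not the donor's budget: with these caps, over 93% of the billion has no top-rated home.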
I know the Gates Foundation does look like a coalition of agents rather than a single agent, but it doesn't look like a coalition of 7549+ agents. I'd guess at most about a dozen and probably fewer Large Components.
Their fact sheet says 24 billion dollars.