Comments

Depending on the agent implementation, you may find that it is demotivated to achieve any useful outcome if it is power-limited. Half-assing things seems pointless and futile; they aren't sane actions in the world. E.g. trying to put out a fire when all you have is a squirt gun.
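In rough expected-utility terms (a sketch with made-up symbols, not a claim about any particular agent architecture): if limited power caps the agent's chance of success at nearly zero, the costly attempt is dominated by doing nothing.

```latex
% Sketch only: p = chance the power-limited action actually achieves the goal,
% V = value of the goal, c = cost of the half-hearted attempt (all hypothetical symbols).
\[
  \mathbb{E}[U(\text{act})] = pV - c, \qquad \mathbb{E}[U(\text{do nothing})] = 0.
\]
\[
  p \approx 0 \;\Longrightarrow\; \mathbb{E}[U(\text{act})] \approx -c \;<\; \mathbb{E}[U(\text{do nothing})].
\]
```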

I'm someone who is mainly moving in the opposite direction (from AI to climate change). I see AGI as a lot harder to achieve than most people do, mainly because the potential political ramifications will slow development, and because I think it will need experiments with novel hardware, which makes it more visible than just coding. So I see it as relatively easy to stop, at least inside a country. Multi-nationally it would be trickier.

Some advice: I would try to frame your effort as "Understanding AGI risk". While you think there is risk currently, keeping an open mind about the status of the risk is important. If AGI turns out to be free of existential risk, it could help with climate adaptation, even if it is not in time for climate mitigation.

Edit: You could frame it just as understanding AI, and put together independent briefs on each project so policy makers can understand the likely impacts, both positive and negative, and the state of play. Getting a good reputation and maintaining independence might be hard, though.

My view is that you have to build AIs with a bunch of safeguards to stop them destroying *themselves* while they don't have great knowledge of the world or the consequences of their actions. So some of the arguments around companies/governments skimping on safety don't hold in the naive sense.

So things like how do you:

  • Stop a robot jumping off something too high
  • Stop an AI DOSing its own network connection
  • Stop a robot disassembling itself

When it is not vastly capable. Solving these things would give you a bunch of knowledge of safeguards and how to build them. I wrote about some of these problems here. (A rough sketch of one such safeguard follows.)
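To make one of these concrete, here is a minimal sketch (in Python, with hypothetical names like `SelfThrottledClient` and `send_fn`) of the "stop an AI DOSing its own network connection" item: a deliberately dumb rate limiter wrapped around the agent's own outbound requests. It illustrates the kind of safeguard knowledge I mean, not how a real system should do it.

```python
import time
from collections import deque


class SelfThrottledClient:
    """Hypothetical wrapper around an agent's network transport.

    Sketch only: caps the agent's own request rate so it cannot
    saturate (DOS) its own connection while it still lacks a good
    model of the consequences of its actions.
    """

    def __init__(self, send_fn, max_requests=10, window_s=1.0):
        self._send_fn = send_fn          # the real transport, injected
        self._max_requests = max_requests
        self._window_s = window_s
        self._timestamps = deque()

    def send(self, payload):
        now = time.monotonic()
        # Drop timestamps that have fallen outside the sliding window.
        while self._timestamps and now - self._timestamps[0] > self._window_s:
            self._timestamps.popleft()
        if len(self._timestamps) >= self._max_requests:
            # Refuse rather than queue: the safeguard is deliberately dumb.
            raise RuntimeError("self-throttle: request budget exhausted")
        self._timestamps.append(now)
        return self._send_fn(payload)


if __name__ == "__main__":
    client = SelfThrottledClient(send_fn=lambda p: f"sent {p}", max_requests=3)
    for i in range(5):
        try:
            print(client.send(i))
        except RuntimeError as err:
            print(err)  # requests beyond the budget are refused
```

The point of something this crude is that building, testing and embedding it teaches you where safeguards leak, long before the system is capable enough for the leaks to matter much.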

It is only when you expect a system to radically gain capability without needing any safeguards that it makes sense to expect a dangerous AI to be created by a team with no experience of safeguards or how to embed them.

As a data point for why this might be occurring: I may be an outlier, but I've not had much luck getting replies or useful dialogue from X-risk-related organisations in response to my attempts at communication.

My expectation, currently, is that if I apply I won't get a response and I will have wasted my time composing an application. I won't get any more information than I previously had.

If this isn't just me, you might want to encourage organisations to be more communicative.

My view is more or less the one Eliezer points to here:
> The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.

There are probably no fire alarms for "nice AI designs" either, just like there are no fire alarms for AI in general.

Why should we expect people to share "nice AI designs"?

For longer time frames, where there might be visible development, the public needs to trust that the political regulators of AI have their interests at heart. Otherwise they may try to make it a party-political issue, which I think would be terrible for sane global regulation.

I've come across pretty strong emotion when talking about AGI, even when talking about safety, which I suspect will come bubbling to the fore more as time goes by.

It may also help the morale of the thoughtful people trying to make safe AI.

I think part of the problem is that corporations are the main source of innovation, and they have incentives to insert themselves into the things they invent so that they can extract tolls and sustain their business.

Compare email and Facebook Messenger for two different types of invention, with different abilities to extract tolls. However, if you can't extract a toll, it is unlikely you can create a business around innovation in an area.

I had been thinking about metrics for measuring progress towards shared agreed outcomes as a method of co-ordination between potentially competitive powers to avoid arms races.

I passed the draft around to a couple of the usual suspects in the AI metrics/risk mitigation space in hopes of getting collaborators, but no joy. I learnt that Jack Clark of OpenAI is looking at that kind of thing as well and is a lot better positioned to act on it, so I have hopes around that.

Moving on from that, I'm thinking that we might need a broad base of support from people (depending upon the scenario), so being able to explain how people could still have meaningful lives post-AI is important for building that support. So I've been thinking about that.

To me, closed-loop living is impossible not due to taxes but due to the desired technology level. I could probably go buy a plot of land and try to recreate Iron Age technology, but most likely I would injure myself, need medical attention and have to re-enter society.

Taxes also aren't an impediment to closed-loop living as long as the waste from the tax is returned. If you have land with a surplus of sunlight or other energy, you can take in waste and create useful things with it (food, etc.). The greater loop of taxes has to be closed, as well as the lesser loop.
