Answer by Brian Murphy

The first and selfish answer (probably shared by countless others) would be "I'm interested in working on that."

Am I qualified? Maybe; maybe not. I suspect I won't know what makes an effective AI safety planner until somebody actually starts doing the job.

I'll make this observation: the potential emergence of AGI looks to me like it has two fronts. The first is raw scientific development; programmers, engineers, and cognitive scientists just "doing their thing," understanding our world by replicating and modifying parts of it. The second is the one the vast majority of people can already see: specific-task AI devices getting stronger, faster, and better connected. If it cannot be done today, then within months a person will be able to talk to the air around them and order a cheeseburger that is cooked, assembled, delivered, and paid for entirely by automated, unconscious agents. Who am I to say that, with enough forward development and integration of such automated systems, we would not see emergent automated behavior just as fantastic, or as dangerous, as anything a "thinking" machine might display?

Such a watchdog group could already be useful, if it brought in some economic expertise to assist with current technology issues (e.g., workplace automation and the unavoidable employment changes it causes).

This is a long-winded "I agree." We should not wait for someone else to organize our protective stance toward the agents we build specifically to be better at tasks than ourselves, be they specific or general. Multiple experienced folks should always be asking: "What is the driving goal of this AGI? What are its success/failure conditions? What information does it have access to? Where are the means to interrupt it, if it finds an unfriendly solution to its hurdles?"