exkn

Some really intriguing insights and persuasive arguments in this post, but I feel like we are just talking about the problems that often come with significant technological innovations.

It seems like, for the purposes of this post, AGI is defined loosely as a "strong AI": a technological breakthrough dangerous enough to be a genuine threat to human survival. Many potential technological breakthroughs can have this property, and in this post it feels as if AGI is being reduced to some sort of potentially dangerous and uncontrollable software virus.

While this question is important and I get why the community is so anxiously focused on it, I don't find it to be the most interesting question.

The more interesting question to me is: how and when will these systems become true AGIs that are conscious in some way similar to us and capable of creating new knowledge (with universal reach) in the way we do?

I think we get there (and maybe sooner rather than later?), but how we do, and the explanations uncovered along the way, will be among the most fascinating and revelatory discoveries in history.

exkn

Interesting and useful concept, technological leverage.

I'm curious what Googetasoft is.

OK, I can see a strong AI algorithm being able to do many things we consider intelligent, and I can see how the technological leverage it would have in our increasingly digital, networked world would be far greater than that of many previous technologies.

This is the story of all new technological advancements: bigger benefits, along with bigger problems and dangers that need to be addressed or solved, or else bigger bad things can happen. There will be no end to these types of problems if we are to continue to progress, and there is no guarantee we can solve them, but there is no law of physics saying we can't.

The efforts on this front are good and necessary and should demand our attention, but I think this whole endeavor isn't really about AGI.

I guess I don't understand how scaling up or tweaking the current approach will lead to AIs that are uncontrollable or "run away" from us. I'm actually rather skeptical of this.

I agree regular AI can generate new knowledge, but only an AGI will do so creatively and recognize it as such. I don't think we are close to creating that kind of AGI with the current approach, as we don't really understand how creativity works.

That being said, it can't be that hard if evolution was able to figure it out.