To me, the very notion of an AI system having no goals at all seems inherently wrong. If the system is doing anything - even if that something is just reasoning - it must have some means of deciding what to do out of the infinite pool of things that could possibly be done. Whatever that deciding mechanism is, it defines the goal.
Goal-directed behaviour can be as simple as what a central heating thermostat does. An AI could very well have no internal representation of its own goal, but if it is carrying out computations, it almost certainly has something that directs what sort of computations it is expected to carry out, and that is quite enough to define a goal for it.
The main difference is having a thorough map of the territory. The stage before having worthwhile ideas is a mapping exercise: finding out what is already known about a topic, learning what existing workers in the field are able to do, and how they understand it. As you learn how the different aspects of the field are connected, you can start having your own useful ideas - and most of the time you'll find your idea is already known and part of the field. But as you continue to map and explore, you may come across ideas that don't see...