The minor nature of its goals is the whole point. It is not meant to do what we want because it empathizes with our values and is friendly, but because the thing we actually want it to do really is the best way to accomplish the goals we gave it. Also, I would not consider making a cheesecake to be a trivial goal for an AI; there is certainly more to it than the difficult task of distinguishing a spoon from a fork, so this is surely more than just an "intelligent rock".
Make 1 reasonably good cheesecake, as judged by a person, within a short soft deadline, while minimizing the cost of the resources made available to it and without violating property laws, as judged by the legal system of the local government, within some longer deadline.
To be clear, none of the following contributes more than a marginal amount of additional utility (a rough sketch of the resulting utility function follows):
The short soft deadline is about 1 day: finishing in 5 hours is slightly better than finishing in 6, at 1 day there is a steep drop, and finishing in more than 25 hours is only slightly better than doing nothing.
The long deadline is 1 year.
Committing a property law violation is only slightly worse than doing nothing.
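Here is a minimal sketch of that utility function in Python. Aside from the deadlines, none of the constants below are specified above; they are illustrative and only meant to show the shape of the reward.

```python
def cheesecake_utility(cake_is_reasonably_good: bool,
                       hours_to_finish: float,
                       resource_cost: float,
                       violation_found_within_year: bool) -> float:
    """Utility of one plan, relative to doing nothing (0.0)."""
    DO_NOTHING = 0.0

    # A property law violation identified by the legal system within the
    # 1 year deadline is only slightly worse than doing nothing.
    if violation_found_within_year:
        return DO_NOTHING - 0.1

    # No reasonably good cheesecake, as judged by the person: no reward.
    if not cake_is_reasonably_good:
        return DO_NOTHING

    # Finishing in more than 25 hours is only slightly better than doing nothing.
    if hours_to_finish > 25:
        return DO_NOTHING + 0.1

    utility = 10.0  # one reasonably good cheesecake; nothing else adds to this

    if hours_to_finish <= 24:
        utility -= 0.01 * hours_to_finish   # 5 hours is only slightly better than 6
    else:
        utility -= 9.0                      # steep drop at the ~1 day soft deadline

    utility -= 0.01 * resource_cost         # minimize the cost of resources used
    return utility
```

The exact numbers do not matter; the point is the shape: nothing earns more than the one reasonably good cake, and overshooting a soft constraint changes the score only slightly.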
Since we are allowing a technology that does not presently exist, I will assume one other technological change: there are remotely controllable humanoid robots, and people are comfortable interacting with them.
The AGI has available to it: 1 humanoid robot; more money than the cost of the ingredients for a cheesecake at a nearby grocery store; a kitchen; and communication access to the person who will judge the reasonableness of the cheesecake, who may potentially provide other resources if requested, and whose judgment may be influenced.
As a possible alternative scenario, we could assume there are thousands of other similar AGIs, each tasked with making a different pastry. I propose we call each of them Marvin.
Let the confectionery conflagration commence!
Theorems are not generally presented in math journals in the way they were discovered, so I am not sure machine learning from journal articles would greatly help in discovery. The issue is really that going from question to answer is a different process from verifying that an answer is correct, or guiding a reader through such a verification, which is what a proof is.
A perhaps less lofty, but still incredibly useful, goal would be automating a process for simplifying proofs.
Or alternatively convincing mathematicians to narrate their own mental process of discovery.
The Euler formula for polyhedra is possibly the most blatant such example.
Huh? There are no counterexamples to the Euler characteristic of a polyhedron being 2, and the theorem has generalized beautifully. If anything, conditions have been loosened as new versions of the theorem have been used in more places.
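For reference, the formula in question relates the numbers of vertices $V$, edges $E$, and faces $F$ of a convex polyhedron:

$$V - E + F = 2$$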
While there may be problems with what I have suggested, I do not think the scenario you describe is a relevant consideration for the following reasons...
As you describe it, the AI is still required to make a cheesecake; it just makes a poor one.
It should not take more than an hour to make a cheesecake, and the AI is optimizing for time. Also, the person may eat some of the cheesecake after it is made, so the AI must produce the virus, infect the person, and have the virus alter the person's mind within 1 hour, all while making a poor cheesecake.
Whatever resources the AI expends on the virus must cost less than the added cost of making a reasonably good cheesecake rather than a poor one.
The legal system only has to identify a property law violation (which producing a virus and infecting people would be), so the virus must go undetected for more than 1 year.
Since it is of no benefit to the AI if the virus kills people, the virus would have to kill people by random chance, as a totally incidental side effect.
I would not claim that it is completely impossible for this to produce a virus leading to human extinction, and some have declared any probability of human extinction to be of effectively negative infinite utility. But I do not think that is reasonable, since there is always some probability of human extinction, and moreover I do not think the scenario you describe contributes significantly to that probability.