"Dear Oracle, consider the following two scenarios. In scenario A an infallible oracle tells me I am making no major mistakes right now (literally in these words). In scenario B, an infallible oracle tells me I am making a major mistake right now (but doesn't tell me what it is). Having received this information, I adjust my decisions accordingly. Is the outcome of scenario A better, in terms of my subjective preferences?"
We can also do hard mode, where "better in terms of my subjective preferences" is considered ill-defined. In that case I finish the question with: "...for the purpose of the thought experiment, imagine I have access to another infallible oracle that can answer any number of questions. After talking to this oracle, will I reach the conclusion that scenario A would be better?"
Think of this scenario: I ask "Is everything I am doing optimal for my subjective preferences?" Now compare that to your question. It is provable that the oracle answers yes (or no) to my question if and only if it answers yes (or no) to your question, and vice versa. This makes my question the better choice, since it is less complex (fewer bits). If you sketch out the possible cases, you will see that the oracle's answer to my question and to yours is always the same.
Consider the futures of humanity which I would, upon reflection, endorse as among the best of utopias, and consider the simplest Turing Machines which encode them. If you apply (some function which turns their states after n steps into a real number and concatenate them), would the output of such calculation belong to (this randomly chosen half of the real numbers)?
I'm sure this can be worded more carefully, but right now this may force the oracle to simulate all the futures of humanity which I would consider to be among the best of utopias.
This is a clever answer, but we don't know how the oracle works. It is supposedly omniscient, and there's some chance it can pull the answer magically from thin air (or derive it by some other clever method that doesn't require any simulation), in which case you've just wasted a very valuable question.
Wait until a situation comes up where you are torn between two alternatives with potentially high loss/gain. Then ask: "Between situations A and B, should I choose A?"
Asking "should" is likely a bad idea in this case, as it makes the answer hard to interpret. I would rather ask "Is it better for X if I choose A?"
What are the winning numbers in the Arkansas Mega Millions lottery?
"Is it optimal, according to my values, to devote my time and energy to working on [thing I am devoting my time and energy to working on]?" is probably going to be the right question to ask for quite a lot of people.
I think it would not be a very useful question to ask. What are the chances that a flawed, limited human brain could stumble upon the absolute optimal set of actions one should take, based on a given set of values? I can't conceive of a scenario where the oracle would say "Yes" to that question.
How old am I?
There is also the "obvious" answer: find a 50/50 gamble (on the stock market, a prediction market, or whatever), borrow as much money as you can, and bet everything on the side the oracle names. It gets even better if there are people you can convince to invest as well (either by being known as trustworthy or by having a way to demonstrate the existence of the oracle).
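To make the arithmetic of this concrete, here is a minimal sketch with made-up numbers (the stake sizes, odds, and loan amount are all hypothetical, not from the comment): a fair 50/50 bet is worth nothing in expectation, but knowing the outcome in advance turns the entire leveraged stake into a guaranteed profit.

```python
# Hypothetical numbers: a sketch of why one guaranteed 50/50 call is valuable.
# Assumes even-money odds and a borrowed stake; all figures are made up.

def bet_outcome(stake: float, win: bool, odds: float = 1.0) -> float:
    """Net profit of a bet at the given (even-money by default) odds."""
    return stake * odds if win else -stake

own_capital = 10_000.0
loan = 90_000.0          # borrow as much as you can
stake = own_capital + loan

# Without the oracle, a fair 50/50 bet has zero expected profit:
expected_unassisted = 0.5 * bet_outcome(stake, True) + 0.5 * bet_outcome(stake, False)

# With the oracle's answer, the outcome is known, so you always take the winning side:
profit_with_oracle = bet_outcome(stake, True)

print(expected_unassisted)   # 0.0
print(profit_with_oracle)    # 100000.0
```

The leverage matters because the oracle removes the downside that normally caps how much you would dare to borrow.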
(Outlines a strategy/meta-strategy for choosing questions, with one example.)
"If I* had to answer every question I was asked with 'yes' or 'no', would it be better to answer 'yes' or 'no'? Give the answer that it would be better to give.**"
*"I" can be replaced with "everyone".
**For every question I answer with yes or no, which answer is better to give?
(Note that this might run into a problem with double negation.)
The strategy is, rather than selecting one very important question, to choose a set of questions. You can use this either by always going with the answer the oracle gives, or by treating the answer as a prior/evidence.
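Treating an answer "as evidence" rather than a verdict can be sketched with a one-line Bayes update. The misreading rate below is a hypothetical assumption I'm adding for illustration: even if the oracle itself is infallible, you might misinterpret which question a packed compound answer actually addressed.

```python
# A minimal Bayes-update sketch for treating a yes/no answer as evidence.
# p_correct_reading is a made-up parameter: the chance you correctly
# interpreted which proposition the (infallible) answer was about.

def update(prior: float, heard_yes: bool, p_correct_reading: float = 0.9) -> float:
    """Posterior P(hypothesis is true) after hearing a yes/no answer."""
    p_yes_if_true = p_correct_reading        # answer is "yes" and you read it right
    p_yes_if_false = 1.0 - p_correct_reading # you misread a "no" as a "yes"
    if heard_yes:
        num = p_yes_if_true * prior
        den = num + p_yes_if_false * (1.0 - prior)
    else:
        num = (1.0 - p_yes_if_true) * prior
        den = num + (1.0 - p_yes_if_false) * (1.0 - prior)
    return num / den

posterior = update(prior=0.5, heard_yes=True)
print(round(posterior, 3))  # 0.9
```

With a perfectly clear reading (`p_correct_reading = 1.0`) this collapses to just believing the answer, which recovers the "always go with the oracle" version of the strategy.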
If you could ask just one question to an omniscient oracle knowing that