An important remark: a program that is better at a task is not necessarily more complex than a program that is worse. Case in point, AlphaGo: definitely better than almost every human at go, yet far less complex than a human mind.
Anyway, accepting the premise:
1 is demonstrably false for any reasonable definition of intelligence (e.g., a Turing machine is more intelligent than another if it can solve a problem the other cannot);
2 is surely true, given that a program can increase in complexity if it has more memory and a way to do unsupervised learning;
3 is too dependent on implementation details to judge, but it may be trivially true if the gap to reach is sufficiently large.
The Kolmogorov complexity ("K") of a string ("S") is the size of the smallest Turing machine that outputs that string. If a Turing machine (equivalently, by the Church-Turing thesis, any AI) has size smaller than K, then no matter how much it rewrites its own code, it cannot output S. To be precise, it can of course output S by enumerating all possible strings, but it cannot single out S and output it exclusively among the options available. Now suppose that S is the source code of an intelligence strictly better than all those with complexity < K. We are left with 3 options:
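As an aside on the definition above: K itself is uncomputable, but any compressor gives an upper bound on it, since "decompressor plus compressed data" is one valid description of S. A minimal sketch of this idea (my illustration, not part of the original argument), using zlib as the compressor:

```python
import os
import zlib

def k_upper_bound(s: bytes) -> int:
    """Upper-bound K(s) by the length of zlib's compressed form.

    This is only an upper bound: the true K(s) also includes a
    constant for the decompressor, and K is uncomputable in general.
    """
    return len(zlib.compress(s, 9))

# A highly regular string has a tiny description...
low = k_upper_bound(b"a" * 10_000)
# ...while random bytes are essentially incompressible.
high = k_upper_bound(os.urandom(10_000))

print(low, high)
```

Running this shows the regular string compressing to a few dozen bytes while the random one stays near its original length, which is the gap the argument turns on: a machine smaller than K(S) simply has no description of S to work from.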