In machine learning, one often wants to uniformly approximate an arbitrary continuous function to any desired accuracy using polynomials, neural networks, or some other class of functions. But in the case of complex-valued functions, this is more difficult. For example, the limit of holomorphic functions in the topology of uniform convergence on compact sets is...
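This difficulty can be seen numerically. The following sketch (my own illustration, not from the post) fits polynomials to the non-holomorphic function conj(z) on the unit circle: by orthogonality of the powers of z on roots of unity, the best least-squares polynomial fit is the zero polynomial, so the error stays at 1 at every sample point no matter the degree.

```python
import numpy as np

# Illustration (not from the post): conj(z) is continuous but not
# holomorphic, so polynomials cannot approximate it uniformly on the
# closed unit disk.  On unit-circle samples, every power of z is
# orthogonal to conj(z), so the least-squares polynomial fit is zero
# and the pointwise error is 1 everywhere.
N, degree = 64, 10
z = np.exp(2j * np.pi * np.arange(N) / N)    # N-th roots of unity
A = z[:, None] ** np.arange(degree + 1)      # Vandermonde matrix
target = np.conj(z)
coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
err = np.abs(A @ coeffs - target)
print(err.min(), err.max())                  # both are (numerically) 1
```

Raising the degree (up to N - 2) changes nothing, which is the discrete shadow of the fact alluded to above: a uniform limit of polynomials on the disk would have to be holomorphic.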
Machine learning models such as neural networks are expected to satisfy some sort of universal uniform approximation theorem showing that they can (at least in principle) learn any possible data set rather than simply overfitting to the training data. The standard universal approximation theorem applies to shallow neural networks with...
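As an illustration of the shallow setting (my own sketch, not the post's construction), a one-hidden-layer tanh network already approximates a smooth target closely. For simplicity the hidden layer here is random and fixed, and only the output weights are solved for by least squares:

```python
import numpy as np

# A shallow (one-hidden-layer) tanh network approximating sin on
# [-pi, pi].  The hidden weights are random and fixed; only the output
# layer is fitted, by least squares.  This is a sketch of the
# universal-approximation setting, not the post's own code.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)
H = 60                                   # hidden width
w = rng.uniform(-3.0, 3.0, size=H)       # input weights
b = rng.uniform(-3.0, 3.0, size=H)       # hidden biases
features = np.tanh(np.outer(x, w) + b)   # hidden activations, (200, H)
features = np.hstack([features, np.ones((len(x), 1))])  # output bias
out, *_ = np.linalg.lstsq(features, np.sin(x), rcond=None)
err = np.max(np.abs(features @ out - np.sin(x)))
print(err)  # the uniform error over the grid is small
```

Training the hidden layer as well only helps; the point is that even a single hidden layer of modest width suffices for a smooth target on a compact interval.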
In this post, I shall describe a fitness function that can be locally maximized without gradient computations. This fitness function is my own: I initially developed it to evaluate block ciphers for cryptocurrency technologies, but I later found that it may be used to...
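The post's fitness function itself is not reproduced here, but gradient-free local maximization of the kind described can be sketched with simple random-search hill climbing; the quadratic fitness function below is a hypothetical stand-in for illustration only.

```python
import numpy as np

# Gradient-free local maximization by random-search hill climbing.
# The fitness function is a hypothetical stand-in; the point is only
# that no gradient of `fitness` is ever computed.
rng = np.random.default_rng(1)
c = np.array([1.0, -2.0, 0.5, 3.0, -1.5])

def fitness(x):
    return -np.sum((x - c) ** 2)  # maximized (value 0) at x = c

x = np.zeros(5)
best = fitness(x)
step = 0.5
for _ in range(5000):
    cand = x + rng.normal(scale=step, size=5)
    f = fitness(cand)
    if f > best:                  # accept only improvements
        x, best = cand, f
    else:
        step *= 0.999             # shrink the step on rejection
print(best)                       # close to the maximum value 0
```

Shrinking the step size on rejection lets the search settle into a local maximum; only fitness evaluations, never gradients, drive the updates.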
In this post, I shall first describe a new word embedding algorithm of my own, called a matrix product optimized (MPO) word embedding, and I will prove a theorem that completely interprets this word embedding in the simplest case. While it is probably infeasible to mathematically characterize a...
Here are some empirical observations that I made from August 14, 2023 to August 19, 2023 that characterize the interpretability of my own matrix dimensionality reduction algorithm. These phenomena do not occur on all inputs (they sometimes occur only partially), and it would be nice...
In this post, we shall define my new dimensionality reduction algorithm for tensors in V⊗n where n≥3, and we shall make an empirical observation about the structure of the resulting dimensionality reduction. There are various simple ways of adapting this dimensionality reduction algorithm to tensors in V1⊗⋯⊗Vn and even to mixed quantum states...