All of pando's Comments + Replies

pando10

Thanks for helping clear this up! That makes a lot of sense.

pando10

Yeah, I'm being very hypothetical when discussing constant money supply. For the purposes of this discussion, just assume that somehow bankers decided to not increase the money supply.

Do you agree, then, that over the long term the growth of the total world index fund must roughly track the growth in the money supply (assuming constant money velocity, I guess)? If not, can you help me understand why not?

Also related: can GDP increase somehow if money supply is fixed and money velocity is fixed?

7Yair Halberstadt
Yes, in this hypothetical, stock indexes would stay roughly constant in nominal terms but would rise just as fast as before in real terms. And GDP would increase in real terms if the money supply is fixed, but not in nominal terms. Both of these are because we'd have deflation.
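To make the deflation point explicit: the standard equation of exchange (not spelled out in either comment, but it is the identity underlying this exchange) relates the money supply $M$, velocity $V$, the price level $P$, and real output $Y$:

$$MV = PY$$

If $M$ and $V$ are both fixed, nominal GDP $PY$ is fixed, so real output $Y$ can only grow if the price level $P$ falls in the same proportion, i.e. deflation. The same logic applies to a stock index: a flat nominal market cap is consistent with rising real value when prices are falling.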
pando10

Yes, that makes sense, but is there some reason we should expect the total market cap to keep growing to huge multiples of the money supply (assuming continued technological improvement but a fixed money supply)?

2Yair Halberstadt
Added an extra section to discuss that. The idea of increasing GDP with a fixed money supply just isn't realistic, at least so long as central banks target a fixed rate of inflation.
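As a rough sketch of why (this is the standard growth-rate approximation with roughly constant velocity assumed; the numbers are illustrative, not from the post): taking growth rates of $MV = PY$ gives

$$g_M + g_V \approx \pi + g_Y$$

where $\pi$ is inflation and $g_Y$ is real GDP growth. With $g_V \approx 0$ and a central bank targeting, say, $\pi = 2\%$ while real growth runs at $3\%$, the money supply has to grow at roughly $5\%$ per year, so nominal market caps grow alongside the money supply rather than to ever-larger multiples of it.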
pando10

Very nice post. It is certainly useful to do this exercise of manually encoding language rules into the weights of a transformer in order to better understand the machinery involved.

"The ultimate ambition of this work would be to go toe-to-toe with a comparably-sized Transformer model trained in the traditional way on a modern-sized data set. This might require several people-years of focused effort though."

There is a long history of attempting to parse natural language with hand-designed rules and heuristics. The general consensus now is that hand engin... (read more)

1MadHatter
There are a number of ways to combine this approach with learning, but I haven't had time to try any of them yet. Some ideas I have thought of:

* Use hard-coded weights, plus some random noise, to initialize the weights of a transformer that you then train in the traditional fashion (a minimal sketch of this appears after the list)
  * Doesn't really help with interpretability or alignment, but might(???) help with performance
* Write out all the weight and bias parameters as combinations of semes and outer products of semes, then learn seme embeddings by gradient descent
  * Semantic seme embeddings could be initialized from something like WordNet relationships, or learned with word2vec, to automate those guys
* You could do smallish amounts of gradient descent to suggest new rules to add, but then add them by hand
  * Still would be very slow
* Perhaps it is possible to start with a strong learned transformer, gradually identify human-legible rules that it is using, and replace those specific parts with hard-coding
  * Could prove very difficult!!!
* It seems almost certain to me that hard-coding weights would at least help us build the muscles needed to recognize what is going on, to the extent that we are able to
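A minimal sketch of the first idea above (initialize a transformer from hand-coded weights plus a little noise, then train normally). The function name and `hand_coded_state_dict` are hypothetical placeholders, not anything from the original post; this is just one way such an initialization could look in PyTorch:

```python
import torch
import torch.nn as nn

def init_from_hand_coded(model: nn.Module, hand_coded: dict, noise_scale: float = 0.01):
    """Copy hand-coded parameters into `model`, adding small Gaussian noise.

    Any parameter not present in `hand_coded` keeps its default random init.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in hand_coded:
                target = hand_coded[name].to(dtype=param.dtype, device=param.device)
                param.copy_(target + noise_scale * torch.randn_like(param))

# Hypothetical usage -- `MyTinyTransformer` and `hand_coded_state_dict` stand in
# for whatever model and hand-coding procedure you actually have:
# model = MyTinyTransformer()
# init_from_hand_coded(model, hand_coded_state_dict(), noise_scale=0.01)
# ...then train with the usual optimizer and loss.
```

Keeping the noise scale small preserves the hand-coded structure early in training while still breaking symmetry in any parameters the hand-coding leaves at zero.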