Azai

This reminds me of the kind of min-maxing good chess players tend to do when coming up with moves. They come up with a few good moves, then consider the strongest responses the opponent could make, branching out into a tree. To keep the size of the tree manageable, they only consider the best moves they can think of. A common beginner mistake is to play a move that looks good, as long as you assume the opponent "plays along". (Like, threaten a piece and then assume the opponent won't do anything to remove the threat.) I think this could be called strawmanning your opponent. If you steelman your opponent, you tend to play better, because your beliefs about the effectiveness of your moves will be more accurate. But you should of course also be sharpening your beliefs about which moves the opponent is likely to make, which involves thinking about your possible responses, etc.
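To make that concrete, here's a toy minimax sketch in Python over an invented three-move game tree (not real chess): the "strawman" move only looks best if you assume the opponent plays along, and drops out once you score each move against the opponent's best reply.

```python
# Toy minimax over a hypothetical game tree. A node is either a number
# (static evaluation from our point of view) or a dict of {move: subtree}.

def minimax(node, our_turn):
    """Evaluate a position assuming both sides pick their best move."""
    if isinstance(node, (int, float)):          # leaf: a static evaluation
        return node
    values = [minimax(child, not our_turn) for child in node.values()]
    # "Steelmanning": on the opponent's turn, assume they pick the reply
    # that is worst for us, not the one that plays along with our plan.
    return max(values) if our_turn else min(values)

tree = {
    "threaten piece": {"opponent ignores threat": 3, "opponent defends": 0},
    "quiet move":     {"opponent best reply": 1},
}
best = max(tree, key=lambda move: minimax(tree[move], our_turn=False))
print(best)   # "quiet move", once the opponent's best replies are considered
```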

Azai

I'd like to note that queries like "history" and "ancient" result in images with much more yellow and orange in them (pyramids, pottery, old paper, etc.). I checked by blurring screenshots a lot in an image editor, and the average color for "history" seems to be a shade of orange, while that of "futuristic" is a shade of blue. Blue and yellow/orange are complementary colors, so I wonder if that plays any role in reinforcing the blue-future and yellow-past associations.
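For what it's worth, the blur-and-average check can be reproduced in a couple of lines of Python with Pillow, since averaging every pixel is equivalent to blurring the whole screenshot down to a single color. The filename here is just a placeholder for a saved screenshot of the search results.

```python
from PIL import Image, ImageStat

# "history_results.png" is a hypothetical saved screenshot of the image results.
img = Image.open("history_results.png").convert("RGB")
r, g, b = ImageStat.Stat(img).mean          # per-channel mean over all pixels
print(f"average color: #{int(r):02x}{int(g):02x}{int(b):02x}")
```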

Answer by Azai

Here are some motions toward an answer. I'll consider an informally specified market model, as opposed to a real market. Whether my reasoning applies in real life depends on how much real life resembles the model.

In particular, consider as a model an efficient market. Assume the price of any stock X is precisely its expected utility according to all evidence available to the market. Then the only way for the price to go up is if new evidence arrives. This evidence could be the observation that the company associated with the stock continues to exist, that it continues to produce income, or that the income it produces is going up.
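With made-up numbers, the model looks like this: the price is the expected value of the stock given current evidence, so it only moves when the evidence changes.

```python
# Invented numbers, purely to illustrate "price = expectation given evidence".
p_survive = 0.95                 # believed chance the company survives
value_if_alive, value_if_dead = 110.0, 0.0

price_today = p_survive * value_if_alive + (1 - p_survive) * value_if_dead
print(price_today)               # 104.5

# New evidence arrives (say, good income figures), survival looks more likely,
# and the conditional expectation -- i.e. the price -- moves up with it.
p_survive = 0.99
price_after_news = p_survive * value_if_alive + (1 - p_survive) * value_if_dead
print(price_after_news)          # ~108.9
```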

Those bits of evidence remind me of the risk perspective. Sure, investors may believe that a company, if it continues to exist, will one day be worth more money than they could ever invest today. But if they think there's a 5% chance each year that the company stops existing, then this can severely limit the expected utility of owning the stock right now. (You might ask, "What if I assign a nonzero probability to the class of futures where the company exists forever and the utility of holding its stock grows without bound?" That would make the expected utility infinite, which makes me suspect that defining utility over infinite spans of time is tricky.)
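As a quick sanity check with invented numbers: a small yearly failure probability is enough to keep the expected value finite even over an effectively unbounded horizon.

```python
# Invented numbers: the stock pays out 10 "utility units" a year, but each
# year there's a 5% chance the company vanishes and pays nothing afterwards.
p_survive, payout = 0.95, 10.0

expected_total = sum(payout * p_survive ** year for year in range(1, 1001))
print(expected_total)   # ~190: the geometric series payout * p/(1 - p), finite
```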

I feel like the time-discounting hypothesis makes a lot of sense and is probably part of the truth. To make sense of it within my toy model, I'd have to look at what time-discounting actually means in terms of utility. A reasonable assumption within this idealized model seems to be that the "utility" of anything you possess is equal to the maximum expected utility of anything you could do with it / exchange it for, including exchanges over time. (This is like assuming perfect knowledge of everything you could do with your possessions. Utility can never go up under this assumption, similar to how no legal move can improve a chess position in the eyes of a perfect player.) This means the utility of owning a stock is somewhere between the utility of its buy price and its sell price, as expected. And the utility of 1 dollar right now is no less than the expected utility of a dollar in 2030, given that you could just hold on to it. Then time-discounting is just the fact that the utility of 1 dollar right now is no less than the expected utility of buying one dollar's worth of stock X and waiting a couple of years.

Let's assume the stocks grow exponentially in price. That is to say, though neither the stocks nor the money increases in utility, stocks can be exchanged for more and more money over time. It seems converting money into stocks avoids futures where we lose utility. So how much should we pay for one stock? This is just determined by the ratio of the utility of the stock to the utility of a dollar.

So the question becomes: why does money have any utility at all, if it is expected to fall in utility compared to stocks? And this must be because it can be exchanged for something of intrinsic utility, such as the enjoyment derived from eating a pizza. But why would someone sell you a pizza, knowing your money will decrease in utility? Because the pizza will decrease in utility even faster unless someone eats it, and the seller has too many pizzas to eat, or wants to buy other food for themself. (Plus, the money is backed by banks / governments / other systems.)
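To put rough numbers on the exchange-rate picture above (all of them invented): in the model, money doesn't lose utility in itself, it just buys less and less stock as time passes.

```python
# Invented numbers for the exchange-rate view of time discounting: neither
# the stock nor the dollar changes in utility, but the number of dollars one
# stock trades for grows exponentially, so a dollar buys exponentially less stock.
growth = 1.07                                    # assumed yearly price growth

for year in (0, 5, 10):
    dollars_per_stock = 100 * growth ** year
    stock_per_dollar = 1 / dollars_per_stock
    print(year, round(dollars_per_stock, 2), round(stock_per_dollar, 5))
```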

So if the average price of stocks tends to increase year by year, why are they not worth infinite money to begin with? Here's another perspective. How is the average price calculated? Presumably we're only averaging over stocks from active companies, ones that have not ceased to exist due to bankruptcy or the like. However, when evaluating the expected utility of a stock, we are averaging over all possibilities, including those where the company goes bankrupt. There is a nonzero chance that the company behind a stock will (unpredictably) cease to exist. So if you Kelly bet, you should not invest all your money into such a stock. As a consequence, successively lower prices are required to get you to buy each additional share of the company.
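The Kelly point can be made concrete with invented numbers, using the standard Kelly fraction for a simple gain-or-total-loss bet:

```python
# Invented numbers: even with a clearly positive expected return, a nonzero
# chance of total loss keeps the log-optimal (Kelly) bet below 100%.
p_gain, gain = 0.90, 0.5     # 90% chance the stock gains 50% this period
p_ruin, loss = 0.10, 1.0     # 10% chance the company vanishes, stake is lost

# Kelly fraction for a simple gain/loss bet: f* = p_gain/loss - p_ruin/gain
f_star = p_gain / loss - p_ruin / gain
print(f_star)                # ~0.7: invest 70% of the bankroll, never all of it
```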

Reflection: I imagine these concepts match with ideas from economic theory in quite a few places. I have a mathematical background myself, probably making the phrasing of this answer unusual. I'm not so sure about this whole informal, under-specified model I just made. It seems like the kind of thing that easily leads to pseudoscience, while at the same time playing around with inexact rules seems useful in early stages of getting less confused, as a sort of intuition pump. (Making the rules super strict immediately could get you stuck.)

Azai

Taking a sentence output by AI Dungeon and feeding it into DALL-E is totally possible (if and when the DALL-E source code becomes available). I'm not sure how much money it would cost. DALL-E has about 7% of the parameters that the biggest model of GPT-3 has, though I doubt AI Dungeon uses the biggest model. Generating an entire image with DALL-E means predicting 1024 tokens/codewords, whereas predicting text takes at most 1 token per letter, so a sentence or two is far fewer tokens. All in all, it seems financially plausible. I think it would be fun to see the results too.
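For the back-of-the-envelope version: the parameter counts are the published ones (DALL-E about 12 billion, the largest GPT-3 about 175 billion), while the per-sentence token count is just a rough guess.

```python
# Published figures behind the "about 7%" claim, plus a rough token comparison.
dalle_params, gpt3_params = 12e9, 175e9
print(dalle_params / gpt3_params)        # ~0.069, i.e. about 7%

image_tokens = 1024                      # one DALL-E image is 1024 image tokens
text_tokens_per_sentence = 60            # rough guess for a sentence or two of story
print(image_tokens / text_tokens_per_sentence)   # an image is ~17x more tokens
```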

What seems tricky to me is that a story can be much more complex than the 256-token text input that DALL-E accepts. Suppose the last sentence of the story is "He picks up the rock." This input fits into DALL-E easily, but is very ambiguous. "He" might be illustrated by DALL-E as any arbitrary male figure, even though in the story, "He" refers to a very specific character. ("The rock" is similarly ambiguous. And there are more ambiguities, such as the location and the time of day that the scene takes place in.) If you scan back a couple of lines, you may find that "He" refers to a character called Fredrick. His name is not immediately useful for determining what he looks like, but knowing his name, we can now look through the entire story to find descriptions of him. Perhaps Fredrick was introduced in the first chapter as a farmer, but became a royal knight in Chapter 3 after an act of heroism. Whether Fredrick is currently wearing his armor might depend on the last few hundred words, and what his armor looks like was probably described in Chapter 3. Whether his hair is red could depend on the first few hundred words. But maybe in the middle of the story, a curse turned his hair into a bunch of worms. 

All this is to say that to correctly interpret a sentence in a story, you potentially have to read the entire story. Trying to summarize the story could help, but can only go so far. Every paragraph of the story could contain facts about the world that are relevant to the current scene. Instead, you might want to summarize only those details of the story that are currently relevant.

Or maybe you can somehow train an AI that builds up a world model from the story text, so that it can answer the questions necessary for illustrating the current scene. It's worth noting that GPT-3 has something akin to a world model that it can use to answer questions about Earth, as well as fictional worlds it's been exposed to during training. However, its ability to learn about new worlds outside of training (so, during inference) is limited, since it can only attend to roughly the last 2,000 tokens. To me it seems like you'd need to give the AI its own memory, so that it can store long-term facts about the text to help predict the next token. I wonder if something like that has been tried yet.
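Purely as a hypothetical sketch of that "external memory" idea (nothing here is an existing system, and actually extracting the facts from free text is the hard part this toy version skips): store facts as the story streams in, then retrieve the ones relevant to the current sentence and prepend them to the length-limited prompt.

```python
from collections import defaultdict

memory = defaultdict(list)          # entity -> remembered facts about it

def remember(entity, fact):
    memory[entity].append(fact)

def build_prompt(sentence, entities, budget=256):
    """Prepend remembered facts about the entities in the current sentence,
    staying under the image model's input budget (counted here in words)."""
    facts = [fact for entity in entities for fact in memory[entity]]
    prompt = ". ".join(facts + [sentence])
    return " ".join(prompt.split()[:budget])

remember("Fredrick", "Fredrick is a red-haired former farmer")
remember("Fredrick", "Fredrick now wears a royal knight's armor")
print(build_prompt("He picks up the rock.", ["Fredrick"]))
# -> "Fredrick is a red-haired former farmer. Fredrick now wears a royal
#    knight's armor. He picks up the rock."
```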

One way you might be able to train such a model is to have it generate movie frames out of subtitles, since there's plenty of training data that way. Then you're pretty close to illustrating scenes from a story.