cousin_it

https://vladimirslepnev.me


I see, interesting, thank you! One more question though, my comment mentioned both text and images, with "uncanny averageness in details" applying to both. If you say there's a way to (mostly) avoid that for text by using base models instead of chat-tuned ones, what would be the analogous fix for images?

Yeah, the "4th grader" poem is way better than the example in the post. So maybe there's something to it. Can you explain why a base model will do this but a chat-tuned one won't?

That's true, it can't be as simple as making the generation more random (increasing the temperature or something). For example, a human can choose to write something highly ordered while interpreting the prompt in a creative way; an overall boost in randomness won't do that. So the thing I'm suggesting is probably really difficult.
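To make the "temperature" point concrete, here is a minimal sketch of how temperature sampling typically works in language-model decoding (the function name and logit values are illustrative, not from any particular library). Dividing the logits by the temperature before the softmax flattens or sharpens the whole distribution uniformly, which is exactly why it can only add diffuse noise at every token; it can't make the model commit to one coherent-but-unusual interpretation of the prompt.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from raw logits after temperature scaling.

    Higher temperature flattens the distribution (more uniform noise);
    lower temperature sharpens it toward the argmax. Neither changes
    *which* token is most likely, only how peaked the choice is.
    """
    if rng is None:
        rng = random.Random()
    scaled = [x / temperature for x in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

At very low temperature this collapses to picking the single most likely token; at very high temperature it approaches a uniform draw. Either way the randomness is global and structureless, which matches the comment's point that a temperature knob can't produce targeted creativity.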

There's another thing I wanted to note. Many people are sharing their AI creations and don't notice how they come across to others. My hypothesis is that people put a little bit of creativity into the prompt, get a result with exactly that much creativity, and happily click the share button. Work done! But if they'd tried to make the same content by hand, they'd realize that it requires a huge amount of creativity in the details, much more than they'd imagined when writing the prompt. And other people pick up on that (going back to my first comment): they sense whether the work was made with 100% creativity, or with 1% creativity and 99% averaging.

Yeah, it's the usual slop. Unreadable past the first sentence.

It makes me wonder, though. Right now AI texts and images don't look like they're sampled from the distribution of human texts and images. They're importantly different. For example, if you ask an AI to write a poem about a hamster driving a jeep, or generate an image of the same, then the result will indeed involve a hamster driving a jeep - but all other details about it will be uncannily, uniformly average. The only non-averageness will be coming from the prompt. Human texts and images aren't like that, they have non-averageness on all levels.

To me this seems like the sort of thing that could be solved with math. Can there be a generative AI whose output has non-averageness on all levels, in the same proportions as human-generated content?

I think Russia has a low standard of music, but a high standard of poetry and prose. That said, Russian song lyrics aren't the best example of that. Actual Russian poetry is much stronger, if you haven't read Pushkin you're in for a treat.

Yeah, I'm trying to think about the market scenario. Still can't tell if it works. If everyone in the market used the argument you describe, would the extra info actually be nonzero for everyone?

Can you explain that section a bit? By your argument, it makes sense to pay at least 50 cents for the contract. But if that argument were correct, then everyone else in the market would apply it too, and the price of the contract would be at least 50 cents with certainty. That seems weird.

I'm a bit confused. Let's say we're deciding between actions X and Y. We set up a pair of conditional prediction markets to determine which would give us more utility. And we promise to follow the market's decision - nothing can influence our choice between X and Y, except the market's decision. In this situation, is there still a difference between conditional and causal probabilities?

religions uniquely provide a source of meaning, community, and life guidance not available elsewhere

I'm not sure things like religion should be treated with a consumer mindset. For example, would it make sense to follow EA because it's a source of meaning and community? No, the point of following EA is to do good. With religion it's often similar, for example the early Christians were the first to build orphanages. The New Testament says a person is good or bad depending on what they do, not what they receive, so if someone said they were joining Christianity to receive something, they'd get very strange looks.

I think the universal precommitment / UDT solution is right, and don't quite understand what's weird about it.
