
Han · 7y · 10

There's a blogger you might enjoy reading whose name is Ramin Shokrizade: http://www.gamasutra.com/blogs/author/RaminShokrizade/914048/ . He's some kind of consultant for video game monetization schemes. I think he's a little bit hyperbolic and overwrought sometimes, but he has a lot of direct experience and textual evidence collected from other designers at companies like Zynga.

I think there are a lot of psych topics that are relevant for freemium games but not normal gambling, which means they're a great zone for research. Normal gambling games like poker, blackjack, slots, and lotteries tend to play the same way from round to round, which means they get a lot of addiction potential from normal variable reinforcement-type stuff. But freemium games are allowed to exhibit significant differences in play as time goes on, which means they can give the user some free wins to start with, some powerups that will eventually deplete, stuff like that. (There was a great wrestling game I ran into where the first three bosses could be beaten by mashing buttons and the fourth one was literally impossible without cheating -- too greedy, perhaps?)

Maybe the big important thing is that a lot of people are really loss-averse, and having state that persists between rounds means the game can threaten to take things away from you whenever it wants. In a lot of normal gambling situations, you can cut your losses and walk away. The sunk cost fallacy means many people are bad at deciding to do that, but loss aversion is an even stronger force than the sunk cost fallacy. Common example: "you've won an item, but your inventory is full -- delete something or you'll lose it forever."

Pachinko machines are more loosely regulated and, if I remember right, some even implement F2P-like loss-aversion schemes. Remember Mann Co. Keys in Team Fortress, where the game presents you with the opportunity for a reward but disguises that opportunity as a reward in itself? At one point it was in vogue for pachinko machines to tell you you'd earned a jackpot, but make you play a ton of extra rounds (by getting lucky again) to "unlock" it -- effectively the same scheme, and not that evil by itself. What made it evil was that if you stopped playing the machine, anyone else could sit down at it and steal your jackpot.

Han · 7y · 00

I think you're right. I'm badly overlooking a subtlety because I'm narrowing "describe" down to "is a suffix of." But you're right that "describe" can be extended to include a lot of other relationships between parts of the big sentence and little sentences, and you're also right that this argument doesn't necessarily apply if you unconstrain "describe" that way. (I haven't formalized exactly what you can constrain "describe" to mean -- only that there are definitions that obviously make our sledgehammer argument break.)

I think "a sentence can be countably infinite" is implicit from the problem description because the problem implies that our "giant block of descriptions" sentence probably has countably infinite size. (it can't exactly be uncountably infinite)

Han · 7y · 00

I thought Gurkenglas' solution was a lovely discrete math sledgehammer approach. There are a lot of subtly different problems that Thomas could have meant, and I think Gurkenglas' approach would probably be enough to tackle most of them.

(Attempting to summarize his proof: some English sentences, like the one this problem asks you to dig around in, are countably infinite in length. If two such sentences are distinct and have different infinite suffixes, then there's no way for the text of this one sentence to contain both of them.)
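Under the narrow reading discussed elsewhere in the thread ("describes" = "has as a suffix" -- an assumption, not something Gurkenglas necessarily committed to), the counting version of the sledgehammer can be written out in a few lines:

```latex
% Assumption carried over from the thread: ``describes'' means ``has as a suffix''.
% A countably infinite sentence $S = s_1 s_2 s_3 \cdots$ has one suffix per
% starting position, so its suffixes form a countable set:
\[
  \mathrm{Suff}(S) \;=\; \{\, s_n s_{n+1} s_{n+2} \cdots \;:\; n \in \mathbb{N} \,\}.
\]
% But over an alphabet with at least two symbols, the set of all countably
% infinite sentences is uncountable (Cantor's diagonal argument), so
\[
  \lvert \mathrm{Suff}(S) \rvert \;\le\; \aleph_0 \;<\; 2^{\aleph_0},
\]
% and no single sentence $S$ can have every infinite sentence as a suffix.
```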

Han · 7y · 00

Not a long note or a detailed dissection, but just a reminder: whenever you take single-dimensional data and make it multidimensional, it becomes harder and more subjective to analyze. (EDIT: To clarify, you can represent multidimensional data multidimensionally. But mapping multidimensional data to a lower-dimensional space usually involves finding a fit, which can introduce error, and mapping it to a lower-dimensional space is usually an important step in explaining it.) I suspect you'll find that if you give people this many dimensions to respond along, you'll get lots of different-looking representations of the same underlying signal.
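A toy sketch of the fit-error point, with made-up 2-D data: collapsing two-dimensional points down to one dimension means fitting a summary for the dropped coordinate (here, just its mean), and that fit carries error.

```python
# Made-up 2-D data points: (kept coordinate, dropped coordinate).
points = [(1.0, 2.0), (1.0, 5.0), (3.0, 1.0)]

# The 1-D map keeps x and summarizes y with its mean -- a "fit".
mean_y = sum(y for _, y in points) / len(points)
reconstructed = [(x, mean_y) for x, _ in points]

# Squared error of the fit: information the lower-dimensional map loses.
squared_error = sum((y - mean_y) ** 2 for _, y in points)
print(squared_error)
```

The same thing happens (less visibly) with any dimensionality reduction: the projection is a fit, and the residual is the part of the signal you can no longer explain.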

Maybe that's not bad: the default sort order is newest-to-oldest -- basically arbitrary -- and for most cases, "generally positive" and "generally negative" signal will be sorted in the correct order. But I still feel some suspicion because it's just one UI feature and it took you about two pages of words to pitch it.

Han · 7y · 80

I think there's a rule-of-thumby reading of this that makes a little bit more sense. It's still prejudiced, though.

A lot of religions have a narrative that ends in true believers being saved from death and pain, after which people aren't going to struggle over petty issues like scarcity of goods and things. I run into transhumanists every so often who have bolted these ideas onto their narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they're going to make sure most of the humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren't, some of our smarter robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering -- and they're not even superintelligent.

I imagine some transhumanism fans must have good reasons to put these things in the narrative, but I think it's extremely worth pointing out that these are ideas humans love aesthetically. If it's true, great for us, but it's a very pretty version of the truth. Even if I'm wrong, I'm skeptical of people who try to make definite assertions about what superintelligences will do, because if we knew what superintelligences would do then we wouldn't need superintelligences. It would really surprise me if it looked just like one of our salvation narratives.

(obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don't think this defeats my point, because for the conditions of these peoples' narrative to be met, we need the superintelligence to do things we wouldn't have thought of in most of the domains relevant to creating a utopia)

Han · 7y · 20

I think it's a little bit worse than this.

A lot of people who gamble compulsively don't do it because the odds are beyond them. (It's really easy to play slots a bunch of times, lose a lot of money, and realize you lost a lot of money.) There's something neurologically strange about people who gamble frequently even though they lose, and it's hard to pinpoint it, but it seems like variable reinforcement is winning out over logic.

If you buy a large number of lottery tickets, you're pretty likely to win some sort of prize. Related example: slot machines are designed to generate a bonus round or a jackpot about once within each ~$100 played, and that's a pretty normal level of play for someone who does it compulsively. Also, like casino games such as slots and blackjack, lottery tickets are pretty good at generating near misses and losses disguised as wins -- particularly scratcher and instant-ticket lotteries, because those tend to involve a small pool of symbols and elaborate presentation.
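A quick back-of-the-envelope on the "buy lots of tickets, win something" point, using hypothetical numbers rather than any real lottery's odds:

```python
# Hypothetical, illustrative numbers -- not real lottery odds: suppose each
# ticket pays *some* prize (usually a tiny one) with probability 0.25,
# independently. A heavy player buying 20 tickets almost always wins
# something, which keeps the reinforcement coming even while they lose
# money overall.
p_any_prize = 0.25
n_tickets = 20
p_at_least_one = 1 - (1 - p_any_prize) ** n_tickets
print(p_at_least_one)  # very close to certainty
```

That near-certainty of *some* payout is exactly the variable-reinforcement fuel, independent of the expected value being negative.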

There's also a giant sunk cost fallacy problem -- the problem is that understanding the sunk cost fallacy isn't enough to defeat it for a lot of people.

I would be willing to guess that a significant proportion of the people who play the lottery a lot have an accurate picture of the odds, but due to mental health problems they're going to continue to waste far too much money on it. I'd also be willing to guess that they generate most of the lottery proceeds because, even though they're numerically few, they buy more tickets than anyone else.

Han · 7y · 00

Thank you for the information! My brain does something weird when I see the word "actually," so I don't think I was charitable when I read your post.

Han · 7y · 00

Oh, absolutely! It's misleading for me to talk about it like this because there are a couple of different workflows:

  • train for a while to understand existing data, then optimize for a long time to try to impress the activation layer that knows the most about what the data means (AlphaGo's evaluation network, Deep Dream). Under this process you spend a long time optimizing for one thing (the network's ability to recognize) and then a long time optimizing for another thing (how much the network likes your current input).
  • train a neural network to minimize a loss function based on another neural network's evaluation, then sample its output (DCGAN). Under this process you spend a long time optimizing for one thing (the neural network's loss function) but a short time sampling another thing (outputs from the neural net).
  • train a neural network to approximate existing data and then just sample its output (seq2seq, char-rnn, PixelRNN, WaveNet, AlphaGo's policy network). Under this process you spend a long time optimizing for one thing (the loss function again) but a short time sampling another thing (outputs from the neural net).

It's kind of an important distinction because, as with humans, neural networks that can improvise in linear time can be sampled really cheaply (in deterministic time!), while neural networks that need you to run an optimization task are expensive to sample even after you've trained them.
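A toy contrast between the first and third workflows, with a hypothetical stand-in rather than a real network: `f` plays the role of an already-trained scorer, and the point is just the cost asymmetry between optimizing an input against it and running it forward once.

```python
# Stand-in for a trained network: a fixed scoring function over inputs.
def f(x):
    return -(x - 3.0) ** 2  # pretend the "network" likes inputs near 3.0

# Deep-Dream-style workflow: many optimization steps on the *input*,
# each paying for evaluations of f (numerical gradient here).
x, lr, eps = 0.0, 0.1, 1e-5
for _ in range(200):
    grad = (f(x + eps) - f(x - eps)) / (2 * eps)
    x += lr * grad  # x crawls toward what f likes, one step at a time

# char-rnn-style workflow: one cheap forward pass, no optimization loop.
sample = f(1.0)
print(x, sample)
```

Same trained artifact in both cases; only the sampling procedure differs, and that's where the cost difference lives.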

Han · 7y · 00

I'm confused. Isn't it evident from the rest of my comment that I agree with you?

(On an unrelated note: I think my upvote button has vanished. Otherwise I would have clicked it for your post!)

Han · 7y · 40

You're probably right! (At least some of the time.)

In music, I know a lot of people who think about things the same way you do, and they sensibly learn to use versatile tools like FM synthesis, because FM synthesis covers a really wide range of sounds. A lot of them even know how to make human voice-like sounds with these tools.
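A minimal sketch of what FM synthesis actually computes (all parameter values here are made-up examples): a carrier sine wave whose phase is modulated by a second sine wave. Sweeping the modulator frequency and the modulation index is what moves you through very different timbres.

```python
import math

# FM synthesis in one line: carrier phase is pushed around by a modulator.
# carrier_hz, mod_hz, and index are arbitrary example values.
def fm_sample(t, carrier_hz=440.0, mod_hz=220.0, index=2.0):
    return math.sin(2.0 * math.pi * carrier_hz * t
                    + index * math.sin(2.0 * math.pi * mod_hz * t))

sample_rate = 8000
second = [fm_sample(n / sample_rate) for n in range(sample_rate)]  # 1s of audio
```

Two parameters, huge timbral range -- which is exactly why it's such a versatile default tool.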

On average if you stick to those tools you'll do pretty well. They still fall back on using physical instruments for a lot of techniques, because you can do elaborate expressive things with physical instruments a lot more easily than with the machine.

In music, machines have been getting better, but they aren't perfect yet. A lot of input devices, even well-regarded ones, don't have the build quality of instruments made for professionals. It's really hard to simulate the physical feel of an acoustic instrument without actually building an acoustic instrument -- don't ask me why, but I've shopped around a lot and I've only found a couple input devices that really feel great for me after long-term use.

In art, there are a lot of hardware limitations. It's hard to make a tablet that looks great and feels great, and talking to an art program means you're subject to a lot of latency, and -- if your tablet doesn't have a display -- you're going to see your drawing appear on a different plane than you made it on. A lot of digital artists struggle with line quality and width variation because those things can be awkward on tablet input devices -- and depending on medium, those are often super fundamental (1) to how you pick out parts and subparts of an image and (2) to how you read its form.

You will notice there are a lot of really good digital painters and a lot of really bad digital line artists. That's a part of why!

Don't get me wrong, though. I think your point totally holds for parts of art that can be rehearsed and repeated an indefinite number of times until they look right. I also think that for planning and prototyping, you need to be able to iterate really fast and it needs to be fun, or at least unobtrusive. This is another one of those things that's also true of musicians: the really good musicians spend nine hours a day in the studio, and there has to be something about it that motivates them to get up in the morning.
