This basically means they are perfectly achieving their goal, right? Wirecutter's goal isn't to find the best product; it's to find the best product at a reasonable price. If you're a power user, you'll be willing to buy better and more expensive stuff.

Feature request: Q&A posts show a sidebar with all top-level answers and the associated usernames (example). Would be nice if the Anti-Kibitzer could hide these usernames.

The script works well on individual posts, but I find that on the lesswrong.com homepage, it displays names and vote counts for about 3 seconds before it finishes executing. Perhaps there's some way to make it run faster, or failing that, to block the page from rendering until the script finishes running?
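Not sure how the script is structured internally, but one way to avoid the flash might be something like the sketch below, assuming a Greasemonkey/Tampermonkey-style userscript that can run at document-start. The selectors are placeholders for illustration, not LessWrong's actual class names:

```typescript
// Hide names and vote counts via injected CSS before first paint, so nothing
// is visible while the main script is still rewriting the DOM.
// ".UsersNameDisplay" and ".VoteCount" are hypothetical placeholder selectors.
const hideStyle = document.createElement("style");
hideStyle.textContent = ".UsersNameDisplay, .VoteCount { visibility: hidden; }";
document.documentElement.appendChild(hideStyle);

// ... existing Anti-Kibitzer logic replaces names/scores with placeholders ...

// Once the replacements are done, reveal the (now anonymized) elements.
function revealAnonymized(): void {
  hideStyle.remove();
}
```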

Somewhat debatable whether this is a desirable feature, but right now the ordering of comments leaks information about their vote counts. Perhaps it would be good to randomize comment order.
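For illustration, the randomization could be as simple as a Fisher-Yates shuffle over the top-level comment nodes; the ".comment" selector here is a placeholder, not LessWrong's actual markup:

```typescript
// Shuffle top-level comment nodes so their on-page order no longer reflects karma.
function shuffleComments(container: Element): void {
  const comments = Array.from(container.querySelectorAll(":scope > .comment"));
  // Fisher-Yates shuffle of the collected nodes.
  for (let i = comments.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [comments[i], comments[j]] = [comments[j], comments[i]];
  }
  // Re-appending moves each node to the end in the new (shuffled) order.
  comments.forEach((c) => container.appendChild(c));
}
```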

Now that April 17 has passed, how much did you end up making on this bet?

I know more about StarCraft than I do about AI, so I could be off base, but here's my best attempt at an explanation:

As a human, you can understand that a factory gets in the way of a unit, and that if you lift it, it will no longer be in the way. The AI doesn't understand this. The AI learns by playing through scenarios millions of times and learning that, on average, in scenarios like this one, it gets an advantage when it performs this action. The AI has a much easier time learning something like "I should make a marine" (which it perceives as a single action) than "I should place my buildings such that all my units can get out of my base", which requires making a series of correct choices about where to place buildings when the space of possible building placements has thousands of options.

You could see this more broadly in the Terran AI, where it knows the general concept of putting buildings in front of its base (which it probably learned via imitation learning from watching human games), but it doesn't actually understand why it should be doing that, so it does a bad job. For example, in this game, you can see that the AI has learned:

1. I should build supply depots in front of my base.

2. If I get attacked, I should raise the supply depots.

But it doesn't actually understand the reasoning behind these two things, which is that raising the supply depots is supposed to prevent the enemy units from running into your base. So this results in a comical situation where the AI doesn't actually have a proper wall, allowing the enemy units to run in, and then it raises the supply depots after they've already run in. In short, it learns what actions are correlated with winning games, but it doesn't know why, so it doesn't always use these actions in the right ways.

Why is this AI still able to beat strong players? I think the main reason is that it's so good at making the right units at the right times without missing a beat. Unlike humans, it never forgets to build units or gets distracted. Because it's so good at execution, it can afford to do dumb stuff like accidentally trapping its own units. I suspect that if you gave a pro player the chance to play against AlphaStar 100 times in a row, they would eventually figure out a way to trick the AI into making game-losing mistakes over and over. (Pro player TLO said that he practiced against AlphaStar many times while it was in development, but he didn't say much about how the games went.)

At some point, all traders with this belief will have already bought the stock, and the price will stop going up, which makes the price movement anti-inductive.

I'm tempted to correct my past self's grammar by pointing out that "e.g." should be followed by a comma.

Is it possible to self-consistently believe you're poorly calibrated? If you believe you're overconfident, then you would start making less confident predictions, right?
