A magician asks you to choose any infinite sequence of 0s and 1s, and to start reciting it until they say stop. They will then make a prediction of the form "p% of the next n digits will be 0", and they will be correct to within 1% at least 99% of the times they perform the trick. How is the trick done?
h/t Alex Arkhipov
A solution:
We'll describe a strategy for the magician that stops you sometime within the first bits and gets an expected squared error of exactly regardless of your string. If we pick , the strategy has a squared error of at most , and so gets within 1% at least 99% of the time. This requires the magician to wait for up to bits. I suspect that's roughly optimal but haven't checked the construction carefully.
The strategy is to pick a random , then pick a random
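To spell out how a bound on the expected squared error yields the stated guarantee, the standard tool is Markov's inequality (the threshold used below is my own illustrative choice, not necessarily the exact bound the strategy above achieves). If the prediction is $\hat p$ and the true frequency of 0s among the next $n$ digits is $p$, and the strategy guarantees $\mathbb{E}[(\hat p - p)^2] \le \varepsilon$, then

$\Pr\big(|\hat p - p| > 0.01\big) = \Pr\big((\hat p - p)^2 > 10^{-4}\big) \le \varepsilon \cdot 10^{4},$

so any $\varepsilon \le 10^{-6}$ makes the prediction accurate to within 1% at least 99% of the time.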
I found this puzzle fun and I feel like I learned something useful about math when you gave it to me a few years ago.
I'll be thinking about this problem for the next hour or so unless something comes up, so I'll be writing my thoughts down here if I think they are important.
A couple of preliminary observations:
The magician must be playing a mixed strategy. This is easy to see: otherwise, a copy of the magician playing against him would know when he stops and what he predicts, and could deliberately falsify the prediction no matter what.
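To make this concrete with a worked bound (the numbers are mine, not the commenter's): against a deterministic magician, the adversary simulates the magician, learns the stopping point and the predicted fraction $p$ of 0s among the next $n$ digits, and then recites all 1s if $p \ge 50\%$ and all 0s otherwise. The realized fraction of 0s is then $0\%$ or $100\%$, so the prediction is off by at least $50$ percentage points, far more than the allowed $1\%$, every single time.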
It's not enough for the prediction after stopping to be random; the stopping time itself must also be random. This is because if I …
Is the following a correct reformulation of your problem?
Say that a magician's pure strategy is a function on finite bitstrings which returns either "go on" or "stop" together with a positive integer and a predicted frequency of zeros.
A magician's mixed strategy is a probability distribution over magician's pure strategies. (Btw, I'm not sure what kind of sigma-algebra is suitable here.)
A player's mixed strategy is a probability distribution over infinite bitstrings.
A magician's mixed strategy together with a player's mixed strategy define, in the obvious way, a probability …
A countably infinite number of people are each assigned a hat that is either red or blue.
Without communicating, they must each guess the color of their own hat. They cannot know each other's guesses.
Show that there exists a strategy by which only a finite number of people will guess incorrectly.
I've formalized this problem in Coq with mathcomp. Yeah, idk why I decided to do this.
From mathcomp Require Import all_ssreflect.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
(* A hat assignment gives each person (indexed by nat) a hat color as a bool. *)
Definition hats_assignment : Type := nat -> bool.

(* Person n's policy: a guess computed from the full assignment, plus a proof
   that the guess is unchanged if person n's own hat is flipped. *)
Definition policy_of_one_person (n : nat) : Type :=
  {policy : hats_assignment -> bool &
    forall ha : hats_assignment,
      let ha' := fun m => if m == n then ~~ (ha m) else ha m
      in policy ha == policy ha'}.

(* A strategy assigns such a policy to every person. *)
Definition strategy : Type :=
  forall n : nat, policy_of_one_person n.
... For clarification, the people can all see the color of other people's hats, right? If so, then:
The people agree in advance (using the axiom of choice!) on a set of representatives for the equivalence classes of $\{0,1\}^{\mathbb{N}}$ (infinite bit sequences encoding the hat colors), where two bit sequences are equivalent iff they differ in only finitely many positions.
When they are guessing, they all know the equivalence class they are in, and each guesses the color for their own hat that is implied by the agreed-upon representative of that equivalence class. By construction, only finitely many of them can guess incorrectly.
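In symbols, a compact restatement of the argument above (nothing new, just notation): write $x \sim y$ for $x, y \in \{0,1\}^{\mathbb{N}}$ iff $\{k : x_k \neq y_k\}$ is finite, and let $r$ be a choice function picking one representative from each $\sim$-class (this is where the axiom of choice is used). If $x$ is the true hat assignment, person $n$ sees every coordinate of $x$ except $x_n$, which is enough to determine the class $[x]$, and guesses $r([x])_n$. Since $r([x]) \sim x$, the set $\{n : r([x])_n \neq x_n\}$ of wrong guesses is finite.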
Show that every countably infinite graph contains either a countably infinite complete subgraph, or a countably infinite edgeless subgraph.
Another nice one. I liked this one enough for a strong upvote. Here is my solution:
Say the graph is $G$. If $G$ has no vertices of infinite degree, then it has a countably infinite edgeless subgraph: we just pick any vertex, throw away its neighbors, pick another vertex, throw away its neighbors, and so on. Since each vertex has only finitely many neighbors, we only discard finitely many vertices at each step and can always continue, so we end up picking countably infinitely many vertices without any edges between them, and we're done.
Now suppose $G$ has at least one vertex of infinite degree, say $v$. Let $H$ be the subgraph consisting of the neighbors of $v$ and all the edges …
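Since the comment above is cut off, here is one standard way to finish the whole proof, for reference (it may differ from the route the commenter intended). Build vertices $v_1, v_2, \dots$ and infinite vertex sets $V_0 \supseteq V_1 \supseteq \dots$ as follows: let $V_0$ be the whole (countably infinite) vertex set, and given infinite $V_{i-1}$, pick any $v_i \in V_{i-1}$. Either infinitely many vertices of $V_{i-1} \setminus \{v_i\}$ are adjacent to $v_i$ or infinitely many are not; let $V_i$ be an infinite set consisting entirely of one kind, and record the label $c_i \in \{\text{adjacent}, \text{non-adjacent}\}$ accordingly. Then for $i < j$ we have $v_j \in V_{j-1} \subseteq V_i$, so whether $v_i$ and $v_j$ are adjacent is determined by $c_i$ alone. By pigeonhole, infinitely many indices share the same label, and the corresponding $v_i$ form a countably infinite complete subgraph (if that label is "adjacent") or a countably infinite edgeless subgraph (otherwise).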
This sounds obviously false to me? Why can’t you have a countably infinite graph where all vertices have degree 2?
There are $n$ paratroopers who simultaneously land on the plane at independently and identically distributed positions, drawn from a common probability density on the plane. After landing, each paratrooper sleeps for an independent and identically distributed time drawn from a common density on the positive real numbers. Before they wake up they are unable to take any action other than standing at the position they've landed on.
All paratroopers have two devices with them: an infinite precision radar and a detonation button, both of which can only be used once.
Using the radar tells them the exact positions of all other paratroopers relative to the current position of the radar user. (This is important - the paratroopers don't have a compass to fix global directions such as north and south, and the radar only gives them information about relative positions.) This is the only way the paratroopers have of getting information about each other's locations: aside from all of them being able to use the radar once, they have no other way of getting information about the location of other paratroopers even if they are standing on the same spot at the same time. The radar also doesn't tell the user which paratroopers are at which locations, it only displays every location (possibly with multiplicity in the case of coincidence) at which there is a paratrooper.
The mission of the paratroopers is to press their detonation buttons all at the same location on the plane, though not necessarily at the same time. In other words, they all need to coordinate to go to the same point and press the detonation button once they are there, after which they are once again free to move as they please until the mission is either a success or a failure.
As a final technical detail, both density functions are nonzero on their whole domains: the position density on all of the plane, and the sleep-time density on all of $(0, \infty)$.
Assuming that the paratroopers can agree on a strategy before the mission begins, show that there is a strategy with which they can win almost surely (with probability equal to 1).
Cool problem, I think I may have (at least a sketch of) a solution. The way the paratroopers are allowed to move wasn't specified in the problem, but it would be too easy if they were allowed to just teleport around to arbitrary locations, so I'll assume that their movement has to be described by continuous functions.
Everyone will head towards a point chosen by the first paratrooper to wake up. Thus the first paratrooper to wake up must somehow "mark" himself. Further, he must mark himself instantly, right after waking up. This seems impossible, since th
Here is a kind of long solution that I think works (ETA: for the case )
This was posted on our house puzzle slack before, it was one of the few problems that wasn't solved though it didn't get as much love as some of the harder problems.
I'd expect that a fairly large fraction of working mathematicians wouldn't get it in 2 hours (and similarly for most of the hard Putnam problems) though I may be underestimating their ability to solve contest-like problems or it may have a simpler solution than this one.
Every paratrooper will use their radar as soon as they wake up …
For n = 4 the following trick works, but does not really have the same flavor as the n = 3 case:
If the soldiers land in a convex position (they are the corners of a convex quadrilateral) then meet at the intersection of the straight lines connecting the opposite soldiers. If they land in a triangle with one soldier in the interior, meet at the soldier in the middle.
Here the soldiers are forced to walk in straight lines.
It would not surprise me if there are tricks like these for some more n, but I don't see a general strategy. I think there is something you can do for odd n that is quite different from the other solutions so far, but I have not yet fully checked it or worked out what it requires from the radar.
Does the radar tell the paratroopers which other paratrooper is where, or just that there is some unknown paratrooper at each spot? In other words, are the paratroopers labeled on the radar?
Here is my solution:
For $n = 2$ the solution is easy, as others have pointed out.
For $n = 3$, everyone executes the following algorithm (which works even if everyone has to move at a constant speed):
when you wake up:
    if (there exists a rational angle):
        let that angle be ABC (vertex at B)
        if you're B:
            press the button, then stand still
        else:
            walk towards B in a straight line
            press the button at the point B
    else:
        let {B, C} be the pair of points with the shortest distance
        let B be the one with the larger …
I'm not seeing what the time delays have to do with it, nor the probability distributions. Can't they win with certainty (not just almost surely) by going to the average of all their positions?
Everyone uses the radar once they wake up. The two closest soldiers go towards each other in a straight line. Once they meet up (possibly after waiting for the other one to wake up), they are free to visit all the other soldiers in any order; when they are all together, they can detonate. The important property is that the initial shortest distance is unique almost surely.
Prove or disprove the existence of random variables X_n so that the expected value of X_n goes to infinity as n goes to infinity but X_n goes to 0 almost surely (maybe not as much of a problem, but a pretty useful thing to think about for people who use EV calculations to make decisions).
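A standard construction for this (one of many; included here as a worked example, and a spoiler for anyone who wants to think about it first): let $U$ be a single uniform random variable on $(0,1)$ and set $X_n = n^2 \cdot \mathbf{1}\{U < 1/n\}$. Then $\mathbb{E}[X_n] = n^2 \cdot \Pr(U < 1/n) = n \to \infty$, while for every outcome with $U > 0$ we have $X_n = 0$ as soon as $n > 1/U$, so $X_n \to 0$ almost surely.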
Find the number of ways to tile a 1000 by 1000 grid with white and black tiles so each tile is adjacent to exactly two tiles of the same color.
First is much easier than 2nd.
Find the number of ways to tile a 1000 by 1000 grid with white and black tiles so each tile is adjacent to exactly two tiles of the same color.
The adjacency condition means that same-colored regions will form simple closed loops, since ends and splits are disallowed. The smallest allowed region is a 2x2 block of tiles. The basic "textures" will be checkerboards of 2x2 regions and longer alternating stripes or concentric rings.
There are two ways to choose the color of the first corner tile at position (1,1). The two tiles adjacent to the corner, (2,1) and (1,2), must …
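As a quick worked check of the 2x2-block "texture" mentioned above (my own verification, shown on a 4x4 corner of the grid with W = white, B = black):

W W B B
W W B B
B B W W
B B W W

A corner cell has 2 neighbors, both inside its own 2x2 block, hence both of its color. An edge cell has 3 neighbors, exactly 2 of them in its own block. An interior cell has 4 neighbors, exactly 2 in its own block, and the other 2 lie in orthogonally adjacent blocks, which have the opposite color in this checkerboard-of-blocks pattern. So every cell has exactly two same-colored neighbors, as required (this uses that 1000 is even, so the 2x2 blocks tile the grid exactly).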
About the second problem:
I've determined computationally the number of tilings of an $n \times n$ grid: there's one closed form when $n$ is even and a different one when $n$ is odd, but I don't have a proof of this fact yet. I'll be looking for one - the answer looks quite nice, so there ought to be a neat argument for seeing why it's like this.
More precisely, it looks like the possible tilings correspond to the ways of tiling the first row of the grid with 2x1 white or black dominoes. It's easy to see that the first row has to be of this form and that any such first row can't correspond to more than one tiling …
My university has a problem sheet question that presents the first problem in a nice way that feels more real-world.
Find the maximum cardinality of a set of disjoint figure-eights in the plane, and prove that it is the maximum.
Let a = 1/n be the reciprocal of a positive integer n. Let A and B be two points of the plane such that the segment AB has length 1. Prove that every continuous curve joining A to B has a chord parallel to AB and of length a. Show that if a is not the reciprocal of an integer, then there is a continuous curve joining A to B which has no such chord of length a. [Source: Challenging Mathematical Problems with Elementary Solutions vol.2]
I forgot about this one. For the first problem:
You can get an injection from any collection of disjoint figure-eights in the plane into $\mathbb{Q}^2 \times \mathbb{Q}^2$ by picking a rational point in the interior of each of the two circles making up the figure, so there can only be countably many such disjoint figures. This is an injection because if another figure-eight had the same pair of rational points as its representative, then it's easy to show that the two figures must intersect somewhere.
Let be an ordinal s.t. . Let be the cardinality of functions from to . Is ? If so, prove it.
Nice question. I'm providing my solution below, spoiler protected so people can attempt to solve the problem on their own.
By König's theorem
so the answer is no. This is also the smallest counterexample that ZFC should be able to decide, since GCH is undecided in ZFC and if GCH is true then will have a base two logarithm for any successor ordinal and any infinite that has a base two logarithm will satisfy .
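For reference, since the comment invokes it: König's theorem implies that $\operatorname{cf}(2^{\kappa}) > \kappa$ for every infinite cardinal $\kappa$. In particular $2^{\aleph_0}$ has uncountable cofinality, so it cannot equal any cardinal of cofinality $\omega$ (such as $\aleph_\omega$). This is the general form of the fact typically used in such counterexamples; the exact cardinals are as in the parent problem.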
Define a ribbon tile of length $n$ to be a 2D configuration of $n$ unit squares, constructed from a starting square by repeatedly adjoining a square above or to the right of the most recently added square. Prove that the number of tilings of a square by length-$n$ ribbon tiles is …
What do the following problems all have in common?
What's the number of permutations in the symmetric group on $n$ letters without fixed points? What happens to the ratio of this number to the size of the group as $n \to \infty$?
Suppose you draw $n$ samples with replacement from the set $\{1, 2, \dots, n\}$, uniformly at random. Say the set of all elements you get is $S$, where we don't distinguish between two draws of the same element. What's the expected value of the cardinality of the set difference $\{1, 2, \dots, n\} \setminus S$? What happens to this expectation, relative to $n$, as $n \to \infty$?
Two players A and B play the following zero-sum game: they take turns sampling from a probability distribution on the real numbers with a continuous probability density function. The first player to draw a value that's smaller than the maximum of all previous values drawn by both players loses. If A goes first, what's the probability that B wins the game?
Shoutout to Jane Street for this problem!
The secretary problem: there's a totally ordered finite set $S$ with $n$ distinct elements. You know the cardinality of $S$. You play the following single-player game: you get to sample one element at a time from $S$, without replacement and uniformly at random. At any point you may choose to stop sampling and keep the last element you've sampled. You win if you halt on the maximum element of $S$ and lose otherwise.
For $n$ large, what's the strategy that maximizes the chance of victory in this game? What's the probability of victory achieved by the optimal strategy?
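As a worked reference for the first of these problems (a standard identity, though of course it partially spoils the "what do they have in common" question): the number of fixed-point-free permutations of $n$ letters is

$D_n = n! \sum_{k=0}^{n} \frac{(-1)^k}{k!},$

so the ratio $D_n / n!$ tends to $e^{-1} \approx 0.3679$ as $n \to \infty$.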
Consider the following game: there are $n$ players standing in a row, each with a tower of countably infinitely many red or blue hats on top of their heads. The probability of any hat being red is 1/2, and likewise for any hat being blue, and the colors of the hats are jointly independent. Each player can see all of the hats on the heads of everyone other than themselves, but they can't see any of their own hats.
To win the game, the players must all simultaneously guess a natural number (the number can be different for different players), and ensure that, for every player, the hat at the position they named (counting from the bottom of their own tower) is red. For example, if three players are in the game and their guesses are 1, 5, and 13, then they win if the first hat from the bottom on player 1's head, the fifth hat from the bottom on player 2's head, and the thirteenth hat from the bottom on player 3's head are all red. If even one of them is blue, then they lose.
The players can all agree on a strategy to execute as a group beforehand, but can't communicate with each other once the game begins.
Prove that there is a strategy with which the players can win with probability .
Let $S$ be the set of integers $n$ such that every group with a proper subgroup of index $n$ also has a proper subgroup of index strictly less than $n$.
Give an explicit description of $S$.
Show that there are no positive integer solutions to $x^n + y^n = z^n$ for $n \ge 3$.
An excellent solution should take slightly more space than this comment ( ͡° ͜ʖ ͡°).
A good mathematician who hasn't seen the problem before should take anywhere from 30 minutes to 2 hours to solve it.
I wouldn't have necessarily characterized that as
the best elementary math problems you know?
That's fair, though I'm not sure what else to put in the title in this case. It's necessarily going to be a compressed version of what I'm actually asking for.
- A good mathematician who hasn't seen the problem before should take anywhere from 30 minutes to 2 hours to solve it.
- The solution should only involve undergraduate level maths. Difficult Putnam problems are a good benchmark for what kind of maths background should be required to solve the problems. The background required can be much less than this, but it shouldn't be more.
- For whatever reason you think the problem should be more widely known. The reason is completely up to you: the solution might include some insight which you find useful, it might be particularly elegant, it might involve some surprising elements that you wouldn't expect to appear in the context of the problem, et cetera.
- It's fine if the problem is well known, your examples don't have to be original or obscure.
'Good 0.5-2 hour undergraduate problems you think should be more widely known.'
Overall, the tricky thing about it is finding a problem that meets all those requirements.
https://www.lesswrong.com/posts/R82zuc4MZXvG2dJXY/if-your-solution-doesn-t-work-make-it-work
This has a good problem that might take a while if you're not familiar with linear algebra. It might or might not be useful for spreading basic understanding of linear algebra, or of what (some of) it is about.
Is this good mathematician an undergrad?
The background required can be much less than this, but it shouldn't be more.
I didn't read this the first time through, and I think it supports my posting the above link here, even if the problem doesn't take long enough.
This has a good problem that might take a while if you're not familiar with linear algebra
The problem is very simple without linear algebra - just notice that relative parities are preserved (i.e. if there are an even number of reds and an odd number of blacks, then after any sequence of moves they will always have opposite parities).
The given solution is meant to highlight a strategy, not to be the easiest way to solve the problem.
I'm looking for math problems of a specific kind. Here are the conditions I hope the problems satisfy:
Here are some examples:
If a polynomial with rational coefficients defines an injective map $\mathbb{Q} \to \mathbb{Q}$, must it also define an injective map $\mathbb{R} \to \mathbb{R}$? If yes then prove this is true, if no then find an explicit counterexample.
Prove that $\mathrm{Hom}\left(\prod_{k \in \mathbb{N}} \mathbb{Z}, \ \mathbb{Z}\right) \cong \bigoplus_{k \in \mathbb{N}} \mathbb{Z}$. In words, prove that homomorphisms of abelian groups from the direct product of countably many copies of $\mathbb{Z}$ to $\mathbb{Z}$ themselves form a group that's isomorphic to the direct sum of countably many copies of $\mathbb{Z}$.
If $f : \mathbb{R} \to \mathbb{R}$ is a continuous function such that the sequence $f(\alpha), f(2\alpha), f(3\alpha), \dots$ converges to $0$ for every $\alpha > 0$, must it be the case that $\lim_{x \to \infty} f(x) = 0$? If yes then prove this is true, if no then find an explicit counterexample.
They are all relatively famous but they should give a sense of the flavor of what I'm looking for.