All of Ben's Comments + Replies

It says "I am socially clueless enough to do random inappropriate things"

In a sense I agree with you, if you are trying to signal something specific, then wearing a suit in an unusual context is probably the wrong way of doing it. But, the social signalling game is exhausting. (I am English, maybe this makes it worse than normal for me). If I am a guest at someone's house and they offer me food, what am I signalling by saying yes? What if I say no? They didn't let me buy the next round of drinks, do I try again later or take No for an answer? Are they offe... (read more)

4Jiro
Yes, but that doesn't mean that you can just avoid it and its consequences. Like war, you may not be interested in it, but it is interested in you. And if you can't avoid sometimes messing up, you can at least avoid making it worse than it has to be (such as by gratuitously wearing inappropriate clothes). Yes, but he's acting like it's a triumphant success. Voluntarily deciding "I don't want social skills" is a surrender that seriously harms you. If you can't get social skills perfect, at least do what you can. And he certainly can avoid wearing inappropriate suits, even if he might mess up deciding when to buy drinks. Genuinely communicating "I don't care and I want you to know it" without communicating bad things at the same time is countersignalling. Not just anyone can countersignal. Trump can do this because he's in a powerful position that implies a certain amount of cluefulness (and even then, his opponents are happy to jump on this sort of stuff as evidence of cluelessness).

A nice post about the NY flat rental market. I found myself wondering: does the position you are arguing against at the beginning actually exist, or is it set up only as a rhetorical thing to kill? What I mean is this:

everything’s priced perfectly, no deals to sniff out, just grab what’s in front of you and call it a day. The invisible hand’s got it all figured out—right?

Do people actually think this way? The argument seems to reduce to "This looks like a bad deal, but if it actually was a bad deal then no one would buy it. Therefore, it can't be a bad dea... (read more)

You have misunderstood me in a couple of places. I think maybe the diagram is confusing you, or maybe some of the (very weird) simplifying assumptions I made, but I am not sure entirely.

First, when I say "momentum" I mean actual momentum (mass times velocity). I don't mean kinetic energy.

To highlight the relationship between the two, the total energy of a mass on a spring can be written as $E = \frac{p^2}{2m} + \frac{1}{2}kx^2$, where p is the momentum, m the mass, k the spring strength and x the position (in units where the lowest potential point is at x=0... (read more)

1Isaac King
Well that explains why you got the wrong answer! Springs, as you now point out, work opposite the way gravity does, in that the longer a spring is, the more energy it takes to continue to deform it. (Assuming we mean an ideal spring, not one that's going to switch to plastic deformation at some point.) So if we were talking about springs, you would be correct that the most efficient time to teleport the spring longer would be when it's already as long as possible. But we are not talking about springs, we are talking about gravity, which works differently. (Not only is the function going in a different direction, but also at a different rate. Gravity decreases as the inverse square of the distance, whereas spring force increases linearly with distance.) So your "simplification" is just wrong. You stated: This is false. It takes more energy to move an object up by 1 meter on the surface of Earth than it does a million km away, because gravity gets weaker as you go further away. So if you want to maximize the gain in potential energy you get from your teleportation machine, you want to use it as close to the planet as possible. (An easy way to see why this must be true is that an object's potential energy at infinity is finite, so each additional interval of distance must decrease in energy in order for the sum of all of them to stay finite.)
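A quick numerical check of this point (a sketch, assuming Newtonian gravity and Earth-ish numbers, purely for illustration):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
m = 1.0         # 1 kg test object

def lift_energy(r, d=1.0):
    # Energy to raise m by d metres starting at radius r:
    # Delta U = G M m (1/r - 1/(r+d)), which shrinks as r grows.
    return G * M * m * (1.0 / r - 1.0 / (r + d))

near = lift_energy(6.371e6)  # at the Earth's surface: ~9.8 J per metre
far = lift_energy(1.0e9)     # a million km out: ~4e-4 J per metre
```

So the same 1 m teleport buys vastly more potential energy near the surface than far away.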

I am not sure that example fully makes sense.  If trade is possible then two people with 11 units of resources can get together and do a cost 20 project. That is why companies have shares, they let people chip in so you can make a Suez Canal even if no single person on Earth is rich enough to afford a Suez Canal.

I suppose in extreme cases where everyone is on or near the breadline some of that "Stag Hunt" vs "Rabbit Hunt" stuff could apply.

4Viliam
Ah, I meant something like: people have 11 units of resources, but they need ~10 to survive, so 10 of them would have to invest all their savings... which is unlikely to happen, because coordination is hard. You are right that companies with shares are the way to overcome it. I was thinking deeper in history, where e.g. science could only happen because someone had a rich sponsor. Without the rich sponsors, science probably would not have happened; Newton would be too busy picking the apples. Only a small fraction of rich people become sponsors of science, but it's better than nothing. Perfect equality could mean that if somehow things go wrong, they will go equally wrong for everyone, so no one will have the slack necessary to find the way out... is my intuition about this scenario.

I agree with you that, if we need to tax something to pay for our government services, then inheritance tax is arguably not a terrible choice.

But a lot of your arguments seem a bit problematic to me. First, as a point of basic practicality, why 100%? Couldn't most of your aims be achieved with a lesser percentage? That would also smooth out weird edge cases.

There is something fundamentally compelling about the idea that every generation should start fresh, free from the accumulated advantages or disadvantages of their ancestors.

 

This quote stood out t... (read more)

2Viliam
Also, sometimes inequality functions a bit like division of labor. Imagine that everyone has 11 units of resources, and you need 20 to start a project. Compare to a situation where most people have 10 units of resources and one person has 30. There is no guarantee that the rich person will start the project, but the chances are probably higher than in the first scenario.

I think I am not understanding the question this equation is supposed to be answering, as it seems wrong to me.

I think you are considering the case where we draw arrowheads on the lines? So each line is either an "input" or an "output", and we randomly connect inputs only to outputs, never connecting two inputs together or two outputs? With those assumptions I think the probability of only one loop on a shape with N inputs and N outputs (for a total of 2N "puts") is 1/N.

The equation I had ( (N-2)!! / (N-1)!!) is for N "points", which are not pre-assigned... (read more)
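The 1/N claim above (a uniformly random bijection from inputs to outputs gives one loop exactly when the permutation is a single cycle) is easy to spot-check by brute force — a quick sketch:

```python
from itertools import permutations

def num_loops(perm):
    # Count the cycles of a permutation (given as a tuple/list).
    seen = [False] * len(perm)
    loops = 0
    for i in range(len(perm)):
        if not seen[i]:
            loops += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return loops

def p_one_loop(n):
    # Exact probability a uniform random permutation of n elements
    # is a single cycle; the claim is this equals 1/n.
    perms = list(permutations(range(n)))
    return sum(num_loops(p) == 1 for p in perms) / len(perms)
```

For example `p_one_loop(5)` enumerates all 120 permutations and finds 24 single cycles, i.e. 1/5.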

1James Camacho
The bottom row is close to what I imagine, but without IO ports on the same edge being allowed to connect to each other (though that is also an interesting problem). These would be the three diagrams for the square: The middle one makes a single loop which is one-third of them, and n=4/2=2 in this case. My guess for how to prove the recurrence is to "glue" polygons together: There are n+1 pairs of sizes (k,n+1−k) we can glue together (if you're okay with 2-sided polygons), but I haven't made much progress in this direction. All I've found is gluing two polygons together decreases the number of loops by zero, one or two.

This is really wonderful, thank you so much for sharing. I have been playing with your code.

The probability that there is only one loop is also very interesting. I worked out something for the simplest case, which feels like it is probably already well known, but wasn't to me until now.

The simplest case is one tile. The orange lines are the "edging rule". Pick one black point and connect it to another at random. This has a 1/13 chance of immediately creating a closed loop, meaning more than one loop total. Assuming it doesn't do that, the next connection ... (read more)

1James Camacho
So, I'm actually thinking about something closer to this for "one loop": This is on a single square tile, with four ports of entry/exit. What I've done is doubled the rope in each connection, so there is one connection going from the top to the bottom and a different connection going from the bottom to the top. Then you tie off the end of each connection with the start of the connection just clockwise to it. Some friends at MIT solved this problem for a maths class, and it turns out there's a nice recurrence. Let $P(n,\ell)$ be the probability there are $\ell$ loops in a random knot on a single tile with $2n$ sides. Then $P(n,\ell) = \frac{2}{n+1}P(n-1,\ell-1) + \frac{n-1}{n+1}P(n-2,\ell)$. So, if you're looking for exactly one loop, you'd have $P(n,1) = \frac{n-1}{n+1}P(n-2,1)$, which gives $P(n,1) = \frac{1}{n+1}$ for even n and 0 for odd n. I can't really explain where this recurrence comes from; their proof was twenty pages long. It's also too complicated to really apply to multiple tiles. But, maybe there's a more elementary proof for this recursion, and something similar can be done for multiple tiles.
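The closed form can at least be checked against the recurrence mechanically. A sketch, assuming base cases P(0,1) = 1 and P(1,1) = 0 (the trivial tiles):

```python
from fractions import Fraction

def p_single_loop(n):
    # P(n,1) from the reduced recurrence P(n,1) = (n-1)/(n+1) * P(n-2,1).
    # The l-1 term drops out because P(m,0) = 0: there is always
    # at least one loop. Assumed base cases: P(0,1)=1, P(1,1)=0.
    if n == 0:
        return Fraction(1)
    if n == 1:
        return Fraction(0)
    return Fraction(n - 1, n + 1) * p_single_loop(n - 2)
```

This reproduces the claimed closed form: the square (n=2) gives 1/3, matching the "one of the three diagrams" count above.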

That is a nice idea. The "two sides at 180 degrees" option only occurred to me after I had finished. I may look into that one day, but with that many connections it needs to be automated.

In the 6 entries/exits ones above you pick one entry, and you have 5 options of where to connect it. Then you pick the next unused entry clockwise, and have 3 options for where to send it; then you have only one option for how to connect the last two. So it's 5×3×1 = 15 different possible tiles.

With 14 entries/exits, it's 13×11×9×7×5×3×1 = 135,135 different tiles. (13!!, for !! being ... (read more)
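The double-factorial counts are easy to verify mechanically (a sketch; `double_factorial` is just an illustrative helper):

```python
import math

def double_factorial(n):
    # n!! = n * (n-2) * (n-4) * ... down to 1 (for odd n) or 2 (for even n)
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

# 5!! = 15 tiles with 6 ports, 13!! = 135135 tiles with 14 ports,
# and C(14, 2) = 91 possible individual connections on one tile.
counts = (double_factorial(5), double_factorial(13), math.comb(14, 2))
```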

2Measure
You could split each full tile into its four sub-tiles, each with six connection points. Then, each sub-tile can be one of 15 flavors.
7James Camacho
Your math is correct: it's 13!! for the number of tiles and $\binom{14}{2} = 91$ for the number of connections. I wrote some code here: https://github.com/programjames/einstein_tiling Here's an example: An interesting question I have is: suppose we tied off the ends going clockwise around the perimeter of the figure. What is the probability we have exactly one loop of thread, and what is the expected number of loops? This is a very difficult problem; I know several MIT math students who spent several months on a slightly simpler problem.

I still find the effect weird, but something that I think makes it more clear is this phase space diagram:

We are treating the situation as 1D, and the circles in the x, p space are energy contours. Total energy is distance from the origin. An object in orbit goes in circles with a fixed distance from the origin (i.e. a fixed total energy).

The green and purple points are two points on the same orbit. At purple we have maximum momentum and minimum potential energy. At green it's the other way around. The arrows show impulses, if we could suddenly add momentum ... (read more)
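For what it's worth, the circular contours do come out exactly, but only after rescaling the axes. A sketch, using the mass-on-a-spring simplification from the earlier comment:

```latex
E = \frac{p^2}{2m} + \frac{1}{2} k x^2 = P^2 + X^2,
\qquad P \equiv \frac{p}{\sqrt{2m}}, \quad X \equiv \sqrt{\frac{k}{2}}\, x .
```

So in the scaled $(X, P)$ plane the constant-energy contours are circles of radius $\sqrt{E}$.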

1Isaac King
Teleporting an object 1 meter up gives it more energy the closer it is to the planet, because gravity gets weaker the further away it is. If you're at infinity, it adds 0 energy to move further away. I think your error is in not putting real axes on your phase space diagram. If going to the right increases your potential energy, and the center has 0 potential energy, then being to the left of the origin means you have negative potential energy? This is not how orbits work; a real orbit would never leave the top right quadrant of the phase space since neither quantity can be negative. You also simply assume that arrows of the same length are imparting the same amount of energy, but don't check; in reality, if you want the constant-energy contours to be a circle, the axes can't be linear. (Since if they were linear, an object that has half its energy as potential and half as kinetic would be at [0.5, 0.5], which is inside the unit circle.) (I'm assuming that when you say "momentum" you mean kinetic energy, but those are different things. You claim that any point on the Y axis has equal momentum and energy, but setting aside the fact that these quantities use different units, momentum is proportional to speed, while kinetic energy scales quadratically.)

I am not sure that is right. A very large percentage of people really don't think the rolls are independent. Have you ever met anyone who believed in fate, karma, horoscopes, lucky objects or prayer? They don't think it's (fully) random and independent. I think the majority of the human population believe in one or more of those things.

If someone spells a word wrong in a spelling test, then it's possible they mistyped, but if it's a word most people can't spell correctly then the hypothesis "they don't know the spelling" should dominate. Similarly, I think it is fair to say that a very large fraction of humans (over 50%?) don't actually think dice rolls or coin tosses are independent and random.

2Richard_Kennaway
They may well do. But they are wrong.

That is a cool idea! I started writing a reply, but it got a bit long so I decided to make it its own post in the end. ( https://www.lesswrong.com/posts/AhmZBCKXAeAitqAYz/celtic-knots-on-einstein-lattice )

I stuck to maximal density for two reasons: (1) to continue the Celtic knot analogy, and (2) because it means all tiles are always compatible (you can fit two side by side at any orientation without losing continuity). With tiles that don't use every facet this becomes an issue.

Thinking about it now, and without having checked carefully, I think this compatibility does something topological and forces odd macrostructure. For example, if we have a line of 4-tiles in a sea of 6-tiles (4-tiles use four facets), then we can't end the line of 4-tiles without breaking ... (read more)

That's a very interesting idea. I tried going through the blue one at the end.

It's not possible in that case for each string to strictly alternate between going over and under, by any of the rules I have tried. In some cases two strings pass over/under one another, then those same two strings meet again when one has travelled two tiles and the other three. So they are de-synced: they both think it's their turn to go over (or under).

The rules I tried to apply were (all of which I believe don't work):

  • Over for one tile, under for the next (along each string)
  • Ove
... (read more)

I wasn't aware of that game. Yes, it is identical in terms of the tile designs. Thank you for sharing that, it was very interesting, and that Tantrix wiki page led me to this one, https://en.wikipedia.org/wiki/Serpentiles , which goes into some interesting related stuff with two strings per side or differently shaped tiles.

Something related that I find interesting, for people inside a company, the real rival isn't another company doing the same thing, but people in your own company doing a different thing.

Imagine you work at Microsoft in the AI research team in 2021. Management want to cut R&D spending, so either your lot or the team doing quantum computer research are going to be made redundant soon. Then the timeline splits. In one universe, OpenAI release ChatGPT; in the other, PsiQuantum do something super impressive with quantum stuff. In which of those universes do th... (read more)

I think economics should be taught to children, not for the reasons you express, but because it seems perverse that I spent time at school learning about Vikings, Oxbow lakes, volcanoes, Shakespeare and Castles, but not about the economic system of resource distribution that surrounds me for the rest of my life. When I was about 9 I remember asking why 'they' didn't just print more money until everyone had enough. I was fortunate to have parents who could give a good answer, not everyone will be.

4lsusr
My favorite answer to "why 'they' didn't just print more money until everyone had enough" is that after the USA left the gold standard in 1971, the US government really did just print more money. Meanwhile, >50% of the federal budget goes to healthcare and pensions. In this way, the US government kind of is just printing money until everyone has enough, and is doing what voters demand.

Stock buybacks! Thank you. That is definitely going to be a big part of the "I am missing something here" I was expressing above.

I freely admit to not really understanding how shares are priced. To me it seems like the value of a share should be related to the expected dividend pay-out of that share over the remaining lifetime of the company, with a discount rate applied on pay-outs that are expected to happen further in the future (i.e. dividend yields 100 years from now are valued much less than equivalent payments this year). By this measure, justifying the current price sounds hard.

Google says that the annual dividend on Nvidia shares is 0.032%. (Yes, the leading digits are 0.0). ... (read more)
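The discounting intuition can be made concrete with a sketch (hypothetical numbers: a flat $1 annual dividend and an assumed 5% discount rate):

```python
def present_value(dividends, rate):
    # Discounted sum of a stream of annual payouts:
    # a payout d arriving in year t is worth d / (1 + rate)^t today.
    return sum(d / (1 + rate) ** t for t, d in enumerate(dividends, start=1))

# 100 years of $1/year at a 5% discount rate is worth ~$19.85 today, not $100.
pv = present_value([1.0] * 100, 0.05)
```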

8fdrocha
One quick observation about NVDA dividends that not many people might be aware of: NVDA pays a quarterly dividend of exactly one cent ($0.01) per share. They don't do this for the "usual" reason companies pay dividends (returning money to shareholders) but because by paying a non-zero dividend at all NVDA becomes part of dividend-paying company indexes, and that means that ETFs that follow those indexes will buy NVDA shares. So they technically pay a dividend, but for the purposes of valuation you should think of it as a non-dividend-paying stock. Regarding the more general question of valuation, if you want to value a company based on how much they are currently distributing to shareholders you need to consider not only dividends but also share buybacks. Buybacks are effectively just a more tax-efficient form of paying dividends. I am not sure what the total numbers are for 2024, but in August for instance NVDA announced a $50 billion buyback. And of course, the proper measure is not current distribution, but total expected discounted distributions over all time. That's hard to estimate, but for a company experiencing explosive growth it is surely higher than current distributions.

That is very interesting! That does sound weird.

In some papers people write density operators using an enhanced "double ket" Dirac notation, where e.g. a density operator is written to look like |x>>, with two ">"'s. They do this exactly because the differential equations look more elegant.

I think in this notation measurements look like  <<m|, but am not sure about that. The QuTiP software (which is very common in quantum modelling) uses something like this under-the-hood, where operators (eg density operators) are stored internally using 1d vectors, and the super-operators (maps from... (read more)
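That under-the-hood flattening can be sketched in a few lines of numpy. The identity (assuming row-major flattening, which is numpy's default) is vec(AρB) = (A ⊗ Bᵀ) vec(ρ):

```python
import numpy as np

# Flattening rho row-major, the sandwich A @ rho @ B becomes a single
# superoperator matrix kron(A, B.T) acting on the flattened vector.
rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
rho = rng.standard_normal((d, d))

lhs = (A @ rho @ B).reshape(-1)  # sandwich first, then flatten
S = np.kron(A, B.T)              # the superoperator, a d^2 x d^2 matrix
rhs = S @ rho.reshape(-1)        # flatten first, then one matrix-vector product
```

So operators on states become vectors, and superoperators become ordinary matrices acting on them.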

Yes, in your example a recipient who doesn't know the seed models the light as unpolarised, and one who does models it as, say, H-polarised in a given run. But for everyone who doesn't see the random seed it's the same density matrix.

Let's replace that first machine with a similar one that produces a polarisation-entangled photon pair, |HH> + |VV> (ignoring normalisation). If you have one of those photons it looks unpolarised (essentially your "ignorance of the random seed" can be thought of as your ignorance of the polarisation of the other photon).

If someone el... (read more)
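A quick numpy check that one photon of |HH> + |VV> really does look unpolarised (maximally mixed) on its own:

```python
import numpy as np

# |HH> + |VV> (normalised), in the basis {HH, HV, VH, VV}
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial trace over the second photon: reshape to (2,2,2,2) and
# sum over the second subsystem's matched indices.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
# rho_A comes out to I/2: one photon alone carries no polarisation information.
```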

 What is the Bayesian argument, if one exists, for why quantum dynamics breaks the “probability is in the mind” philosophy?

 

In my world-view the argument is based on Bell inequalities. Other answers mention them, I will try and give more of an introduction.

First, context. We can reason inside a theory, and we can reason about a theory. The two are completely different and give different intuitions. Anyone talking about "but the complex amplitudes exist" or "we are in one Everett branch" is reasoning inside the theory. The theory, as given in the ... (read more)
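As a small numerical illustration of the kind of Bell-inequality calculation involved, here is the standard CHSH setup for a maximally entangled pair (a sketch; the angles are the standard choice that maximises the quantum value):

```python
import numpy as np

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def A(theta):
    # Spin measurement at angle theta in the X-Z plane (eigenvalues +-1).
    return np.cos(theta) * Z + np.sin(theta) * X

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def E(a, b):
    # Correlation <A(a) (x) A(b)> in the entangled state.
    return phi @ np.kron(A(a), A(b)) @ phi

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
# S comes out to 2*sqrt(2) ~ 2.83, above the classical (local hidden
# variable) bound of 2 -- the inputs really do matter.
```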

1Maxwell Peterson
Ahh. The correlations being dependent on inputs, but things appearing random to Alice and Bob, does seem trickier than whatever I was imagining was meant by quantum randomness/uncertainty. Don't fully have my head around it yet, but this difference seems important. Thanks!

Just the greentext. Yes, I totally agree that the study probably never happened. I just engaged with the actual underlying hypothesis, and to do so felt like some summary of the study helped. But I phrased it badly and it seems like I am claiming the study actually happened. I will edit.

 I thought they were typically wavefunction to wavefunction maps, and they need some sort of sandwiching to apply to density matrices?

Yes, this is correct. My mistake, it does indeed need the sandwiching, like this: AρA†.

From your talk on tensors, I am sure it will not surprise you at all to know that the sandwich thing itself (mapping from operators to operators) is often called a superoperator.

I think the reason it is the way it is is that there isn't a clear line between operators that modify the state and those that represent measurements... (read more)

2tailcalled
Oh it does surprise me, superoperators are a physics term but I just know linear algebra and dabble in physics, so I didn't know that one. Like I'd think of it as the functor over vector spaces that maps V↦V⊗V. Hm, I guess it's true that we'd usually think of the matrix exponential as mapping V⊸V to V⊸V, rather than as mapping V⊗V⊸C to V⊸V. I guess it's easy enough to set up a differential equation for the latter, but it's much less elegant than the usual form.

The way it works normally is that you have a state ρ, and it's acted on by some operator, a, which you can write as aρ. But this doesn't give a number, it gives a new state like the old ρ but different. (For example if a was the annihilation operator the new state is like the old state but with one fewer photon). This is how (for example) an operator acts on the state of the system to change that state. (It's a density matrix to density matrix map).

In terms of dimensions, this is: (1,1) = (1,1) * (1,1)

(Two square matrices of size... (read more)
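For concreteness, here is the annihilation operator in a truncated Fock basis acting on a state vector (a sketch; N is just an assumed truncation size):

```python
import numpy as np

N = 5  # Fock-space truncation
# Annihilation operator: a|n> = sqrt(n) |n-1>, so sqrt(1..N-1) sits
# on the superdiagonal.
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)

ket2 = np.zeros(N)
ket2[2] = 1.0      # Fock state |2>
out = a @ ket2     # = sqrt(2) |1>: same state, one fewer photon
```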

2tailcalled
Yes, applying a (0, 2) tensor to a (2, 0) tensor is like taking the trace of their composition if they were both regarded as linear maps. Anyway for operators that are supposed to modify a state, like annihilation/creation or time-evolution, I would be inclined to model it as linear maps/(1, 1)-tensors like in the OP. It was specifically for observables that I meant it seemed most natural to use (0, 2) tensors. I thought they were typically wavefunction to wavefunction maps, and they need some sort of sandwiching to apply to density matrices?

You are completely correct in the "how does the machine work inside?" question. As you point out, that density matrix has the exact form of something that is entangled with something else.

I think it's very important to be discussing what is real, although, as we always have a nonzero inferential distance between ourselves and the real, the discussion has to be a little bit caveated and pragmatic.

I think the reason is that in quantum physics we also have operators representing processes (like the Hamiltonian operator making the system evolve with time, or the position operator that "measures" position, or the creation operator that adds a photon), and the density matrix has exactly the same mathematical form as these other operators (apart from the fact the density matrix needs to be normalized). 

But that doesn't really solve the mystery fully, because they could all just be called "matrices" or "tensors" instead of "operators". (Maybe it gets... (read more)

6tailcalled
I feel like for observables it's more intuitive for them to be (0, 2) tensors (bilinear forms) whereas for density matrices it's more intuitive for them to be (2, 0) tensors. But maybe I'm missing something about the math that makes this problematic, since I haven't done many quantum calculations.

There are some non-obvious issues with saying "the wavefunction really exists, but the density matrix is only a representation of our own ignorance". It's a perfectly defensible viewpoint, but I think it is interesting to look at some of its potential problems:

  1. A process or machine prepares either |0> or |1> at random, each with 50% probability. Another machine prepares either |+> or |-> based on a coin flip, where |+> = (|0> + |1>)/root2, and |-> = (|0> - |1>)/root2. In your ontology these are actually different machines
... (read more)
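Point 1 is easy to verify numerically — both mixtures give the same density matrix, I/2:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def mixture(states, probs):
    # Classical mixture of pure states, as a density matrix.
    return sum(p * np.outer(s, s) for p, s in zip(probs, states))

rho_machine1 = mixture([ket0, ket1], [0.5, 0.5])   # |0> or |1> at random
rho_machine2 = mixture([plus, minus], [0.5, 0.5])  # |+> or |-> at random
# Both equal I/2, so no measurement statistics can tell the machines apart.
```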
2Steven Byrnes
I like “different machines that produce different states”. I would bring up an example where we replace the coin by a pseudorandom number generator with seed 93762. If the recipient of the photons happens to know that the seed is 93762, then she can put every photon into state |0> with no losses. If the recipient of the photons does not know that the random seed is 93762, then she has to treat the photons as unpolarized light, which cannot be polarized without 50% loss. So for this machine, there’s no getting away from saying things like: “There’s a fact of the matter about what the state of each output photon is. And for any particular experiment, that fact-of-the-matter might or might not be known and acted upon. And if it isn’t known and acted upon, then we should start talking about probabilistic ensembles, and we may well want to use density matrices to make those calculations easier.” I think it’s weird and unhelpful to say that the nature of the machine itself is dependent on who is measuring its output photons much later on, and how, right?
8Charlie Steiner
I wonder if this can be resolved by treating the randomness of the machines quantum mechanically, rather than having this semi-classical picture where you start with some randomness handed down from God. Suppose these machines use quantum mechanics to do the randomization in the simplest possible way - they have a hidden particle in state |left>+|right>  (pretend I normalize), they mechanically measure it (which from the outside will look like getting entangled with it) and if it's on the left they emit their first option (|0> or |+> depending on the machine) and vice versa. So one system, seen from the outside, goes into the state |L,0>+|R,1>, the other one into the state |L,0>+|R,0>+|L,1>-|R,1>. These have different density matrices. The way you get down to identical density matrices is to say you can't get the hidden information (it's been shot into outer space or something). And then when you assume that and trace out the hidden particle, you get the same representation no matter your philosophical opinion on whether to think of the un-traced state as a bare state or as a density matrix. If on the other hand you had some chance of eventually finding the hidden particle, you'd apply common sense and keep the states or density matrices different. Anyhow, yeah, broadly agree. Like I said, there's a practical use for saying what's "real" when you want to predict future physics. But you don't always have to be doing that.

I just looked up the breakfast hypothetical. It's interesting, thanks for sharing it.

So, my understanding is that (supposedly) someone asked a lot of prisoners "How would you feel if you hadn't had breakfast this morning?", did IQ tests on the same prisoners, and found that the ones who answered "I did have breakfast this morning." or equivalent were on average very low in IQ. (Let's just assume for the purposes of discussion that this did happen as advertised.)

It is interesting. I think in conversation people very often hear the question they were expecting, and ... (read more)

1Lorec
If we're discussing the object-level story of "the breakfast question", I highly doubt that the results claimed here actually occurred as described, due [as the 4chan user claims] to deficits in prisoner intelligence, and that "it's possible [these people] lack the language skills to clearly communicate about [counterfactuals]". Did you find an actual study, or other corroborating evidence of some kind, or just the greentext?

The question of "why should the observed frequencies of events be proportional to the square amplitudes" is actually one of the places where many people perceive something fishy or weird with many worlds. [https://www.sciencedirect.com/science/article/pii/S1355219809000306 ]

To clarify, it's not a question of possibly rejecting the square-amplitude Born Rule while keeping many worlds. It's a question of whether the square-amplitude Born Rule makes sense within the many worlds perspective, and if it doesn't, what should be modified about the many worlds perspective to make it make sense.

4avturchin
It looks like even Everett had his own derivation of the Born rule from his model, but in his model there is no "many worlds", just evolution of the unitary function. As I remember, he analyzed the memories of an agent - so he analyzed past probabilities, but not future probabilities. This is an interesting fact in the context of this post, where the claim is about the strangeness of future probabilities. But even if we exclude MWI, a pure classical inflationary Big World remains, with multiple copies of me distributed similarly to MWI branches. This allows something analogous to quantum immortality to exist even without MWI.

I agree with this. It's something about the guilt that makes this work. Also the sense that you went into it yourself somehow reshapes the perception.

I think the loan shark business model maybe follows the same logic. [If you are going to eventually get into a situation where the victim pays or else suffers violence, then why doesn't the perpetrator just skip the costly loan step at the beginning and go in threat-first? I assume that the existence of loan sharks (rather than just blackmailers) proves something about how, if people feel like they made a bad choice or engaged willingly at some point, they are more susceptible. Or maybe it's frog boiling.]

On the "what did we start getting right in the 1980's for reducing global poverty" I think most of the answer was a change in direction of China. In the late 70's they started reforming their economy (added more capitalism, less command economy): https://en.wikipedia.org/wiki/Chinese_economic_reform.

Comparing this graph on wiki https://en.wikipedia.org/wiki/Poverty_in_China#/media/File:Poverty_in_China.svg , to yours, it looks like China accounts for practically all of the drop in poverty since the 1980s.

Arguably this is a good example for your other point... (read more)

I don't think the framing "Is behaviour X exploitation?" is the right framing. It takes what (should be) an argument about morality and instead turns it into an argument about the definition of the word "exploitation" (where we take it as given that, whatever the hell we decide exploitation "actually means", it is a bad thing). For example see this post: https://www.lesswrong.com/posts/yCWPkLi8wJvewPbEp/the-noncentral-fallacy-the-worst-argument-in-the-world. Once we have a definition of "exploitation" there might be some weird edge cases that are technicall... (read more)

4Darmani
I made the grave error of framing this post in a way that invites a definition debate. While we are well familiar that a definition debate is similar to a debate over natural categories, which is a perfectly fine discussion to have, the discussion here has suffered because several people came in with competing categories. I strongly endorse Ben's post, and will edit the top post to incorporate it.

The teapot comparison seems to me to be a bad one. I got carried away and wrote a wall of text. Feel free to ignore it!

First, let's think about normal probabilities in everyday life. Sometimes there are more ways for one state to come about than another state; for example, if I shuffle a deck of cards the number of orderings that look random is much larger than the number of ways (1) of the cards being exactly in order.

However, this manner of thinking only applies to certain kinds of thing - those that are in-principle distinguishable. If you have a deck of bl... (read more)

1Lorec
First of all, thank you for taking the time to write this [I genuinely wish people sent me "walls of text" about actually interesting topics like this all day]. I need to spend more time thinking about what you've written here and clearly describing my thoughts, and plan to do this over the next few days now that I've gotten a few critical things [ eg this ] in order. But for now: I'm pretty sure you are missing something critical that could make the thing you are trying to think about seem easier. This thing I think you are missing has to do with the cleaving of the "centered worlds" concept itself. Different person, different world. Maybe trees still fall and make sounds without anyone around to hear, physically. But in subjective anthropics, when we're ranking people by prevalence - Caroline and Prime Intellect don't perceive the same sound, and they won't run similar-looking logging operations. Am I making sense?

I found this post to be a really interesting discussion of why organisms that sexually reproduce have been successful and how the whole thing emerges. I found the writing style, where it switched rapidly between relatively serious biology and silly jokes, very engaging.

Many of the sub claims seem to be well referenced (I particularly liked the swordless ancestor to the swordfish liking mates who had had artificial swords attached).

Answer by Ben20

"Stock prices represent the market's best guess at a stock's future price."

But they are not the same as the market's best guess at its future price. If you have a raffle ticket that will, 100% for definite, win $100 when the raffle happens in 10 years' time, then the market's best guess of its future price is $100, but nobody is going to buy it for $100, because $100 now is better than $100 in 10 years.

Whatever it is that people think the stock will be worth in the future, they will pay less than that for it now. (Because $100 in the future isn't as good as ... (read more)
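The discounting point can be sketched with a toy calculation (the 5% rate and the helper name `present_value` are my own illustrative choices, not from the comment):

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Discount a guaranteed future payout back to today at a fixed annual rate."""
    return future_value / (1 + rate) ** years

# $100 arriving in 10 years, discounted at 5% per year, is worth about $61
# today -- which is why nobody pays $100 now for the raffle ticket.
print(round(present_value(100, 0.05, 10), 2))
```

The gap between the $100 future value and the ~$61 price today is exactly the "they will pay less than that for it now" in the comment.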

3tryactions
You can replace "best guess at a stock's future price" with "best guess at a stock's future price, time-discounted using a risk-free rate" and the essential question still remains.  This is Wikipedia's framing of the equity premium puzzle.

Other Ant-worriers are out there!

""it turned out this way, so I guess it had to be this way" doesn't resolve my confusion"

Sorry, I mixed the position I hold (that they maybe work like bosons) and the position I was trying to argue for, which was an argument in favor of confusion.

I can't prove (or even strongly motivate) my "the imaginary mind-swap procedure works like a swap of indistinguishable bosons" assumption, but, as far as I know no one arguing for Anthropic arguments can prove (or strongly motivate) the inverse position - which is essential for man... (read more)

3Lorec
And I can't prove there isn't a teapot circling Mars. It is a very strange default or prior that two things that look distinct would act like numerically or logically indistinguishable entities.

I suspect there is a large variation between countries in how safely taxi drivers drive relative to others.

In London my impression is that the taxis are driven more safely than non-taxis. In Singapore it appears obvious to casual observation that taxis are much less safely driven than most of the cars.

At least in my view, all the questions like the "Doomsday argument" and "why am I early in cosmological history?" are putting far, far too much weight on the anthropic component.

If I don't know how many X's there are, and I learn that one of them is numbered 20 billion, then sure, my best guess is that there are 40 billion total. But it's a very hazy guess.

If I don't know how many X's will be produced next year, but I know 150 million were produced this year, my best guess is 150 million next year. But it's a very hazy guess.

If I know that the population of X's... (read more)
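The haziness of that "double the number you saw" guess can be seen in a toy simulation (the true total `N` and the estimator `2k` are my own illustrative choices):

```python
import random

random.seed(0)

N = 1000            # the true total number of X's (unknown to the guesser)
TRIALS = 100_000

# If my index k is uniform on 1..N, the "I'm probably in the middle" guess
# for the total is 2k. On average this is about right, but any single
# guess is very hazy.
estimates = [2 * random.randint(1, N) for _ in range(TRIALS)]
mean_est = sum(estimates) / TRIALS
typical_error = sum(abs(e - N) for e in estimates) / TRIALS

print(mean_est)       # close to N on average
print(typical_error)  # but a typical single guess misses by about N/2
```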

4Lorec
Welcome to the Club of Wise Children Who Were Anthropically Worried About The Ants. I thought it was just me. Just saying "it turned out this way, so I guess it had to be this way" doesn't resolve my confusion, in physical or anthropic domains. The boson thing is applicable [not just as a heuristic but as a logical deduction] because in the Standard Model, we consider ourselves to know literally everything relevant there is to know about the internal structures of the two bosons. About the internal structures of minds, and their anthropically-relevant differences, we know far less. Maybe we don't have to call it "randomness", but there is an ignorance there. We don't have a Standard Model of minds that predicts our subjectively having long continuous experiences, rather than just being Boltzmann brains.

I remember reading something about the Great Leap Forward in China (it may have been the Cultural Revolution, but I think it was the Great Leap Forward) where some communist party official recognised that the policy had killed a lot of people and ruined the lives of nearly an entire generation, but they argued it was still a net good because it would enrich future generations of people in China.

For individuals, you weigh up the risks and rewards of deferring your resources for the future. But, as a society, asking individuals to give up a lot of potential utility for unborn future generations is a harder sell. It requires coercion.

I think we might be talking past each other. I will try and clarify what I meant.

Firstly, I fully agree with you that standard game theory should give you access to randomization mechanisms. I was just saying that I think that hypotheticals where you are judged on the process you use to decide, and not on your final decision are a bad way of working out which processes are good, because the hypothetical can just declare any process to be the one it rewards by fiat.

Related to the randomization mechanisms, in the kinds of problems people worry about with pre... (read more)

I see where you are coming from. But, I think the reason we are interested in CDT (for any DT) in the first place is because we want to know which one works best. However, if we allow the outcomes to be judged not just on the decision we make, but also on the process used to reach that decision then I don't think we can learn anything useful.

Or, to put it from a different angle: IF the process P is used to reach decision X, but my "score" depends not just on X but also on P, then that can be mapped to a different problem where my decision is "P and X", and I u... (read more)

3Daniel Kokotajlo
I don't think I understand this yet, or maybe I don't see how it's a strong enough reason to reject my claims, e.g. my claim "If standard game theory has nothing to say about what to do in situations where you don't have access to an unpredictable randomization mechanism, so much the worse for standard game theory, I say!"

I like this framing.

An alternative framing, which I think is also part of the answer, is that some art is supposed to hit a very large audience and give each member a small amount of utility, and other art is supposed to hit a smaller, more specialized audience very hard. This framing explains things like traditional daytime TV: stuff that no one really loves but that a large number of bored people find kind of unobjectionable, and how that is different from the more specialist TV you might actually look forward to an episode of but that might hit a smaller audience.

(Obviously some things can hit a big audience and be good, and others can be bad on both counts. But the interesting quadrants to compare are the other two.)

Random thoughts. You can relatively simply get a global phase factor at each timestep if you want. I don't think a global phase factor at each step really counts as meaningfully different, though. Anyway, as an example of this:

So that, at each (classical) timestep, every single element of the CA tape just moves one step to the right. (So any patterns of 1's and 0's just orbit the tape in circles forever, unchanging.) It's quite a boring CA, but a simple example.

We can take the quantum CA that is exactly the same, but with some complex pha... (read more)
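A minimal sketch of that shift CA as a unitary, assuming a 3-cell cyclic tape (the encoding is my own, not from the comment): the classical update is a permutation of basis states, hence a unitary matrix, and multiplying by a global phase each step leaves it unitary with identical measurement statistics.

```python
import numpy as np

n = 3               # tape cells; 2**n basis states
dim = 2 ** n

def shift_right(x: int) -> int:
    """Cyclically shift the n-bit string encoded by integer x one cell right."""
    return (x >> 1) | ((x & 1) << (n - 1))

# The classical shift CA as a permutation (hence unitary) matrix.
U = np.zeros((dim, dim), dtype=complex)
for x in range(dim):
    U[shift_right(x), x] = 1.0

# The "quantum" variant: the same matrix times a global phase each step.
theta = 0.7
U_phase = np.exp(1j * theta) * U

assert np.allclose(U.conj().T @ U, np.eye(dim))               # unitary
assert np.allclose(U_phase.conj().T @ U_phase, np.eye(dim))   # still unitary
assert np.allclose(np.abs(U_phase), np.abs(U))                # same amplitudes
```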

After finding a unitary that comes from one of your classical Cellular Automata, any power of that unitary will also be a valid unitary. So for example, in classical logic there is the "swap" gate for binary inputs, but in quantum computing the "square-root swap" gate also exists.

So you can get one of your existing unitary matrices, and (for example) take its square root. That would kind of be like a quantum system doing the classical Cellular Automata, that is interrupted halfway through the first step. (Because applying the root matrix twice i... (read more)
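A sketch of the square-root-swap idea, using the known closed form for the root of SWAP rather than a numerical matrix square root:

```python
import numpy as np

# Classical SWAP on two bits, as a 4x4 unitary.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Known closed form for the square-root-swap gate: identity outside the
# swapped subspace, complex mixing of the middle two basis states.
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]], dtype=complex)

assert np.allclose(SQRT_SWAP @ SQRT_SWAP, SWAP)                # two half-steps = one step
assert np.allclose(SQRT_SWAP.conj().T @ SQRT_SWAP, np.eye(4))  # unitary
```

Applying `SQRT_SWAP` once is exactly the "classical CA interrupted halfway through the first step" picture: a valid quantum state that is not a classical configuration, but that lands back on one after the second application.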

2Optimization Process
This is a good point! I'll send you $20 if you send me your PayPal/Venmo/ETH/??? handle. (In my flailings, I'd stumbled upon this "fractional step" business, but I don't think I thought about it as hard as it deserved.) Nyeeeh, unfortunately, sort of "I know it when I see it." It's kinda neat being able to take a fractional step of a classical elementary CA, but I'm dissatisfied because... ah, because the long-run behavior of the fractional-step operator is basically indistinguishable from the long-run behavior of the corresponding classical CA. So, tentative operationalization of "basically equivalent": U is "basically equivalent" to a classical elementary CA if the long-run behavior of U is very close to the long-run behavior of some U_cl, i.e., uh,  ∀n∈ℕ, ε∈ℝ⁺: ∃N>n, M∈ℕ, U_cl: ||U^N − U_cl^M|| < ε ...but I can already think of at least one flaw in this operationalization, so, uh, I'm not sure. (Sorry! This being so fuzzy in my head is why I'm asking for help!)

I think the limitations to radius set by material strength only apply directly to a cylinder spinning by itself without an outer support structure. For example, I think a rotating cylinder habitat surrounded by giant ball bearings connecting it to a non-rotating outer shell can use that outer shell as a foundation, so each part of the cylinder that is "suspended" between two adjacent ball bearings is like a suspension bridge of that length, rather than the whole thing being like a suspension bridge of length equal to the total cylinder diameter. Obviously ... (read more)

-4bhauth
As a "physicist and dabbler in writing fantasy/science fiction" I assume you took the 10 seconds to do the calculation and found that a 1km radius cylinder would have ~100 kW of losses per person from roller bearings supporting it, for the mass per person of the ISS. But I guess I don't understand how you expect to generate that power or dissipate that heat.

If this were the setup, I would bet on "hard man" fitness people swearing that running with the spin, to run in a little more than earth-normal gravity, was great for building strength and endurance, and some doctor somewhere would be warning people that the fad may not be good for their long-term health.

Yes, it's a bit weird. I was replying because I thought (perhaps getting the wrong end of the stick) that you were confused about what the question was, not (as it seems now) pointing out that the question (in your view) is open to being confused.

In probability theory the phrase "given that" is very important, and it is (as far as I know) always used in the way it is used here. ["Given that X happens" means "X may or may not happen, but we are thinking about the cases where it does", which is very different from meaning "X always happens".]

A more common use woul... (read more)

1darrenreynolds
I'm not sure about the off-topic rules here, but how about this: Why are some of the drinks so expensive, given that all of them are mostly water? Sometimes we use the phrase "given that" to mean, "considering that". Here, we do not mean, some of the drinks are not mostly water but we are not talking about them. We mean that literally all the drinks are mostly water.

That Iran thing is weird.

If I were guessing I might say that maybe this is happening:

Right now, the more trade China has with Iran, the more America might make a fuss: complaining politically, imposing tariffs, or calling in general favours and goodwill to make it stop. But if America starts making a fuss anyway, or burns all its goodwill, then there is suddenly no downside to trading with Iran. Now substitute "China" for any and all countries (for example the UK, France and Germany, who all stayed in the Iran Nuclear Deal even after the USA pulled out).


"Given that all rolls were even" here means "roll a normal 6-sided dice, but throw out all of the sequences that included odd numbers." The two are not the same, because in the case where odd numbers can be rolled but "kill" the sequence, long sequences of rolls are much less likely to be included in the dataset at all.

As other comments explain, this is why the paradox emerges. By stealth, the question is actually "A: how long do I have to wait for two 6s in a row, vs B: getting two 6s, not necessarily in a row, given that I am post-selecting in a way that very strongly favors short sequences of rolls".
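A quick Monte Carlo sketch of the two readings (the seed and trial count are arbitrary choices of mine):

```python
import random

random.seed(1)
TRIALS = 50_000

def rolls_until_two_sixes_in_a_row() -> int:
    count, prev = 0, 0
    while True:
        count += 1
        r = random.randint(1, 6)
        if r == 6 and prev == 6:
            return count
        prev = r

def rolls_until_second_six_given_all_even() -> int:
    # Rejection sampling: an odd roll "kills" the whole sequence,
    # which strongly favors short surviving sequences.
    while True:
        count, sixes = 0, 0
        while True:
            count += 1
            r = random.randint(1, 6)
            if r % 2 == 1:
                break          # discard this sequence and start over
            if r == 6:
                sixes += 1
                if sixes == 2:
                    return count

a = sum(rolls_until_two_sixes_in_a_row() for _ in range(TRIALS)) / TRIALS
b = sum(rolls_until_second_six_given_all_even() for _ in range(TRIALS)) / TRIALS

print(a)  # near the textbook 42 for two 6s in a row
print(b)  # far smaller, because the conditioning post-selects short sequences
```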

2darrenreynolds
Yes, exactly - thank you. It depends on the interpretation of the phrase "given that all rolls were even". Most ordinary people will assume it means that all the rolls were even, but as you have succinctly explained, that is not what it means in the specialist language of mathematics. It is only when you apply the latter interpretation, that some of the rolls are odd but we throw those out afterwards, that the result becomes at first surprising.  I do find LessWrong a curious place and am not a regular here. You can post something and it will get downvoted as wrong, then someone else comes along and says exactly the same thing and it's marked as correct. Heh.