Ahhh! Yes, this is very helpful! Thanks for the explanation.
Question: if I'm considering an isolated system (~= "the entire universe"), you say that I can swap between state-vector-format and matrix-format via rho = |Psi><Psi|. But later, you say...
If our system is uncoupled to its environment (e.g. we are studying a carefully vacuum-isolated system), then we still have to replace the old state vector picture by a (possibly rank > 1) density matrix ...
But if rho = |Psi><Psi|, how could it ever be rank>1?
(Perhaps more generally: what does it mean when a state is represented as a ran...
The usual story about where rank > 1 density matrices come from is when your subsystem is entangled with an environment that you can't observe.
The simplest example is to take a Bell state, say
|00> + |11> (obviously I'm ignoring normalization) and imagine you only have access to the first qubit; how should you represent this state? Precisely because it's entangled, we know that there is no |Psi> in 1-qubit space that will work. The trace method alluded to in the post is to form the (rank-1) density matrix of the Bell state, and...
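For concreteness, here's a small numerical version of that story (my own sketch in numpy, not from the parent comment): form the rank-1 density matrix of the full Bell state, trace out the second qubit, and the reduced state you're left with is rank 2.

```python
import numpy as np

# Bell state |00> + |11>, normalized.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Rank-1 density matrix of the full two-qubit state.
rho = np.outer(psi, psi.conj())

# Partial trace over the second qubit: reshape to (2,2,2,2) and sum the
# matching environment indices, leaving the reduced state of qubit 1.
rho_1 = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(rho_1)                          # 0.5 * identity: the maximally mixed state
print(np.linalg.matrix_rank(rho_1))   # 2 -- rank > 1, with no 1-qubit |Psi> that reproduces it
```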
That is... a very interesting and attractive way of looking at it. I'll chew on your longer post and respond there!
I have an Anki deck in which I've half-heartedly accumulated important quantities. Here are mine! (I keep them all as log10(value in kilogram/meter/second/dollar/whatever seems natural), to make multiplication easy.)
Quantity | Value |
---|---|
Electron mass | -30 |
Electron charge | -18.8 |
Gravitational constant | -10.2 |
Reduced Planck constant | -34 |
Black body radiation peak wavelength | -2.5 |
Mass of the earth | 24.8 |
Moon-Earth distance | 8.6 |
Earth-sun distance | 11.2 |
log10( 1 ) | 0 |
log10( 2 ) | 0.3 |
log10( 3 ) | 0.5 |
log10( 4 ) | 0.6 |
log10( 5 ) | 0.7 |
log10( 6 ) | 0.8 |
log10( 7 ) | 0.85 |
log10( 8 ) | 0.9 |
log10( 9 ) | 0.95 |
Boltzmann constant | -22.9 |
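As a worked example of the log10 trick (my own illustration, using only rounded entries from the table): the Earth's pull on a 1 kg mass at the Moon's distance.

```python
import math

# F = G * M_earth * (1 kg) / d^2, done entirely in log10.
log_G, log_M_earth, log_d = -10.2, 24.8, 8.6

log_F = log_G + log_M_earth - 2 * log_d   # multiplication/division become +/-
print(log_F)                              # -2.6, i.e. F ~ 2.5e-3 N

# Sanity check against the exact values:
print(math.log10(6.674e-11 * 5.97e24 / 3.84e8**2))  # ~ -2.57
```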
I thank you for your effort! I am currently missing a lot of the mathematical background necessary to make that post make sense, but I will revisit it if I find myself with the motivation to learn!
This is a good point! I'll send you $20 if you send me your PayPal/Venmo/ETH/??? handle. (In my flailings, I'd stumbled upon this "fractional step" business, but I don't think I thought about it as hard as it deserved.)
How are you defining "basically equivalent"
Nyeeeh, unfortunately, sort of "I know it when I see it." It's kinda neat being able to take a fractional step of a classical elementary CA, but I'm dissatisfied because... ah, because the long-run behavior of the fractional-step operator is basically indistinguishable from the long-run behavior of ...
I was imagining the tape wraps around! (And hoping that whatever results fell out would port straightforwardly to infinite tapes.)
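In case it helps make "fractional step" concrete, here's one possible reading for a wrapped tape (my own sketch; the construction in the post may well differ): lift the global update of a reversible elementary CA to a permutation matrix on its configurations and take a matrix square root, so that two half-steps compose to one full step.

```python
import numpy as np
from itertools import product

n = 3                                   # wrapped tape of 3 cells
configs = list(product([0, 1], repeat=n))
index = {c: i for i, c in enumerate(configs)}

def step(config):
    # Rule 170: each cell copies its right neighbor (a cyclic shift), which is
    # reversible, so the global update is a permutation of configurations.
    return tuple(config[(i + 1) % n] for i in range(n))

U = np.zeros((2**n, 2**n))
for c in configs:
    U[index[step(c)], index[c]] = 1.0   # permutation (hence unitary) matrix

# Half-step: diagonalize and take the principal square root of each eigenvalue.
w, V = np.linalg.eig(U)
U_half = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)

print(np.allclose(U_half @ U_half, U))                        # True: two half-steps = one step
print(np.allclose(U_half.conj().T @ U_half, np.eye(2**n)))    # still unitary
```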
I've never been familiar enough with group-theory stuff to memorize the names (which, warning, also might mean that it will take you a lot of time to write a sufficiently-dumbed-down version), but the internet suggests that group is related to... the Minkowski metric? I would be flabbergasted to learn that something so specific-to-our-universe was relevant to this toy mathematical contraption.
I think compared to the literature you're using an overly restrictive and nonstandard definition of quantum cellular automata.
That makes sense! I'm searching for the simplest cellular-automaton-like thing that's still interesting to study. I may have gone too far in the "simple" direction; but I'd like to understand why this highly-restricted subset of QCAs is too simple.
Specifically, it only makes sense to me to write the update as a product of operators like you have if all of the terms are on spatially disjoint regions.
Hmm! That's not obvious to me; if there...
Things have coalesced near the amphitheater. When the music kicks off again, we'll go northeast to... approximately here. 47.6309473, -122.3165802 JMJM+99F Seattle, Washington
Announcement 1: I, the organizer, will be 5-10min late. Announcement 2: apparently there's some music thing happening at the amphitheater! I'll set up somewhere northeast of the amphitheater when I get there, and post more precise coordinates when I have them.
$10 bounty for anybody coming / passing through Capitol Hill: pick up a blind would-be attendee outside the Zeek's Pizza by 19th and Mercer. DM me your contact information, and I'll put you in touch, and I'll pay you on your joint arrival.
Update: the library is unexpectedly closed due to staffing issues. The event is now at Fuel Coffee, one block south and across the street.
If the chance of rain is dissuading you: fear not, there's a newly constructed roof over the amphitheater!
Hey, folks! PSA: looks like there's a 50% chance of rain today. Plan A is for it to not rain; plan B is to meet in the rain.
See you soon, I hope!
You win both of the bounties I precommitted to!
Lovely! Yeah, that rhymes and scans well enough for me!
Here are my experiments; they're pretty good, but I don't count them as "reliably" scanning. So I think I'm gonna count this one as a win!
(I haven't tried testing my chess prediction yet, but here it is on ASCII-art mazes.)
I found this lens very interesting!
Upon reflection, though, I begin to be skeptical that "selection" is any different from "reward."
Consider the description of model-training:
...To motivate this, let's view the above process not from the vantage point of the overall training loop but from the perspective of the model itself. For the purposes of demonstration, let's assume the model is a conscious and coherent entity. From its perspective, the above process looks like:
- Waking up with no memories in an environment.
- Taking a bunch of actions.
- Suddenly falling unco
I was trying to say that the move used to justify the coin flip is the same move that is rejected in other contexts
Ah, that's the crucial bit I was missing! Thanks for spelling it out.
Reflectively stable agents are updateless. When they make an observation, they do not limit their caring as though all the possible worlds where their observation differs do not exist.
This is very surprising to me! Perhaps I misunderstand what you mean by "caring," but: an agent who's made one observation is utterly unable[1] to interact with the other possible-worlds where the observation differed; and it seems crazy[1] to choose your actions based on something they can't affect; and "not choosing my actions based on X" is how I would defi...
Yeah, if you have a good enough mental index to pick out the relevant stuff, I'd happily take up to 3 new bounty-candidate links, even though I've mostly closed submissions! No pressure, though!
Thanks for the links!
I paid a bounty for the Shard Theory link, but this particular comment... doesn't do it for me. It's not that I think it's ill-reasoned, but it doesn't trigger my "well-reasoned argument" sensor -- it's too... speculative? Something about it just misses me, in a way that I'm having trouble identifying. Sorry!
Yeah, I'll pay a bounty for that!
Thanks for the collection! I wouldn't be surprised if it links to something that tickles my sense of "high-status monkey presenting a cogent argument that AI progress is good," but didn't see any on a quick skim, and there are too many links to follow all of them; so, no bounty, sorry!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. His arguments are, roughly:
The relevant section seems to be 26:00-32:00. In that section, I, uh... well, I perceive him as just projecting "doomerism is bad" vibes, rather than making an argument containing falsifiable assertions and logical inferences. No bounty!
Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective "I subsequently think 'yeah, that seemed well-reasoned'" criterion.
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I'll pay triple.)
(Re-reading this, I notice that my "re...
No bounty, sorry! I've already read it quite recently. (In fact, my question linked it as an example of the sort of thing that would win a bounty. So you show good taste!)
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. If I had to point at parts that seemed unreasonable, I'd choose (a) the comparison of [X-risk from superintelligent AIs] to [X-risk from bacteria] (intelligent adversaries seem obviously vastly more worrisome to me!) and (b) "why would I... want ...
Hmm! Yeah, I guess this doesn't match the letter of the specification. I'm going to pay out anyway, though, because it matches the "high-status monkey" and "well-reasoned" criteria so well and it at least has the right vibes, which are, regrettably, kind of what I'm after.
Nice. I haven't read all of this yet, but I'll pay out based on the first 1.5 sections alone.
Approved! Will pay bounty.
Thanks for the link!
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, "yeah, that seemed reasonable": no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I'll post my reasoning publicly. These three passages jumped out at me as things that I don't think would ever be written by a person with a model of AI that I remotely agree with:
...Popper's argument implies that all thinking entities--human or not, biological or artificial--must
I am thinking of mazes as complicated as the top one here! And few-shot is perfectly okay.
(I'd be flabbergasted if it could solve an ascii-art maze "in one step" (i.e. I present the maze in a prompt, and GPT-4 just generates a stream of tokens that shows the path through the maze). I'd accept a program that iteratively runs GPT-4 on several prompts until it considers the maze "solved," as long as it was clear that the maze-solving logic lived in GPT-4 and not the wrapper program.)
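For concreteness, the shape of wrapper I'd accept looks something like this (a sketch only; `ask_model` and the "SOLVED:" convention are placeholder names, not a real API): the wrapper just relays text and watches for the model's own declaration, so all of the maze-solving has to happen inside GPT-4.

```python
from typing import Callable

def solve_maze(maze_ascii: str, ask_model: Callable[[str], str], max_turns: int = 20) -> str:
    # Relay-only wrapper: no maze logic here, just prompting and bookkeeping.
    transcript = f"Here is an ASCII maze:\n{maze_ascii}\nFind a path from S to E."
    for _ in range(max_turns):
        reply = ask_model(transcript)           # placeholder for the GPT-4 call
        transcript += "\n" + reply
        if "SOLVED:" in reply:                  # the model declares its own answer
            return reply.split("SOLVED:", 1)[1].strip()
        transcript += "\nKeep going."
    raise RuntimeError("model never declared the maze solved")
```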
Several unimpressive tasks, with my associated P(GPT-4 can't do it):
I'd be interested to hear thoughts on this argument for optimism that I've never seen anybody address: if we create a superintelligent AI (which will, by instrumental convergence, want to take over the world), it might rush, for fear of competition. If it waits a month, some other superintelligent AI might get developed and take over / destroy the world; so, unless there's a quick safe way for the AI to determine that it's not in a race, it might need to shoot from the hip, which might give its plans a significant chance of failure / getting caught?
Counter...
Log of my attempts so far:
Attempt #1: note that, for any probability p, you can compute the "number of predictions you made with probability less than p that came true". If you're perfectly calibrated, then this should be a random variable with:
mean = sum(q for q in prediction_probs if q<p)
variance = sum(q*(1-q) for q in prediction_probs if q<p)
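In code, that check for a single predictor might look like this (a sketch; `prediction_probs` and the simulated outcomes are my own stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
prediction_probs = rng.uniform(0, 1, size=500)               # stated probabilities
outcomes = rng.uniform(0, 1, size=500) < prediction_probs    # a perfectly calibrated predictor

for p in (0.25, 0.5, 0.75, 1.0):
    below = prediction_probs < p
    observed = outcomes[below].sum()
    mean = prediction_probs[below].sum()
    sd = np.sqrt((prediction_probs[below] * (1 - prediction_probs[below])).sum())
    print(f"p < {p}: observed {observed}, expected {mean:.1f} +/- {sd:.1f}")
```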
Let's see what this looks like if we plot it as a function of p. Let's consider three people:
Plot of global infant mortality rate versus time.
I donated for some nonzero X:
My attempted condensation, in case it helps future generations (or in case somebody wants to set me straight): here's my understanding of the "pay $0.50 to win $1.10 if you correctly guess the next flip of a coin that's weighted either 40% or 60% Heads" game:
You, a traditional Bayesian, say, "My priors are 50/50 on which bias the coin has. So, I'm playing this single-player 'game':
"I see that my highest-EV option is to play, betting on either H or T, doesn't matter."
Perry says, "I'm playing this zero-sum multi-player game, where my 'Knightian uncerta
I regret to report that I goofed the scheduling, and will be out of town, but @Orborde will be there to run the show! Sorry to miss you. Next time!
you say that IVF costs $12k and surrogacy costs $100k, but also that surrogacy is only $20k more than IVF? That doesn't add up to me.
Ah, yes, this threw me too! I think @weft is right that (a) I wasn't accounting for multiple cycles of IVF being necessary, and (b) medical expenses etc. are part of the $100k surrogacy figure.
sperm/egg donation are usually you getting paid to give those things
Thanks for revealing that I wrote this ambiguously! The figures in the book are for receiving donated eggs/sperm. (Get inseminated for $355, get an egg implanted in you for $10k.)
Ooh, you raise a good point, Caplan gives $12k as the per-cycle cost of IVF, which I failed to factor in. I will edit that in. Thank you for your data!
And you're right that medical expenses are part of the gap: the book says the "$100k" figure for surrogacy includes medical expenses (which you'd have to pay anyway) and "miscellaneous" (which... ???).
So, if we stick with the book's "$12k per cycle" figure, times an average of maybe 2 cycles, that gives $24k for IVF; add the claimed $20k premium and you'd expect surrogacy at ~$44k, which still leaves a $56k gap to be explained. Conceivably, medical expenses and "miscellaneous" could fill that gap? I'm sure you know better than I!
Everything in the OP matches my memory / my notes, within the level of noise I would expect from my memory / my notes.
That's a great point! My rough model is that I'll probably live 60 more years, and the last ~20 years will be ~50% degraded, so my 60 remaining life-years are only ~50 QALYs (40 full years + 20 years at half weight). But... as you point out, on the other hand, my time might be worth more in 10 years, because I'll have more metis, or something. Hmm.
(Another factor: if your model is that awesome life-extension tech / friendly AI will come before the end of your natural life, then dying young is a tragedy, since it means you'll miss the Rapture; in which case, 1 micromort should perhaps be feared many times more than this simple model suggests. I... haven't figured out how to feel about this small-probability-of-astronomical-payoff sort of argument.)
Oh, this is genius. I love this.