Of course the right thing to do is to pull the lever. And the right time to do that is once the trolley's front wheels have passed the switch but the rear wheels haven't yet. The trolley gets derailed, saving all 6 lives.
Aside from basic math (calculus, linear algebra, probability, ODEs, all with proofs), take courses in topics that feel interesting to you just by themselves. Don't count on the things you learn being actually useful in real life, and accordingly don't try to prioritize courses by that metric. You'll learn what you need for your job on your own, or be taught it on the job anyway; instead, spend this time building up an inventory of things to draw upon for useful metaphors. It's easier to learn what's intrinsically interesting, so you'll end up learning more. For real-world skills, do some academic research projects and industry internships.
This is the correct answer to the question. Bell and CHSH and the rest are remarkable but more complicated setups. This (entanglement in whichever basis you end up measuring your particle in, a basis not known at the time of state preparation) is what's salient about the simple 2-particle setup.
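For concreteness, here's the standard bit of algebra behind that claim (nothing beyond a textbook singlet state):

```latex
% The two-particle singlet state:
\[
|\psi^-\rangle = \frac{1}{\sqrt{2}}\left(|{\uparrow}{\downarrow}\rangle - |{\downarrow}{\uparrow}\rangle\right)
\]
% Rotating both measurement axes by the same U in SU(2) leaves it unchanged:
\[
(U \otimes U)\,|\psi^-\rangle = \det(U)\,|\psi^-\rangle = |\psi^-\rangle ,
\]
% so in *any* common basis it keeps the same anticorrelated form. Whatever axis you
% pick at measurement time, long after preparation, the two outcomes come out opposite.
```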
As I've argued previously, a natural selection process maps cleanly onto RL in the limit.
The URL is broken (points to edit page)
Regarding safer assets, when you put your money into a savings account (loan it to the bank), what is the bank to do with it? Presumably it has promised you interest. Or if you buy treasuries - someone must have sold them to you - what do they do now with all the cash? Just because you personally didn't put your money into stocks doesn't mean nobody else downstream from you did.
And because most securities aren't up for sale at any given time, a small fraction of market participants can have outsized effects on prices. Consider oil back in Apr...
Here's a (high) schools data point... https://twitter.com/EricTopol/status/1266976828549238785?s=19
Regarding HCQ, the recent large-N studies were observational, and it looks like patients there were given HCQ late and only if they were relatively sicker. Using it early on could still work (but now there won't be an RCT for that, thanks to numerous delendae).
Regarding schools, did the countries that reopened those already fare particularly worse?
I don't have that info regarding schools but also no one is systematically collecting data on anything and everything is confounded, including by control systems.
On HCQ, as I noted in the other comment on it, I'm mostly predicting/observing that the scientific community has decided it's going to reject HCQ, preventing it from becoming a consensus treatment. This is partly for 'good' reasons, partly for not-so-good reasons that have nothing to do with science, partly because they no longer know how or are not allowed to study thing...
A while back TinyCast seemed pretty friendly: https://tinycast.cultivateforecasts.com/questions/new
steer
badum-tsss
That's terrible news! It means that on top of the meager coronavirus there's another unidentified disease overcrowding the hospitals, causing respirator shortages all over the world, and threatening to kill millions of people!
> The idea of “flattening the curve” is the worst, as it assumes a large number of infections AND a large number of virus generation AND high selective pressure
Flattening _per se_ doesn't affect the evolution of the virus much. It doesn't evolve on a time grid, but rather on an event grid where an event is spreading from a person to another. As long as it spreads the same number of times it will have the same number of opportunities to evolve.
"Overreacting to underestimates" - great way of putting it!
Fewer waiting lines?
Congratulations!
If you're trying to be homo economicus and maximize your expected utility, probably it's not worth it. But if you're not, you can still do it! We did (blood and tissue).
I don't see how it would explain double descent on training time. This would imply that gradient descent on neural nets first has to memorize noise in one particular way, and then further training "fixes" the weights to memorize noise in a different way that generalizes better.
For example, the (random, meaningless) weights used to memorize noise can get spread across more degrees of freedom, so that on the test their sum will be closer to 0.
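Here's a toy sketch of that spreading effect (my own construction: a minimum-norm linear interpolator standing in for the trained network, and width rather than training time as the knob):

```python
# Toy illustration: fit pure-noise labels with the minimum-norm linear interpolator
# and watch how the "memorized noise" affects a fresh test point as the number of
# degrees of freedom grows.
import numpy as np

rng = np.random.default_rng(0)
n = 20                      # training points with pure-noise labels
y = rng.standard_normal(n)

for d in [25, 100, 1000, 10000]:          # overparameterized widths
    X = rng.standard_normal((n, d))       # random features
    w = np.linalg.pinv(X) @ y             # minimum-norm solution that memorizes y exactly
    x_test = rng.standard_normal((1000, d))
    preds = x_test @ w                    # effect of the memorized noise on unseen inputs
    print(f"d={d:6d}  train err={np.abs(X @ w - y).max():.2e}  "
          f"test |pred| RMS={np.sqrt((preds**2).mean()):.3f}")
```

The noise is memorized exactly at every width, but with more degrees of freedom its effect on test inputs shrinks toward 0.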
The 5nm in "5nm scale" no longer means "things are literally 5nm in size". Rather, it's become a fancy way of saying something like "200x the linear transistor density of an old 1-micron scale chip". The gates are still larger than 5nm, it's just that things are now getting put on their side to make more room ( https://en.wikipedia.org/wiki/FinFET ). Some chip measures sure are slowing down, but Moore's law (referring to the number of transistors per chip and nothing else) still isn't one of them despite claims of impending doom due to "quantum effects" originally dating back to (IIRC) the eighties.
I know some people who (at least used to) maintain a group pool of cash to fund the preservation of whoever died first (at which point the pool would need to be refilled). So if you're the unlucky first to die out of N people, you only pay 1/N of the full price, and if you're lucky (last to die) you eventually pay about ln(N) times the price, but at least you get more time to earn the money. Not sure how it was all structured legally. Of course if you're really pressed for time it may be hard to convince other people to join such an arrangement.
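To make the arithmetic concrete, here's a quick sketch under one refill rule I'm assuming (the pool always covers exactly one preservation and the current survivors split each refill equally; the real arrangement may well have differed):

```python
# Hypothetical group-pool arithmetic under the assumed rule above.
def total_paid(n_members: int, cost: float = 1.0) -> list[float]:
    """Total lifetime payment for the 1st, 2nd, ..., last member to die."""
    totals = [0.0] * n_members
    # Fill event 0 is the initial funding by everyone; fill event k > 0 is the
    # refill after the k-th death, covered by those still alive at that point.
    for event in range(n_members):
        alive = range(event, n_members)
        share = cost / len(alive)
        for m in alive:
            totals[m] += share
    return totals

print(total_paid(10))
# the first to die pays 1/10 of the cost; the last pays 1/10 + 1/9 + ... + 1 ≈ 2.93,
# a total that grows like ln(N) for a group of N.
```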
Fu...
There aren't that "many" other companies. Talk to KrioRus, I know they explored setting up a cryonics facility in Switzerland at some point.
I'm pretty sure (epistemic status: Good Judgment Project Superforecaster) the "AI" in the name is pure buzz and the underlying aggregation algorithm is something very simple. If you want to set up some quick group predictions for free, there's https://tinycast.cultivatelabs.com/ which has a transparent and battle-tested aggregation mechanism (LMSR prediction markets) and doesn't use catchy buzzwords to market itself. For other styles of aggregation there's "the original" Good Judgment Inc, a spinoff from GJP which ac...
The books are marketed as "hard" sci-fi but it seems all the "science" (at least in the first book, didn't read the others) is just mountains of mysticism constructed around statements that can sound "deep" on some superficial level but aren't at all mysterious, like "three-body systems interacting via central forces are generally unstable" or "you can encode some information into the quantum state of a particle" (yet of course they do contain nuance that's completely lost on the author, such...
(epistemic status: physicist, do simulations for a living)
> Our long-term thermodynamic model Pn is less accurate than a simulation
I think it would be fair to say that the Boltzmann distribution and your instantiation of the system contain not more/less but _different kinds of_ information.
Your simulation (assume infinite precision for simplicity) is just one instantiation of a trajectory of your system. There's nothing stochastic about it, it's merely an internally-consistent static set of configurations, connected to each other by deterministic e...
(the paper: https://journals.aps.org/pr/abstract/10.1103/PhysRev.106.620)
There's nothing magical about reversing particle velocities. For entropy to decrease back to its original value you would have to know, and be able to set, the velocities with perfect precision, which is of course meaningless in physics. If you get them even the tiniest bit wrong you might expect _some_ entropy decrease for a while, but inevitably the system will go "off track" (in classical chaos the time that takes is only logarithmic in your precision) and onto a different, entropy-increasing trajectory.
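A quick numerical illustration of that logarithmic scaling, using the chaotic logistic map as a stand-in for a real many-body system (the constants are map-specific; only the scaling is the point):

```python
# How long until a tiny perturbation grows to O(1) in a chaotic map?
# For the logistic map x -> 4x(1-x) the Lyapunov exponent is ln 2, so the
# "escape time" should grow only logarithmically as the initial error shrinks.
import math

def escape_time(eps: float, x0: float = 0.2, threshold: float = 0.1) -> int:
    x, y = x0, x0 + eps
    for step in range(1, 10000):
        x = 4 * x * (1 - x)
        y = 4 * y * (1 - y)
        if abs(x - y) > threshold:
            return step
    return -1

for eps in [1e-3, 1e-6, 1e-9, 1e-12]:
    print(f"initial error {eps:.0e}: diverges after {escape_time(eps)} steps "
          f"(log2(1/eps) ≈ {math.log2(1/eps):.0f})")
```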
Jaynes' 1957 paper has a nice formal explanation of entropy vs. velocity reversal.
> design the AI in such a way that it can create agents, but only
This sort of argument would be much more valuable if accompanied by a specific recipe of how to do it, or at least a proof that one must exist. Why worry about AI designing agents, why not just "design it in such a way" that it's already Friendly!
I agree, it did seem like one of the more-unfinished parts. Still, perhaps a better starting point than nothing at all?
Check the chapter on the A_p distribution in Jaynes' book.
> Losing a typical EA ... decreasing ~1000 utilons to ~3.5, so a ~28500% reduction per person lost.
You seem to be exaggerating a bit here: that's a 99.65% reduction. Hope it's the only inaccuracy in your estimates!
Here's another excellent book roughly from the same time: "The Phenomenon of Science" by Valentin F. Turchin (http://pespmc1.vub.ac.be/posbook.html). It starts from largely similar concepts and proceeds through the evolution of the nervous system to language to math to science. I suspect it may be even more AI-relevant than Powers.
Hi shminux. Sorry, just saw your comment. We don't seem to have a date set for November yet, but let me check with the others. Typically we meet on Saturdays, are you still around on the 22nd? Or we could try Sunday the 16th. Let me know.
The Planning Fallacy explanation makes a lot of sense.
I hope it's not really at 2AM.
While the situation admittedly is oversimplified, it does seem to have the advantage that anyone can replicate it exactly at a very moderate expense (a two-headed coin will also do, with a minimum amount of caution). In that respect it may actually be more relevant to the real world than any vaccine/autism study.
Indeed, every experiment here should yield very strong evidence, a confidence close to (though never exactly) 1, but what gets reported is not the actual figure, only whether it clears .95 (an arbitrary threshold proposed once by Fisher, who never intended it to play the role...
(1) is obvious, of course, in hindsight. However, changing your confidence level after the observation is generally advised against. But (2) seems to be confusing Type I and Type II error rates.
On another level, I suppose it can be said that of course they are all biased! But, by the actual two-tailed coin rather than researchers' prejudice against normal coins.
> Treating ">= 95%" as "= 95%" is a reasoning error
Hence my question in another thread: was that "exactly 95% confidence" or "at least 95% confidence"? However, when researchers say "at a 95% confidence level" they typically mean "p < 0.05", and reporting the actual p-values is often even explicitly discouraged (let's not digress into whether that is justified).
Yet the mistake I had in mind (as opposed to other, less relevant, merely "a" mistakes) involves Type I and Type II error ra...
Well, perhaps a bit too simple. Consider this. You set your confidence level at 95% and start throwing a coin. You observe 100 tails out of 100. You publish a report saying "the coin has tails on both sides at a 95% confidence level" because that's what you chose during design. Then 99 other researchers repeat your experiment with the same coin, arriving at the same 95%-confidence conclusion. But you would expect to see about 5 reports claiming otherwise! The paradox is resolved when somebody comes up with a trick using a mirror to observe both sides of the coin at once, finally concluding that the coin is two-tailed with a 100% confidence.
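For scale, under a simple fair-coin null each of those experiments carries astronomically more evidence than "passed the 95% threshold" conveys:

```python
# Probability of seeing 100 tails out of 100 tosses if the coin were actually fair:
p_fair = 0.5 ** 100
print(p_fair)            # ≈ 7.9e-31
# Each individual experiment's evidence is far beyond "95% confidence";
# reporting only "cleared the 95% threshold" throws that information away,
# which is why the naive "expect ~5 dissenting reports out of 100" reasoning fails.
```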
What was the mistake?
How does your choice of threshold (made beforehand) affect your actual data and the information about the actual phenomenon contained therein?
A suggestion posted to the Google Group:
Another idea might be to decide ahead of each meetup on a few topics for discussion to allow some time to prepare, research and think about things for some time before discussing with each other.
Also, different studies have different statistical power, so it may not be OK to simply add up their evidence with equal weights.
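For example, the standard fixed-effect way of combining them weights each study by its precision rather than counting studies equally (made-up numbers below):

```python
# Combine several studies' effect estimates, weighting by inverse variance
# (i.e., by statistical power) instead of giving every study an equal vote.
def pooled_estimate(estimates, std_errors):
    weights = [1.0 / se**2 for se in std_errors]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    se = total ** -0.5
    return mean, se

# Hypothetical numbers: one large precise study and two small noisy ones.
print(pooled_estimate([0.1, 0.9, 0.8], [0.05, 0.4, 0.5]))
# the large study dominates, unlike a simple unweighted average (0.6)
```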
Was that "exactly 95% confidence" or "at least 95% confidence"?
(I highly recommend that everyone join the Google Group so that we can all communicate in a single place by email)
Does anyone else feel like trying to get this meeting a little bit more structured?
For example, something as simple as brief but prepared self-introductions covering your interests (related or unrelated to LW) and anything else about yourself that you might consider worth a mention. We partially covered it last time but it was pretty chaotic.
Or maybe someone even wants to give a brief talk about something they find exciting. Back in the day Jon...
Oh yes, and last time somebody discovered that there's free parking on Main St across from campus (the stretch between Med Center and Hotel ZaZa).
Hopefully, this time Valhalla should be open for, um, follow-up discussions. http://valhalla.rice.edu/
It seems that in the rock-scissors-paper example the opponent is quite literally an adversarial superintelligence. They are more intelligent than you (at this game), and since they are playing against you, they are adversarial. The RCT example also has a lot of actors with different conflicts of interests, especially money- and career-wise, and some can come pretty close to adversarial.
Free parking is available in the small streets across Rice Boulevard from the campus (north of it). This is also closer.
Here are some nice arguments about different what-if/why-not scenarios, not fully rigorous but sometimes quite persuasive: http://www.scottaaronson.com/democritus/lec9.html
I'm not sure if we can say much about a classical universe "in practice" because in practice we do not live in a classical universe. I imagine you could have perfect information if you looked at some simple classical universe from the outside.
For classical universes with complete information you have Newtonian dynamics. For classical universes with incomplete information about the state you can still use Newtonian dynamics but represent the state of the system with a probability distribution. This ultimately leads to (classical) statistical mecha...
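A minimal toy example of that last step (my own, an ensemble of harmonic oscillators): every sample follows deterministic Newtonian dynamics, and the probability distribution is just our spread of ignorance riding along.

```python
# Newtonian dynamics applied to a probability distribution over initial states:
# evolve many samples of a harmonic oscillator drawn from an uncertain initial condition.
import numpy as np

rng = np.random.default_rng(1)
n, dt, steps = 10000, 0.01, 500
x = 1.0 + 0.1 * rng.standard_normal(n)   # uncertain initial position
v = 0.0 + 0.1 * rng.standard_normal(n)   # uncertain initial velocity

for _ in range(steps):                    # leapfrog integration of x'' = -x
    v -= 0.5 * dt * x
    x += dt * v
    v -= 0.5 * dt * x

# Each sample followed deterministic Newtonian dynamics; our *knowledge* is the spread.
print(f"mean x = {x.mean():.3f}, std x = {x.std():.3f}")
```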
Thanks! The list of assumptions seems longer than in the De Raedt et al. paper and you need to first postulate branching and unitarity (let's set aside how reasonable/justified this postulate is) in addition to rational reasoning. But it looks like you can get there eventually.
Luke, please correct me if I'm misunderstanding something.
The rule follows directly if you require that the wavefunction behaves like a "vector probability". Then you look for a measure that behaves like probability should (basically, nonnegative and adding up to 1). And you find that for this the wavefunction should be complex-valued and the probability should be its squared amplitude. You can also show that anything "larger" than complex numbers (e.g. quaternions) will not work.
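A compressed version of that chain of reasoning (roughly the argument in the Aaronson lecture linked upthread; proofs omitted):

```latex
% A state is a vector of amplitudes; "vector probability" means the probabilities
% come from some norm of that vector:
\[
\psi = (\psi_1, \dots, \psi_n), \qquad P_i = |\psi_i|^p, \qquad \sum_i |\psi_i|^p = 1 .
\]
% Require that *linear* evolution preserve this normalization for every allowed state.
% Essentially the only solutions are
%   p = 1 with stochastic matrices  (ordinary probability theory), and
%   p = 2 with unitary matrices     (quantum mechanics, P_i = |\psi_i|^2, i.e. the Born rule).
% Going "larger" than complex amplitudes (e.g. quaternions) runs into further trouble,
% as discussed in the linked lecture.
```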
But, as you said, the question is not how to derive the B...
Children now aren't necessarily mutually exclusive with children in the future. You're not creating disutility by starting now and then "upping your game" when technology is more accessible!