gathaung
gathaung has not written any posts yet.

Second question:
Do you have a nice reference (speculative feasibility study) for non-rigid coil-guns for acceleration?
The obvious idea would be to have a swarm of satellites, each carrying a coil, spread out over the solar system. An outgoing probe would pass through a series of such coils, each adding some impulse to the probe (and making minor course corrections). This obviously needs a very finely tuned trajectory.
Advantage over a rigid coil-gun: the acceleration is spread out (unevenly) over a longer length (almost the entire solar system). This is good for heat dissipation (no coupling is perfect), and maintaining mega-scale rigid objects appears difficult. The satellites can take their time to regain position (solar sail / solar-powered ion thruster / gravity assist). It does not help with g-forces.
Disadvantage: you need a large number of satellites in order to get enough launch windows. But if we are talking about a Dyson swarm anyway, this does not matter.
How much do we gain compared to laser acceleration? The main question is probably: how does the required amount of heat dissipation compare?
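A rough back-of-envelope sketch of the heat-dissipation comparison (all numbers below, including the coupling efficiency, are made-up assumptions rather than values from any reference):

```python
# Waste heat per accelerating station: distributed coil swarm vs. one rigid coil-gun.
# Every parameter value here is an illustrative assumption, not from a reference.

C = 299_792_458.0        # speed of light [m/s]
AU = 1.495978707e11      # astronomical unit [m]

m_probe = 1000.0         # probe mass [kg] (assumed)
v_final = 0.05 * C       # target cruise speed (assumed)
eta = 0.95               # electrical-to-kinetic coupling efficiency (assumed)
track_len = 100 * AU     # length of the acceleration path through the swarm (assumed)
n_coils = 10_000         # number of coil satellites along that path (assumed)

kinetic_energy = 0.5 * m_probe * v_final**2        # non-relativistic is fine at 0.05c
waste_heat = kinetic_energy * (1.0 / eta - 1.0)    # losses from imperfect coupling
accel = v_final**2 / (2.0 * track_len)             # constant-acceleration approximation

print(f"kinetic energy          : {kinetic_energy:.2e} J")
print(f"total waste heat        : {waste_heat:.2e} J")
print(f"heat per coil satellite : {waste_heat / n_coils:.2e} J")
print(f"heat in one rigid gun   : {waste_heat:.2e} J (all in a single structure)")
print(f"mean acceleration       : {accel / 9.81:.2f} g over {track_len / AU:.0f} AU")
```

The total waste heat is the same either way; the swarm just divides it over many independent radiators that have plenty of time to cool down between launches, which is the whole point of spreading the acceleration out.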
Do you have a non-paywalled link, for posterity? I use sci-hub, but paywalls are a disgrace to science.
Also, do you have a nice reference for Bussard ramjet/ramscoop deceleration?
Obvious advantage: a priori you don't need nuclear fusion at all. You use a big EM field for cross-section and, ultimately, drag against the interstellar medium for both deceleration and energy generation. No deceleration is needed in the (thinner) intergalactic medium. The entropy gain should be large enough to run mighty heat pumps (for maintaining high-field superconductors and radiating excess heat). There is no need to carry fuel or manage fusion; at relativistic speeds your kinetic energy per kilogram is almost as large as that of antimatter. Antimatter sucks because production,... (read more)
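To put a rough number on the kinetic-energy-versus-antimatter claim, here is a quick sketch (the speeds are arbitrary; the comparison assumes 1 kg of antimatter annihilating with 1 kg of ambient ordinary matter, i.e. 2c² released per kilogram of antimatter carried):

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def specific_kinetic_energy(beta: float) -> float:
    """Relativistic kinetic energy per kilogram of ship mass: (gamma - 1) * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * C**2

# 1 kg of antimatter annihilated with 1 kg of ordinary matter releases 2*c^2 joules,
# i.e. ~1.8e17 J per kg of antimatter carried (the ordinary matter comes for free).
antimatter_per_kg = 2.0 * C**2

for beta in (0.1, 0.3, 0.5, 0.7, 0.87, 0.95):
    ke = specific_kinetic_energy(beta)
    print(f"beta = {beta:4.2f}: KE = {ke:.2e} J/kg ({ke / antimatter_per_kg:.2f} x antimatter)")
```

Around beta ≈ 0.9 the specific kinetic energy reaches the same order of magnitude as antimatter annihilation, which is the sense in which the claim above holds.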
What are your scenarios for interstellar warfare? The question obviously depends on whatever turns out to be the technically mature way of violent conflict resolution.
Let me propose a naive default guess:
Small, technically mature von Neumann probe meets primitive civilization or unsettled system: the probe wins.
Small, technically mature von Neumann probe meets a system with technically almost-mature inhabitants: the probe cannot even cause problems.
System with Dyson swarm + AI: unassailable on short timescales. Impossible to profitably invade. Maybe sling another star at it if you control the stellar neighbourhood.
In this scenario, interstellar warfare is a matter of land-grabbing: spam the entire sky with probes, moving as fast as possible, Dyson a fraction of stars to keep up the... (read more)
You should strive to maximize the utility of your pattern, averaged over both subjective probability (uncertainty) and the squared amplitude of the wave-function.
If you include the latter, then it all adds up to normalcy.
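A minimal formalization of this weighting (my own notation, not anything from the sequences): write $p_i$ for your subjective credence over coarse-grained world-states and $\alpha_{ij}$ for the amplitude of branch $j$ within world-state $i$; then the quantity being maximized is

$$\mathbb{E}[U] \;=\; \sum_i p_i \sum_j \lvert \alpha_{ij} \rvert^2 \, U(\text{branch}_{ij}),$$

i.e. ordinary expected utility with the Born weights playing the same role as the credences.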
If you select a state of the MWI-world according to the Born rule (i.e. using the squared amplitude of the wave-function), then this world-state will, with overwhelming probability, be compatible with causality, entropy increase over time, and a mostly classical history, involving natural selection yielding patterns that are good at maximizing their squared-amplitude-weighted spread, i.e. DNA and brains that care about squared amplitude (even if they don't know it).
Of course this is a non-answer to your question. Also, we have not yet finished the necessary math to prove that this non-answer is internally consistent (we=mankind), but I think this is (a) plausible, (b) the gist of what EY wrote on the topic, and (c) definitely not an original insight by EY / the sequences.
It was not my intention to make fun of Viliam; I apologize if my comment gave this impression.
I did want to make fun of the institution of Mensa, and I stand by the view that it deserves some good-natured ridicule.
I agree with your charitable interpretation about what an IQ of 176 might actually mean; thanks for stating this in such a clear form.
In Section 3, you write:
State value models require resources to produce high-value states. If happiness is the goal, using the resources to produce the maximum number of maximally happy minds (with a tradeoff between number and state depending on how utilities aggregate) would maximize value. If the goal is knowledge, the resources would be spent on processing generating knowledge and storage, and so on. For these cases the total amount of produced value increases monotonically with the amount of resources, possibly superlinearly.
I would think that superlinear scaling of utility with resources is incompatible with the proposed resolution of the Fermi paradox. Why?
Superlinear scaling of utility means (ignoring detailed numbers) that e.g. a... (read 800 more words →)
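For concreteness, the sense of superlinearity at work here (my own toy formalization, not wording from the paper): if utility scales as a power law $U(R) = R^\alpha$ with $\alpha > 1$, then

$$(R_1 + R_2)^\alpha \;>\; R_1^\alpha + R_2^\alpha \quad \text{for } R_1, R_2 > 0,$$

i.e. pooling resources in one place always yields more value than splitting them between two.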
Congrats! This means that you are a Mensa-certified very one-in-a-thousand-billion-special snowflake! If you believe in the doomsday argument then this ensures either the continued survival of bio-humans for another thousand years or widespread colonization of the solar system!
On the other hand, this puts quite the upper limit on the (institutional) numeracy of Mensa... wild guessing suggests that at least one in 10^3 people has sufficient numeracy to be incapable of certifying an IQ of 176 with a straight face, which would give us an upper bound on the NQ (numeracy quotient) of Mensa at 135.
(sorry for the snark; it is not directed at you but at the clowns at Mensa, and I... (read more)
I think a nicer analogy is spectral gaps. Obviously, no reasonable finite model will be both correct and useful, outside of maybe particle physics; so you need to choose some cut-off for your model's complexity. The cheapest example is trying to learn a linear model, e.g. PCA/SVD/LSA (which are all essentially the same).
A good model is one that hits a nice spectral gap: adding a couple of extra epicycles gives only very moderate extra accuracy. If there are multiple nice spectral gaps, then you should keep in mind a hierarchy of successively more complex and accurate models. If there are no good spectral gaps, then there is no real preferred model (of... (read more)
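A toy illustration of the "hit the spectral gap" heuristic in the PCA/SVD setting (the data, sizes, and planted rank are all made up): cut the model off right before the largest relative drop in the singular values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a planted rank-3 signal plus noise, 200 samples x 50 features.
signal = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50)) * 5.0
X = signal + rng.normal(size=(200, 50))

# Singular values of the centered data matrix -- this is the PCA/SVD/LSA spectrum.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)

# Spectral-gap heuristic: keep the components before the largest ratio s[k-1] / s[k].
ratios = s[:-1] / s[1:]
rank = int(np.argmax(ratios)) + 1
print("leading singular values:", np.round(s[:8], 1))
print("chosen rank            :", rank)   # should recover the planted rank of 3
```

With a clear gap the chosen rank is stable; without one, the heuristic degrades into "there is no single preferred model", which is the point above.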
AFAIK (and as Wikipedia tells us), this is not how IQ works. For measuring intelligence, we get an "ordinal scale", i.e. a ranking of test subjects. An honest report would be "you are in the top such-and-so percent". For example, testing someone as "one-in-a-billion performant" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like.
Because IQ is designed by people who would try to parse HTML with regex (I cannot think of a worse insult here), it is normalized to a normal distribution. This means that one... (read more)
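For reference, the arithmetic behind that normalization (assuming the usual mean-100, SD-15 convention; some tests use SD 16 or 24):

```python
from scipy.stats import norm

MEAN, SD = 100, 15  # the common normalization (other tests use SD 16 or 24)

def iq_to_rarity(iq: float) -> float:
    """How many people you would have to test to expect one score this high."""
    return 1.0 / norm.sf((iq - MEAN) / SD)

def rarity_to_iq(one_in_n: float) -> float:
    """IQ corresponding to a one-in-N upper tail of the normal distribution."""
    return MEAN + SD * norm.isf(1.0 / one_in_n)

print(f"IQ 176       -> one in {iq_to_rarity(176):,.0f}")   # roughly one in five million
print(f"one in 10^12 -> IQ {rarity_to_iq(1e12):.0f}")        # roughly IQ 206
print(f"one in 10^3  -> IQ {rarity_to_iq(1e3):.0f}")         # roughly IQ 146
```

None of these tail probabilities mean much empirically, for exactly the reason stated above: far too few tests have ever been administered to calibrate them.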
Nice. To make your proposed explanation more precise:
Take a random vector on the n-dimensional unit sphere. Project it to the nearest vector with coordinates ±1/sqrt(n); what is the expected l2-distance / angle? How does it scale with n?
If this value decreases in n, then your explanation is essentially correct, or did you want to propose something else?
Start by taking a random vector x where each coordinate is a unit Gaussian (normalize later). The projection px just takes the sign of each coordinate, splitting them into positive coordinates (mapped to +1) and negative coordinates (mapped to -1).
We are interested in E[⟨x, px⟩ / (|x| sqrt(n))].
If the dimension is large enough, then we won't really need to normalize; it is enough to start with Gaussians of standard deviation 1/sqrt(n), as we will almost... (read more)
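A quick numerical check of the quantity above (my own sketch, not the original commenter's code): with px = sign(x), the expected cosine ⟨x, px⟩ / (|x|·sqrt(n)) should concentrate around sqrt(2/π) ≈ 0.80 for large n, i.e. an angle of roughly 37 degrees.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_cosine(n: int, trials: int = 2000) -> float:
    """Monte Carlo estimate of E[<x, px> / (|x| * sqrt(n))] for x with i.i.d.
    unit-Gaussian coordinates and px = sign(x), the nearest (+1/-1) vector."""
    x = rng.normal(size=(trials, n))
    px = np.sign(x)
    cos = np.einsum("ij,ij->i", x, px) / (np.linalg.norm(x, axis=1) * np.sqrt(n))
    return float(cos.mean())

for n in (10, 100, 1000, 10_000):
    print(f"n = {n:6d}: E[cos] ~ {expected_cosine(n):.4f}")

print("sqrt(2/pi) =", np.sqrt(2 / np.pi))  # ~0.7979, the large-n limit by the law of large numbers
```

So, if I haven't slipped up, the expected angle to the nearest hypercube corner converges to a constant (about 37°) rather than shrinking with n.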