The fixed point problem is worse than you think. Take the Hungarian astrology example, with an initial easy set subject to both a length limitation (e.g. < 100k characters) and a simplicity limitation.
Now I propose a very simple improvement scheme: if the article ends in a whitespace character, then try to classify the shortened article with the last character removed.
This gives you an infinite sequence of better and better decision boundaries (each time, a couple of new cases are solved -- the ones that are of length 100k + $N$, end in at least $N$ whitespace characters, and ...
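A minimal sketch of that scheme in code, assuming a hypothetical `base_classifier` that returns a verdict on its easy set and `nothing` otherwise:

```julia
# Improvement scheme sketch: strip trailing whitespace one character at a time and retry.
# `base_classifier` is hypothetical; it returns a verdict on its easy set, `nothing` otherwise.
function improved_classifier(base_classifier, article::AbstractString)
    verdict = base_classifier(article)
    verdict !== nothing && return verdict          # already inside the easy set
    if !isempty(article) && isspace(article[end])
        # A trailing whitespace character cannot change the content:
        # classify the article with the last character removed.
        return improved_classifier(base_classifier, chop(article))
    end
    return nothing                                  # still undecided
end
```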
The Definition-Theorem-Proof style is just a way of compressing communication. In reality, heuristic / proof-outline comes first; then, you do some work to fill the technical gaps and match to the existing canon, in order to improve readability and conform to academic standards.
Imho, this is also the proper way of reading maths papers / books: Zoom in on the meat. Once you have understood the core argument, it is often unnecessary to read the definitions or theorems at all (Definition: Whatever is needed for the core argument to work. Theorem: Whatever the core a...
This paints a bleak picture for the possibility of aligning mindless AGI since behavioral methods of alignment are likely to result in divergence from human values and algorithmic methods are too complex for us to succeed at implementing.
To me it appears that the terms cancel out: assuming we are able to overcome the difficulties of more symbolic AI design, the prospect of aligning such an AI seems less hard.
In other words, the main risk is wasting effort on alignment strategies that turn out to be mismatched to the eventually implemented AI.
The negative prices are a failure of the market / regulation; they don't actually mean that you have free energy.
That being said, the question of the most economical opportunistic use of intermittent energy makes sense.
No. It boils down to the following fact: if you take the given estimates of the distributions of parameter values at face value, then:
(1) The expected number of observable alien civilizations is medium-large. (2) If you consider the distribution of the number of alien civilizations, you get a large probability of zero and a small probability of "very, very many aliens", which together integrate up to the medium-large expectation value.
Previous discussions computed (1) and falsely observed a conflict with astronomical observations, and totally failed to compute (2) from their own input data. This is unquestionably an embarrassing failure of the field.
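A toy Monte Carlo of the (1)-vs-(2) distinction; the single log-uniform parameter and its range are invented for illustration and merely stand in for the product of the estimated Drake-equation factors:

```julia
# One log-uniform "number of observable civilizations" parameter, spanning 1e-10 .. 1e5.
using Random
Random.seed!(0)

samples = 10 .^ (rand(1_000_000) .* 15 .- 10)

println("E[N]     = ", sum(samples) / length(samples))                 # (1) medium-large (~3e3)
println("P(N < 1) = ", count(x -> x < 1, samples) / length(samples))   # (2) ~2/3 chance of an empty sky
```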
What is logical induction's take on probabilistic algorithms? That should be the easiest test-case.
Say, before "PRIME is in P", we had perfectly fine probabilistic algorithms for checking primality. A good theory of mathematical logic with uncertainty should permit us to use such an algorithm, without random oracle, for things you place as "logical uncertainty". As far as I understood, the typical mathematician's take is to just ignore this foundational issue and do what's right (channeling Thurston: Mathematicians are in the business of producing human understanding, not formal proofs).
It’s excellent news! Your boss is a lot more likely to complain about some minor detail if you’re doing great on everything else, like actually getting the work done with your team.
Unfortunately this way of thinking has a huge, giant failure mode: it allows you to rationalize away criticism of points you consider irrelevant, but that are important to your interlocutor. Sometimes people / institutions consider it really important that you hand in your expense sheets correctly or turn up on time for work, and finishing your project in time with brillia...
Is there a way of getting "pure markdown" (no wysiwyg at all), including LaTeX? Alternatively, a hotkey-less version of the editor (give me buttons/menus for all functionality)?
I'm asking because my browser (Chromium) eats the hotkeys, and LaTeX (testing: $\Sigma$) appears not to be parsed from markdown. I would be happy with any syntax you choose -- for example \Sigma; alternatively, the GitHub classic of using backticks appears still unused here.
edit: huh, backticks are in use, and HTML tags get eaten.
Isn't all this massively dependent on how your utility $U$ scales with the total number $N$ of well-spent computations (e.g. one-bit computes)?
That is, I'm asking for a gut feeling here: What are your relative utilities for $10^{100}$, $10^{110}$, $10^{120}$, $10^{130}$ universes?
Say, $U(0)=0$, $U(10^{100})=1$ (gauge fixing); an instant pain-free end of the universe is zero utility, and a successful colonization of the entire universe with suboptimal black-hole farming near heat death is unit utility.
Now, per definitionem, the utility $U(N)$ of an $N$-computatio...
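For concreteness, two example scalings consistent with the gauge $U(0)=0$, $U(10^{100})=1$ (my own illustrative choices):

$$U_{\mathrm{lin}}(N)=\frac{N}{10^{100}}\;\Rightarrow\;U_{\mathrm{lin}}(10^{130})=10^{30},\qquad U_{\mathrm{log}}(N)=\frac{\log_{10}(1+N)}{100}\;\Rightarrow\;U_{\mathrm{log}}(10^{130})=1.3,$$

so whether the larger universe is $10^{30}$ times better or 30% better is entirely a statement about this scaling.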
What was initially counterintuitive is that even though the terms tend to zero, the series doesn't converge.
This becomes much less counterintuitive if you instead ask: how would you construct a sequence with a divergent series?
Obviously, take a divergent series, e.g. the harmonic series $\sum_n \frac{1}{n}$, and then split the $n$th term into $n$ copies of $\frac{1}{n^2}$.
FWIW, looking at an actual compiler, we see zero jumps (using a conditional move instead):
```
julia> function test(n)
           i=0
           while i<n
               i += 1
           end
           return i
       end
test (generic function with 1 method)

julia> @code_native test(10)
        .text
Filename: REPL[26]
        pushq   %rbp
        movq    %rsp, %rbp
Source line: 3
        xorl    %eax, %eax
        testq   %rdi, %rdi
        cmovnsq %rdi, %rax
Source line: 6
        popq    %rbp
        retq
        nop
```
edit: Sorry for the formatting. I don't understand how source-code markup is supposed to work now...
"what move should open with in reversi" would be considered as an admissible decision-theory problem by many people. Or in other words: Your argument that EU maximization is in NP only holds for utility functions that permit computation in P of expected utility given your actions. That's not quite true in the real world.
This, so much.
So, in the spirit of learning from others' mistakes (even better than learning from my own): I thought Ezra made his point very clear.
So, all of you people who missed Ezra's point (confounded data, outside view) on first reading:
How could Ezra have made clearer what he was arguing, short of adopting LW jargon? What can we learn from this debacle of a discussion?
Edit: tried to make my comment less inflammatory.
Ezra seemed to be arguing both at the social-shaming level (implying things like "you are doing something normatively wrong by giving Murray airtime") and at the epistemic level (saying "your science is probably factually wrong because of these biases"). The mixture of those levels muddles the argument.
In particular, it signaled to me that the epistemic-level argument was weak -- if Ezra had been able to get away with arguing exclusively on the epistemic level, he would have (because, in my view, such arguments are more convinc...
>I was imagining a sort of staged rocket, where you ejected the casing of the previous rockets as you slow, so that the mass of the rocket was always a small fraction of the mass of the fuel.
Of course, but your very last stage is still a rocket with a reactor. And if you cannot build a rocket with 30g of motor+reactor weight, then you cannot go to such small stages, and your final mass on arrival includes the smallest efficient rocket motor / reactor you can build, zero fuel, and a velocity that is below the escape velocity of your target solar system (once you...
If you have to use the rocket equation twice, then you effectively double the delta-v requirement and square the launch-mass / payload-mass factor.
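This is just the exponential in the Tsiolkovsky rocket equation: at fixed exhaust velocity $v_e$, doubling $\Delta v$ squares the mass ratio,

$$\frac{m_0}{m_1}=\exp\!\left(\frac{\Delta v}{v_e}\right),\qquad \exp\!\left(\frac{2\,\Delta v}{v_e}\right)=\left(\frac{m_0}{m_1}\right)^{2}.$$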
Using Stuart's numbers, this makes colonization more expensive by the following factors:
0.5 c: Antimatter 2.6 / fusion 660 / fission 1e6
0.8 c: Antimatter 7 / fusion 4.5e5 / fission 1e12
0.99 c: Antimatter 100 / fusion 4.3e12 / fission 1e29
If you disbelieve in 30g fusion reactors and set a minimum viable weight of 500t for an efficient propulsion system (plus negligible weight for replicators) then you get an add...
You're right, I should have made that clearer, thanks!
I would not fret too much about slight overheating of the payload; most of the launch mass is propulsion fuel anyway, and in the worst case the payload can rendezvous with the fuel in flight, after the fuel has cooled down.
I would be very afraid of the launch mass, including the solar sail / reflector, losing (1) reflectivity (you need a very good mirror that continues to be a good mirror when hot; imperfections will heat it) and (2) structural integrity.
I would guess that, even assuming technological maturity (can do anything that physics permits), you cannot kee...
This was a very fun article. Notably absent from the list, even though I would absolutely have expected it (since the focus was on evolutionary algorithms, though many observations also apply to gradient descent):
Driving genes. Biologically, a "driving gene" is one that cheats in (sexual) evolution by ensuring that it is present in >50% of offspring, usually by weirdly interacting with the machinery that does meiosis.
In artificial evolution that uses "combination", "mutation" and "selection", these would be regions of parameter space that are attracting under the "combination" dynamics, and that use this to beat selection pressure.
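A toy sketch of what such a "driving" region could look like in a selection+combination loop (single haploid locus; the fitness cost, transmission bias and population size are invented purely for illustration):

```julia
# A driving allele pays a fitness cost, but mixed parent pairs transmit it with
# probability 0.7 instead of 0.5 -- and that bias beats the selection pressure.
using Random
Random.seed!(1)

function driver_frequency(; generations = 200, popsize = 1000,
                            drive = 0.7, cost = 0.05, p0 = 0.05)
    pop = rand(popsize) .< p0                  # true = carries the driving allele
    for _ in 1:generations
        newpop = falses(popsize)
        for i in 1:popsize
            a = select(pop, cost)              # "selection": fitness-weighted parents
            b = select(pop, cost)
            # "combination": fair coin for identical parents, biased coin for mixed pairs
            newpop[i] = a == b ? a : (rand() < drive)
        end
        pop = newpop
    end
    return count(pop) / popsize                # final frequency of the driver
end

# Rejection sampling: carriers of the driver are rejected with probability `cost`.
function select(pop, cost)
    while true
        x = pop[rand(1:length(pop))]
        (!x || rand() > cost) && return x
    end
end

driver_frequency()   # ≈ 1.0: the driver sweeps to fixation despite its fitness cost
```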
If you assume that Dysoning and re-launch take 500 years, this barely changes the speed either, so you are very robust.
I'd be interested in more exploration of deceleration strategies. It seems obvious that braking against the interstellar medium (either dust or magnetic fields) is viable to some large degree, at the very least if you are willing to eat a 10k-year deceleration phase. I have taken a look at the two papers you linked in your bibliography, but would prefer a more systematic study. The important question is: do we know ways that are definitely not hard...
Computability does not express the same thing we mean by "explicit". The vague term "explicit" crystallizes an important concept, one that depends on the social and historical context I tried to elucidate. It is useful to give a name to this concept, but you cannot really prove theorems about it (there should be no technical definition of "explicit").
That being said, computability is of course important, but slightly too counter-intuitive in practice. Say you have two polynomial vector fields. Are solutions (to the diffe...
It depends on context. Is the exponential explicit? For the last 200 years, the answer is "hell yeah". Exponentials, logarithms and trigonometric functions (the complex exponential) appear very often in life, and people can be expected to have a working knowledge of how to manipulate them. Expressing a solution in terms of exponentials is like meeting an old friend.
120 years ago, knowing elliptic integrals, their theory and how to manipulate them was considered basic knowledge that every working mathematician or engineer was expected to have. Back then, these wer...
Regarding insolubility of the quintic, I made a top level post with essentially the same point, because it deserves to be common knowledge, in full generality.
I guess that this is due to the historical fact that candidates in the US are supposed to be district-local, not state-local, and districts are supposed to be as small as possible. I'm not an American, so I cannot say how strong this is as a constraint for modified electoral systems.
If you had a small party/faction with, say, 10% of the popular vote, reaching up to maybe 30% in their strongest districts, then I would definitely see a problem: such a party simply does not fit purely district-local representation (a one-to-one mapping between districts and re...
Re PLACE: Interesting proposal. Have you considered the following problem (I'd guess you have; a link would be appreciated):
Candidates are not exchangeable. Candidate A has done a very good job in the legislature. An opposing faction may decide to coordinate to support his local opposing candidate B, in order to keep person A out of parliament.
Or, in other words: Two candidates running in the same district cannot both become part of parliament. This opens a huge amount of gaming, in order to squash small parties / factions that do not have a deep ben...
One guess for cheap signaling would be to seed stellar atmospheres with stuff that should not belong there. Stellar spectra can be measured really well, and even very low concentrations of such contaminants are visible (they create a spectral line). If you own the galaxy, you can do this at sufficiently many stars to create a spectral line that should not belong. If we observed a galaxy with an "impossible" spectrum, we would not immediately know that it's aliens; but we would sure point everything we have at it. And spectral data is routinely collected.
I am not an astronomer, th...
I think communicating without essentially conquering the Hubble volume is still an interesting question. I would not rule out a future human ethical system that restricts expansion to some limited volume, but does not restrict this kind of omnidirectional communication. Aliens being alien, we should not rule out them having such a value system either.
That being said, your article was really nice. Sending multiplying probes everywhere, watching the solar system form, and waiting for humans to evolve in order to say "hi" is likely to be amazingly cheap.
Re SODA: The setup appears to actively encourage candidates to commit to a preference order. Naively, I would prefer a modification along the following lines; could you comment?
(1) Candidates may make promises about their preference order among other candidates, but this is not enforced (just like ordinary pre-election promises). (2) The elimination phase runs over several weeks. During this time, candidates may choose to drop out and redistribute their delegated votes. But mainly, the expected drop-outs will negotiate with the expected survivors, in order to get...
Regarding measurement of the pain:suffering ratio:
A possible approach would be to use self-reports (the thing that doctors always ask about, pain scale 1-10) vs. revealed preferences (how much painkiller was requested? What trade-offs for pain relief do patients choose?).
Obviously this kind of relation is flawed on several levels: the reported pain scale depends a lot on personal experience (very painful events permanently change the scale, a la "I am in so much pain that I cannot walk or concentrate, but compared to my worst experience... let's sa...
>But the greatest merit of Occamian prior is that it vaguely resembles the Lazy prior.
...
>With that in mind, I asked what prior would serve this purpose even better and arrived at Lazy prior. The idea of encoding these considerations in a prior may seem like an error of some kind, but the choice of a prior is subjective by definition, so it should be fine.
Encoding convenience × probability into some kind of pseudo-prior, such that the expected-utility maximizer is the maximum-likelihood model with respect to the pseudo-prior, does seem like a really us...
I have a feeling that you are mixing up probability and decision theory. Given some observations, there are two separate questions when considering possible explanations / models:
1. What probability to assign to each model?
2. Which model to use?
Now, our toy model of perfect rationality would use some prior, e.g. the bit-counting universal/Kolmogorov/Occam one, and Bayesian updating to answer (1), i.e. to compute the posterior distribution. Then, it would weight these models by the "convenience of working with them", which goes into our expected-utility maximization...
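In symbols (my notation, not the OP's): write $c(m)$ for the "convenience" weight of model $m$. Then picking the model to work with amounts to

$$m^{*}=\arg\max_{m}\;P(m\mid\text{data})\,c(m)\;=\;\arg\max_{m}\;P(\text{data}\mid m)\,\underbrace{P(m)\,c(m)}_{\text{pseudo-prior}},$$

i.e. a maximum-a-posteriori choice under a pseudo-prior $\tilde P(m)\propto P(m)\,c(m)$ -- which is fine for question 2, as long as the genuine posterior from question 1 is what you use for predictions.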
I think part of the assumption is that reflection can be bolted on trivially if the pattern matching is good enough. For example, consider guiding an SMT solver / automatic theorem prover by deep-learned heuristics, e.g. [https://arxiv.org/abs/1701.06972](https://arxiv.org/abs/1701.06972). We know how to express reflection in formal languages; we know how to train intuition for fuzzy stuff; we might learn how to train intuition for formal languages.
This is still borderline useless; but there is no reason, a priori, that such approaches are doomed to fail. Especially since labels for training data are trivial to get (check the proof for correctness) and machine-discovered theorems / proofs can be added to the corpus.
I strongly disagree that anthropics explains the unreasonable effectiveness of mathematics.
You can argue that a world where people develop a mind and mathematical culture like ours (with its notion of "modular simplicity") should be a world where mathematics is effective for everyday phenomena like throwing a spear.
This tells us nothing about what happens if we extrapolate to scales that are not relevant to everyday phenomena.
For example, physics appears to have very simple (to our mind) equations and principles, even at scales that were irreleva...
[Meta: Even low-effort engagement, like "known + keyword" or "you misunderstood everything; read <link>" or "go on talking / thinking" is highly appreciated. Stacks grow from the bottom to the top today, unlike x86 or threads on the internet]
------------
Iterative amplification schemes work by having each version $A_{k+1}$ trained by the previous iteration $A_k$; and, whenever version $A_{k+1}$ fails at finding a good answer (low confidence in the prediction), punting the question to $A_k$, until it reaches the human overseer at $A_0$, which is ...
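A minimal sketch of that punt-down-the-stack control flow; the type names, the random "confidence" placeholder and the threshold are invented for illustration (a real scheme would query trained models):

```julia
struct Overseer end                        # the human at the bottom of the stack (A_0)
struct Model                               # stand-in for trained iteration A_k
    level::Int
end

confidence(::Overseer, question) = 1.0     # the human always answers
confidence(m::Model, question) = rand()    # placeholder; a real system would query the model

answer(::Overseer, question) = "human answer to: $question"
answer(m::Model, question) = "A_$(m.level) answer to: $question"

function ask(stack, question; threshold = 0.8)
    for agent in reverse(stack)            # start at the newest iteration A_n
        if confidence(agent, question) >= threshold
            return answer(agent, question)
        end                                # low confidence: punt to the previous version
    end
end

stack = Any[Overseer(), Model(1), Model(2), Model(3)]
ask(stack, "what should I do?")
```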
(1) As Paul noted, the question of the exponent alpha is just the question of diminishing returns vs returns-to-scale.
Especially if you believe that the rate is a product of multiple terms (e.g. Paul's suggestion of one exponent for advances in computing hardware and another for algorithmic advances), then you get returns-to-scale-type dynamics (over certain regimes, i.e. until all the fruit are picked) with finite-time blow-up.
(2) Also, an imho crucial aspect is the separation of time-scales between human-driven research and computation do...
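To spell out the finite-time blow-up in (1): with returns to scale, i.e. capability $x$ growing as $\dot{x}=x^{\alpha}$ with $\alpha>1$, the solution is

$$x(t)=\Bigl(x_0^{\,1-\alpha}-(\alpha-1)\,t\Bigr)^{\frac{1}{1-\alpha}},$$

which diverges at the finite time $t^{*}=x_0^{\,1-\alpha}/(\alpha-1)$; with $\alpha\le 1$ (diminishing returns) you get at most exponential growth.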
Just commenting that the progress to thermonuclear weapons represented another discontinuous jump (1-3 orders of magnitude).
Also, whether von Neumann was right depends on the probability for the cold war ending peacefully. If we retrospectively conclude that we had a 90% chance of total thermonuclear war (and just got very lucky in real life) then he was definitely right. If we instead argue from the observed outcome (or historical studies conclude that the eventual outcome was not due to luck but rather due to the inescapable logic of MAD), then he was to...
Not sure. I encountered this once in my research, but the preprint is not out yet (alas, I'm pretty sure that this will still not be enough to reach commercial viability, so it's pretty niche and academic and not a very strong example).
Regarding "this is not common": Of course not for problems many people care about. Once you are in the almost-optimal class, there are no more giant-sized fruit to pick, so most problems will experience that large jumps never, once or twice over all of expected human history (sorting is even if you are a supe...
I imagine the "secret sauce" line of thinking as "we are solving certain problems in the wrong complexity class". Changing complexity class of an algorithm introduces a discontinuity; when near a take-off, then this discontinuity can get amplified into a fast take-off. The take-off can be especially fast if the compute hardware is already sufficient at the time of the break-through.
In other words: In order to expect a fast take-off, you only need to assume that the last crucial sub-problem for recursive self-improvement / explosion is d...
For strong historical precedents, I would look for algorithmic advances that improved the empirical average-case complexity class and at the same time gave a speed-up of e.g. 100x on problem instances that were typical prior to the algorithmic discovery (so Strassen matrix multiplication is out).
Do you have any examples of this phenomenon in mind? I'm not aware of any examples with significant economic impact. If this phenomenon were common, it would probably change my view a lot. If it happened ever it would at least make me more sympathetic to the fast takeoff view and would change my view a bit.
Thanks, and sorry for presumably messing up the formatting.
The assumption I'm talking about is that the state of the rest of the universe (or multiverse) does not affect the marginal utility of there also being someone having certain experiences at some location in the uni-/multi-verse.
Now, I am not a fan of treating probabilities / utilities separately; instead, consider your decision function.
Linearity means that your decisions are independent of observations of far parts of the universe. In other words, you have one system over which your agent optimizes expected utility; and now compare it to the situation wher...
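One way to make this precise (my notation): split the state into a "near" part your actions can affect and a "far" part you can only observe. If utility is additive, $U(x_{\text{near}},x_{\text{far}})=u(x_{\text{near}})+v(x_{\text{far}})$, then

$$\arg\max_{a}\,\mathbb{E}\bigl[U\mid a,\text{obs}_{\text{far}}\bigr]=\arg\max_{a}\,\Bigl(\mathbb{E}\bigl[u(x_{\text{near}})\mid a,\text{obs}_{\text{far}}\bigr]+\underbrace{\mathbb{E}\bigl[v(x_{\text{far}})\mid\text{obs}_{\text{far}}\bigr]}_{\text{independent of }a}\Bigr),$$

so far-away observations can only change your decision through what they tell you about the near part.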
Real-world anecdata on how one big company (medical equipment) got OK at security:
At some point they decided that security was more important now. Their in-house guy (dev -> dev management -> "congrats, you are now our chief security guy") got to hire more consultants for their projects, went to trainings and, crucially, went to cons (e.g. defcon). He was a pretty nice guy, and after some years he became fluent in hacker culture. In short, he became capable of judging consultants' work and hiring real security people. And he made some frien...
Yep. The counter-example would be Apple iOS.
I never expected it to become as secure as it did. And Apple security are clowns (institutionally; no offense intended for the good people working there), and UI tends to beat security in trade-offs.
Everything exposed to an attacker, and everything those subsystems interact with, and everything those parts interact with! You have to build all of it robustly!
seems false to me, if you have good isolation--which is what a project like Qubes tries to accomplish.
I agree with you here that Qubes is cool; but the fact that it is (performantly) possible was not obvious before it was cooked up. I certainly failed to come up with the idea of Qubes before hearing it (even after bluepill), and I am not ashamed of this: Qubes is brilliant (and IOMMU is cheating)....
edit timeout over, but the flags for requesting a chain-of-trust from your recursive resolver / cache should of course be (+CD +AD +RD).
*shrugs*
Yeah, ordinary paranoia requires that you have unbound listening on localhost for your DNS needs, because there should be a mode to ask my ISP-run recursive resolver to deliver the entire cert chain, and there isn't. This is a big fail of DNSSEC (my favorite would be -CD +AD +RD; this flag combination should still be free, and it would mean "please recurse; please use DNSSEC; please don't check key validity").
Yes, and DNSSEC over UDP breaks in some networks; then you need to run it via TCP (or do a big debugging session in order to figure out what broke).
A...
Hey,
fun that you now post about security. So, I used to work as an itsec consultant / researcher for some time; let me give my obligatory 2 cents.
On the level of platitudes: my personal view of the security mindset is to zero in on the failure modes and the trade-offs that are made. If you additionally have a good intuition about what's impossible, then you quickly discover either failure modes that were not known to the original designer -- or, also quite frequently, that the system is broken even before you look at it ("and our system achieves this kind of securit...
I doubt your optimism on the level of security that is realistically achievable. Don't get me wrong: The software industry has made huge progress (at large costs!) in terms of security. Where before, most stuff popped a shell if you looked at it funny, it is now a large effort for many targets.
Further progress will be made.
If we extrapolate this progress -- we will optimistically reach a point where impactful, reliable 0day is out of reach for most hobbyists and criminals, and is the domain of the natsec apparatus of great powers.
But I don't see how raising this waterline w...