In economics, "we can model utility as logarithmic in wealth", even after adding human capital to wealth, feels like a silly asymptotic approximation that obviously breaks down in the other direction as wealth goes to zero and modeled utility to negative infinity.
In cosmology, though, the difference between "humanity only gets a millionth of its light cone" and "humanity goes extinct" actually does feel bigger than the difference between "humanity only gets a millionth of its light cone" and "humanity gets a fifth of its light cone"; not infinitely bigger,...
It's hard to apply general strategic reasoning to anything in a single forward pass, isn't it? If your LLM has to come up with an answer that begins with the next token, you'd better hope the next token is right. IIRC this is the popular explanation for why LLM output seems to be so much better when you just add something like "Let's think step by step" to the prompt.
Is anyone trying to incorporate this effect into LLM training yet? Add an "I'm thinking" and an "I'm done thinking" to the output token set, and only have the main "predict t...
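Here's a minimal sketch of what that training change might look like, assuming hypothetical THINK/DONE delimiter token IDs and an otherwise-standard next-token cross-entropy setup; the only modification is zeroing the loss over the model's private scratchpad span:

```python
import torch
import torch.nn.functional as F

THINK, DONE = 50257, 50258  # hypothetical IDs for "I'm thinking" / "I'm done thinking"

def masked_lm_loss(logits, tokens):
    """Next-token cross-entropy, except that tokens inside a THINK...DONE
    span (the model's scratchpad) contribute no loss, leaving the model
    free to fill that span however it finds useful."""
    targets = tokens[:, 1:]   # token t+1 is predicted from the prefix up to t
    logits = logits[:, :-1]
    # A position counts as "thinking" if more THINKs than DONEs precede it.
    thinking = (torch.cumsum((tokens == THINK).int(), dim=1)
                > torch.cumsum((tokens == DONE).int(), dim=1))[:, 1:]
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1), reduction='none')
    mask = (~thinking).reshape(-1).float()
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```

Masking alone only *permits* a scratchpad, of course; getting the model to use it well presumably requires training against final-answer quality, which is a harder problem.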
I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".
The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and "soo...
That was astonishingly easy to get working, and now on my laptop's 3060 I can write a new prompt and generate another 10-odd samples every few minutes. Of course, I do mean 10 odd samples: most of the human images it's giving me have six fingers on one hand and/or a vaguely fetal-alcohol-syndrome vibe about the face, and none of them could be mistaken for a photo or even art by a competent artist yet. But they're already better than any art I could make, and I've barely begun to experiment with "prompt engineering"; maybe I should have done that ...
we still need to address ocean acidification
And changes in precipitation patterns (I've seen evidence that reducing solar incidence is going to reduce ocean evaporation, independent of temperature).
There's also the "double catastrophe" problem to worry about. Even if the median expected outcome of a geoengineering process is decent, the downside variance becomes much worse.
I still suspect MCB is our least bad near- to medium-term option, and even in the long term the possibility of targeted geoengineering to improve local climates is a...
Alex has not skipped a grade or been put in some secret fast-track program for kids who went to preschool, because no such program exists.
Even more confounding: my kids have been skipping kindergarten in part because they didn't go to preschool. My wife works from home, and has spent a lot of time teaching them things and double-checking things they teach themselves.
Preschools don't do tracking any more than grade schools do, so even if in theory they might provide better instruction than the average overworked parent(s), the output will be 100% totally...
Gah, of course you're correct. I can't imagine how I got so confused but thank you for the correction.
You don't need any correlation between X and Y to have E[XY] = E[X]E[Y]. Suppose both variables are 1 with probability .5 and 2 with probability .5; then their mean is 1.5, and the mean of their products is 2.25 = 1.5 × 1.5.
Not quite. Expected value is linear but doesn't commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you'd *only* have the mean of the result, whereas what would really be a "paradox" is if the probability that we're alone turned out to be tiny.
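A quick Monte Carlo sketch of that distinction, with made-up lognormal stand-ins for the Drake factors (my numbers, not anyone's estimates): the mean of the product can be astronomical while the probability of being alone stays large.

```python
import numpy as np

rng = np.random.default_rng(0)
# Seven independent stand-in factors, each lognormal with wide uncertainty.
factors = rng.lognormal(mean=0.0, sigma=2.0, size=(7, 1_000_000))
N = factors.prod(axis=0)

print(N.mean())       # mean of the product: ~1e6, driven by rare huge tails
print(np.median(N))   # median: ~1
print((N < 1).mean()) # P(N < 1), roughly "we're alone": ~0.5 despite the mean
```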
the thing you know perfectly well people mean when they say "chemicals"
I honestly don't understand what that thing is, actually.
To use an example from a Facebook post I saw this week:
Is P-Menthane-3,8-diol (PMD) a chemical? What about oil from the lemon eucalyptus tree? Oil of Lemon Eucalyptus is typically refined until it's 70% PMD instead of 2%; does that turn it into a chemical? What if we were to refine it all the way to 100%? What if, now that we've got 100% PMD, we just start using PMD synthesized at a chemical plant inst...
Perceived chemical-ness is a very rough heuristic for the degree of optimization a food has undergone for being sold in a modern economy (see http://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ for why this might be something you want to avoid). Very, very rough--you could no doubt list examples of 'non-chemicals' that are more optimized than 'chemicals' all day, as well as optimizations that are almost certainly not harmful. And yet I'd wager the correlation is there.
I honestly don't understand what that thing is, actually.
This was also my first response when reading the article, but on second glance I don't think that is entirely fair. The argument I want to convey with "Everything is chemicals!" is something along the lines of "The concept that you use the word chemicals for is ill-defined and possibly incoherent and I suspect that the negative connotations you associate with it are largely undeserved.", but that is not what I'm actually communicating.
Suppose I successfully convinc...
So, does 1+ω make sense? It does, for the ordinals and hyperreals only.
It makes sense for cardinals (the size of "a set of some infinite cardinality" unioned with "a set of cardinality 1" is "a set with the same infinite cardinality as the first set") and in real analysis (if lim f(x) = infinity, then lim f(x)+1 = infinity) too.
What about -1+ω? That only makes sense for the hyperreals.
And for cardinals (the size of the set difference between "a set of some infinite cardinality" and "a subset of one element" is the same infinite cardinality) and in real analysis (if lim f(x) = infinity, then lim -1+f(x) = infinity) too.
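Laying the four systems side by side (my summary, for concreteness):

```latex
\begin{align*}
\text{Ordinals:}      && 1+\omega &= \omega \ne \omega+1; \quad \omega-1 \text{ is undefined} \\
\text{Cardinals:}     && 1+\aleph_0 &= \aleph_0; \quad |A \setminus \{a\}| = |A| \text{ for infinite } A \\
\text{Hyperreals:}    && \omega-1 &< \omega < \omega+1, \text{ all distinct} \\
\text{Real analysis:} && \lim f = \infty &\implies \lim (f \pm 1) = \infty
\end{align*}
```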
I believe the answer to your second question is probably technically "yes"; if there's any way in which AZ mispredicts relative to a human, then there's some Ensemble Learning classifier that weights AZ move choices with human move choices and performs better than AZ alone. And because Go has so many possible states and moves at each state, humans would have to be much, much, much worse at play overall for us to conclude that humans were worse along every dimension.
However, I'd bet the answer is practically "no". If Alp...
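For concreteness, the simplest such ensemble is just a convex mixture of the two move distributions (a sketch with hypothetical policy arrays; the claim is only that if the best validation alpha is nonzero, the human policy carries signal AZ lacks):

```python
import numpy as np

def ensemble_policy(az_probs: np.ndarray, human_probs: np.ndarray,
                    alpha: float) -> np.ndarray:
    """Mix AlphaZero's and a human's move distributions. alpha=0 is pure AZ;
    if tuning alpha on held-out games prefers alpha > 0, the human
    distribution contains information AZ's doesn't."""
    mixed = (1 - alpha) * az_probs + alpha * human_probs
    return mixed / mixed.sum()  # renormalize for safety
```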
Oh, well in that case the point isn't subtly lacking; it's just easily disproven. Given any function from I^N to R, I can take the tensor product with cos(k pi x) and get a new function from I^{N+1} to R which has k times as many non-globally-optimal local optima. Pick a decent k and iterate, and you can see the number growing exponentially with higher dimension, not approaching 0.
Perhaps there's something special about the functions we try to optimize in deep learning, a property that rules out such cases? That could be. But you'...
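A quick numeric check of that tensor-product construction with k = 5 (my code; it counts strict grid local optima, and each 1-D optimum spawns a row of them in 2-D):

```python
import numpy as np

def count_local_optima(values: np.ndarray) -> int:
    """Count interior grid points that are strict local maxima or minima
    versus their axis-aligned neighbors."""
    count = 0
    it = np.nditer(values, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        if any(i == 0 or i == s - 1 for i, s in zip(idx, values.shape)):
            continue  # ignore the boundary for simplicity
        neighbors = []
        for d in range(values.ndim):
            for off in (-1, 1):
                j = list(idx); j[d] += off
                neighbors.append(values[tuple(j)])
        v = values[idx]
        count += all(v > n for n in neighbors) or all(v < n for n in neighbors)
    return count

x = np.linspace(0, 1, 201)
f1 = np.cos(5 * np.pi * x)                 # 4 interior optima in 1-D
f2 = np.outer(f1, np.cos(5 * np.pi * x))   # tensor product with cos(5*pi*y)
print(count_local_optima(f1), count_local_optima(f2))  # 4 vs 16
```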
a Leviathan could try to transcend/straddle these fruits/niches and force them upward into a more Pareto optimal condition, maybe even into the non-Nash E. states if we're extra lucky.
Remember that old Yudkowsky post about Occam's Razor, wherein he points out how "a witch did it" sounds super-simple, but the word "witch" hides a ton of hidden complexity? I'm pretty sure you're doing the same thing here with the word "could". Instead of trying to picture what an imaginary all-powerful leader could do, im...
In the BFR announcement Musk promised tickets for intercontinental rocket travel at the price of a current economy ticket.
"Full-fare" economy, which is much more expensive than even the "typical" international economy seat tickets you're thinking of, but yes, and even outsiders don't think it's impossible. It is very sensitive to a lot of assumptions - third party spreadsheets I've seen say low thousands of dollars per ticket is possible, but it wouldn't take many assumptions to fall short before prices j...
Failing to follow good strategic advice isn't even the worst failure mode here; unless you're lucky you may not be given any strategic advice at all in response to a tactical question. If nobody notices that you're committing the XY Problem, then you may be given good advice for the tactical problem you asked about, follow it, and end up worse off than you were before with respect to the strategic problem you should have been asking about instead.
This argument doesn't seem to take into account selection bias.
We don't get into a local optimum because we picked a random point and wow, it's a local optimum, what are the odds!?
We get into a local optimum because we used an algorithm that specifically *finds* local optima. If they're still there in higher dimensions then we're still liable to fall into them rather than into the global optimum.
Is there some more general limit to power begetting power that would also affect AGI?
The only one which immediately comes to mind is inflexibility. Often companies shrink or fail entirely because they're suddenly subject to competition armed with a new idea. Why do the new ideas end up implemented by smaller competitors? The positive feedback of "larger companies have more people who can think of new ideas" is dominated by the negative feedbacks of "even the largest company is tiny compared to its complement" and "companies
You should provide some more explicit license, if you don't want to risk headaches for others later. "yes [It's generally intended to be Open Source]" may be enough reassurance to copy the code once, but "yes, you can have it under the new BSD (or LGPL2.1+, or whatever) license" would be useful to have in writing in the repository in case others want to create derived works down the road.
Thanks very much for creating this!
The simple illustration is geometry; defending a territory requires covering 360 degrees of azimuth times 90 degrees of elevation, whereas the attacker gets to choose their vector.
But attacking a territory requires long supply lines, whereas defenders are on their home turf.
But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack.
But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into th...
$2 debt squared does make sense, though: it is $4 and no debt.
No, it is $$4.
If that's what you meant to write, and it's also obvious to you that you could have written 40000¢¢ instead and still been completely accurate, then I'd love to know if you have any ideas of how this computation could map to anything in the real world. I would have thought that "kilogram meters squared per second cubed" was utter nonsense if anyone had just tried to show me the arithmetic without explaining what it really meant.
If that's not what you meant to write, o...
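In case it helps, the unit arithmetic spelled out (my rendering of the point above):

```latex
(\$\,2)^2 = \$^2\,4, \qquad \$1 = 100\,\text{¢} \;\Rightarrow\; \$^2 = 10^4\,\text{¢}^2,
\qquad \$^2\,4 = 40000\,\text{¢}^2
```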
Facebook has privacy settings, such that anyone who wants to limit their posts' direct visibility can.
Whether you should take someone else's settings as explicit consent should probably vary from person to person, but I think the "if he didn't want it to be widely seen he wouldn't have set it up to be widely seen" heuristic is probably accurate when applied to EY, even if it's not applicable to every Joe Grandpa.
Even in the Joe Grandpa case, it doesn't seem like merely avoiding citing and moving on is a good solution. If you truly fear that some...
The form of the pathologies makes a difference, no?
IIRC the worst pathology with IRV in a 3 way race is basically: you can now safely vote for a third party who can't possibly win, but if they show real support then it's time to put one of the two Schelling party candidates in front of them. So it's not worse than plurality but it's not much of an improvement. Plus, with more candidates IRV becomes more and more potentially insane.
With Schulze or Ranked Pairs the pathology is DH3: If a third party can win, you can often help them win by voting a "da...
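Here's a toy version of the center-squeeze dynamic behind that IRV pathology (my construction): the Condorcet winner C beats both L and R head-to-head, but has the fewest first-choice votes and is eliminated first.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly drop the candidate with the fewest
    first-place votes among those remaining, until someone has a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        firsts = Counter(next(c for c in b if c in remaining) for b in ballots)
        if firsts.most_common(1)[0][1] * 2 > len(ballots):
            return firsts.most_common(1)[0][0]
        remaining.discard(min(firsts, key=firsts.get))

# 35 voters L>C>R, 40 voters R>C>L, 25 voters C>L>R.
ballots = ([('L', 'C', 'R')] * 35 + [('R', 'C', 'L')] * 40
           + [('C', 'L', 'R')] * 25)
print(irv_winner(ballots))  # 'L' -- not the Condorcet winner 'C'
```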
There's a high-stakes variational calculus problem here: for what seasonal temperature profile do we get the best long-term minimum for the sum of "deaths due to extreme cold" and "deaths due to tropical diseases whose vector insects are stymied by extreme cold"?
The magnitude of the variation isn't nearly the same in the O2 vs CO2 cases. "16% O2 reduction is lost in the noise" is devastating evidence against the theory "0.2% O2 reduction has significant cognitive effects", but "16% CO2 reduction is lost in the noise" is weaker evidence against the theory "66% and 300% CO2 increases have significant cognitive effects".
I'm not arguing with you about implausible effect sizes, though. We should especially see significant seasonal effects in every climate where people typically seal up buildings against the cold or the heat for months at a time.
You don't know the effect because the existing experiments do not vary or hold constant oxygen levels. All you see is the net average effect, without any sort of partitioning among causes.
Existing experiments do vary oxygen levels systematically, albeit usually unintentionally, by geography. Going up 100 meters from sea level gives you a 1% drop in oxygen pressure and density. If that were enough for a detectable effect on IQ, then even the 16% lower oxygen levels around Denver should leave Coloradans obviously handicapped. IIRC altitude sickness does show a strong effect on mental performance, but only at air pressures significantly lower still.
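Those two figures are consistent with the isothermal barometric formula, scale height about 8.4 km (my arithmetic, using Denver's ~1600 m elevation):

```latex
\frac{p(h)}{p_0} = e^{-h/H}, \; H \approx 8400\,\text{m}:
\qquad \frac{p(100\,\text{m})}{p_0} \approx 0.99,
\qquad \frac{p(1600\,\text{m})}{p_0} \approx e^{-0.19} \approx 0.83
```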
Pakistan, for example, is so dysfunctional and clannish that iodization and polio programs have had serious trouble making any headway.
To be fair, that's not entirely Pakistanis' fault. Is paranoia about Communist fluoridation plots more or less dysfunctional than paranoia about CIA vaccination plots? Does it make a difference that only the latter has a grain of truth to it?
Less.
Fluoridation of drinking water has never been shown to be safe or effective in randomized trials and you could never get approval from the FDA today to use it. The claimed benefits are pretty small in both health and monetary terms and would be wiped out by even a fraction of an IQ point loss; the expected benefit is quite small and so conspiracy theorists incorrectly killing fluoridation would not cause much regret.
Polio vaccines on the other hand have been shown to be safe & effective, and even if the CIA were using the polio program to kill doz...
"I, for one, like Roman numerals." (can't find the original source)
This looks like a special case of a failure of intentionality. If a child knows where the marble is, they've managed first-order intentionality, but if they don't realize that Sally doesn't know where the marble is, they've failed at second order.
The orders go higher, though, and it's not obvious how much higher humans can naturally go. If
...Bob thinks about "What does Alice think about Bob?" and on rare occasions "What does Alice think Bob thinks about Alice?" but will not organically reason "What does Alice think Bob thinks abo
I'd agree that most of the best scientific ideas have been relatively simple... but that's at least partly selection bias.
Compare two possible ideas:
"People with tuberculosis should be given penicillum extract"
"People with tuberculosis should be given (2S,5R,6R)-3,3-dimethyl-7-oxo-6-(2-phenylacetamido)-4-thia-1-azabicyclo[3.2.0]heptane-2-carboxylic acid"
The first idea is no better than the second. But we'd have taken forever to come up with the second, complex idea by just sifting through all the equally-chemically-complex alternatives...
(the following isn't off-topic, I promise:)
Attention, people who have a lot of free time and want to found the next reddit:
When a site user upvotes and downvotes things, you use that data to categorize that user's preferences (you'll be doing a very sparse SVD sort of operation under the hood). Their subsequent votes can be decomposed into expressions of the most common preference vectors, and their browsing can then be sorted by decomposed-votes-with-personalized-weightings.
This will make you a lot of friends (people who want to read ramblings about phil...
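A minimal sketch of the decomposition step, with a toy dense vote matrix standing in for what would really be a huge sparse one:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Toy vote matrix: rows = users, cols = posts, entries in {-1, 0, +1}
# (downvote, no vote, upvote).
votes = csr_matrix(np.array([[ 1,  1,  0, -1],
                             [ 1,  0,  1, -1],
                             [-1,  0, -1,  1],
                             [ 0, -1,  1,  1]], dtype=float))

k = 2  # number of latent "preference vectors" to extract
user_factors, sigma, post_factors_t = svds(votes, k=k)

def predicted_scores(user_index: int) -> np.ndarray:
    """A user's predicted affinity for every post: project their vote
    history onto the latent factors and back out."""
    return user_factors[user_index] @ np.diag(sigma) @ post_factors_t

print(np.argsort(-predicted_scores(0)))  # post indices, best first
```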
I can imagine it. You just have to embed it in a non-Euclidean geometry. A great circle can be constructed from 4 straight lines, and thus is a square, and it still has every point at a fixed distance from a common center (okay, 2 common centers), and thus is a circle.
There exists an irrational number which is 100 minus delta where delta is infinitesimally small.
Just as an aside, no there isn't. Infinitesimal non-zero numbers can be defined, but they're "hyperreals", not irrationals.
I don't think this quite fits the Prisoner's Dilemma mold, since certain knowledge that the other player will defect makes it your best move to cooperate; in a one-shot Prisoner's Dilemma your payoff is improved by defecting whether the other player defects or not.
The standard 2x2 game type that includes this problem is Chicken.
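With illustrative payoffs (mine; row player's payoff listed first), the difference sits in the mutual-defection cell: in the Prisoner's Dilemma, D dominates, while in Chicken the best reply to a known defector is C:

```latex
\text{PD: }
\begin{array}{c|cc}
   & C   & D   \\ \hline
C  & 3,3 & 0,5 \\
D  & 5,0 & 1,1
\end{array}
\qquad
\text{Chicken: }
\begin{array}{c|cc}
   & C   & D   \\ \hline
C  & 3,3 & 2,5 \\
D  & 5,2 & 0,0
\end{array}
```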
The Deceptive Turn Thesis seems almost unavoidable if you start from the assumptions "the AI doesn't place an inhumanly high value on honesty" and "the AI is tested on inputs vaguely resembling the real world". That latter assumption is probably unavoidable, unless it turns out that human values can be so generalized as to be comprehensible in inhuman settings. If we're stuck testing an AI in a sandbox that resembles reality then it can probably infer enough about reality to know when it would benefit by dissembling.
My trouble with the trolley problem is that it is generally stated with a lack of sufficient context to understand the long-term implications. We're saying there are five people on this track and one person on this other track, with no explanation of why? Unless the answer really is "quantum fluctuations", utilitarianism demands considering the long-term implications of that explanation. My utility function isn't "save as many lives as possible during the next five minutes", it's (still oversimplifying) "save as many lives as po...
Patrilineal ancestor, not just ancestor. When talking about someone who lived 40 generations ago, there's a huge difference.
Were any of Silver's previous predictions generated by making a list of possibilities, assuming each was a coin flip, multiplying 2^N, and rounding? I get the impression that he's not exactly employing his full statistical toolkit here.
The obstacle to making a river is usually getting the water uphill to begin with. Regular cloud seeding of moist air currents that would otherwise head out to sea? Modifying land albedo to change airflow patterns? That's all dubious, but I can't think of any other ideas for starting a new river with new water.
If you've got a situation where the water you want to flow is already "uphill", then the technology is simply digging, and if you wanted to do enough of it you could make whole new seas.
Careful, as new seas can sometimes backfire horribly or change rapidly depending on local hydrologic, agricultural, or geological conditions - https://en.wikipedia.org/wiki/Salton_Sea
Hmm... I believe you're correct. It would be hard to revise that, too, without making the "Are you a cop? It's entrapment if you lie!" urban legend into truth. It does feel like "posing as a medical worker" should be considered a crime above and beyond "posing as a civilian".
That's one of the most amusing phrases on Wikipedia: "specific contexts such as decision making under risk". In general you don't have to make decisions and/or you can predict the future perfectly, I suppose.
"The feigning of civilian, non-combatant status" is already a subcategory of perfidy, prohibited by the Geneva Conventions. Perfidy is probably the least-prosecuted war crime there is, though.
Where did "pacifists" and the scare quotes around it come from?
Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", problems that we have to solve with "general" intelligence feel "hard"), the Wason Selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.
Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capa...
Regardless of the mechanism for misleading the oracle, its predictions for the future ought to become less accurate in proportion to how useful they have been in the past.
"What will the world look like when our source of super-accurate predictions suddenly disappears" is not usually the question we'd really want to ask. Suppose people normally make business decisions informed by oracle predictions: how would the stock market react to the announcement that companies and traders everywhere had been metaphorically lobotomized?
We might not even need...
I don't know if it's the mainstream of transhumanist thought but it's certainly a significant thread.
Information hazard warning: if your state of mind is again closer to "panic attack" and "grief" than to "calmer", or if it's not but you want to be very careful to keep it that way, then you don't want to click this link.
Isn't using a laptop as a metaphor exactly an example
The sentence could have stopped there. If someone makes a claim like "∀ x, p(x)", it is entirely valid to disprove it via "~p(y)", and it is not valid to complain that the first proposition is general but the second is specific.
Moving from the general to the specific myself, that laptop example is perfect. It is utterly baffling to me that people can insist we will be able to safely reason about the safety of AGI when we have yet to do so much as produce a consumer operating syste...
If everybody understood the problem, then allowing farmers to keep their current level of water rights but also allowing them to choose between irrigation and resale would be a Pareto improvement. "Do I grow and export an extra single almond, or do I let Nestle export an extra twenty bottles of water?" is a question which is neutral with respect to water use but which has an obvious consistent answer with respect to profit and utility.
But as is typical, beneficiaries of price controls benefit from not allowing the politicians' electorate to unde...
The "understandable"+"exploit" category would include my personal favorite introduction, the experiment in Chapter 17. From "Thursday" to "that had been the scariest experimental result in the entire history of science" is about 900 words. This section is especially great because it does the whole "deconstruction of canon"/"reconstruction of canon" bit in one self-contained section; that pattern is one the best aspects of HPMOR but usually the setup and the payoff are dozens of chapters apart, with many so interleaved with the plot that the pay... (read more)