[Link] Will Superintelligent Machines Destroy Humanity?
A summary and review of Bostrom's Superintelligence is in the December issue of Reason magazine, and is now posted online at Reason.com.
In economics, "we can model utility as logarithmic in wealth" feels, even after adding human capital to wealth, like a silly asymptotic approximation, one that obviously breaks down in the other direction: as wealth goes to zero, modeled utility goes to negative infinity.
In cosmology, though, the difference between "humanity only gets a millionth of its light cone" and "humanity goes extinct" actually does feel bigger than the difference between "humanity only gets a millionth of its light cone" and "humanity gets a fifth of its light cone"; not infinitely bigger, but a lot more than you'd expect by modeling marginal utility as a constant as wealth goes to zero.
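To make that asymmetry concrete, a quick numerical sketch (log utility assumed purely for illustration; the specific fractions are just the ones above):

```python
import numpy as np

# Log utility of wealth: u(w) = ln(w) blows up to -infinity as w -> 0.
for w in [1.0, 1e-3, 1e-6, 1e-12]:
    print(f"u({w:g}) = {np.log(w):+.1f}")

# Reading "wealth" as "fraction of the light cone": under log utility the
# gap between a fifth and a millionth is finite...
print(np.log(0.2) - np.log(1e-6))  # ~12.2
# ...while the gap between a millionth and extinction (w -> 0) is unbounded.
# Under constant marginal utility the ordering reverses: 0.2 - 1e-6 dwarfs
# 1e-6 - 0, which is the intuition being pushed back on above.
```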
This is all subjective; others' feelings may differ.
(I'm also open in theory to valuing an appropriately-complete successor to humanity equally to humanity 1.0, whether the successor is carbon or silicon or whatever, but I don't see how "appropriately-complete" is likely so I'm ignoring the possibility above.)
It's hard to apply general strategic reasoning to anything in a single forward pass, isn't it? If your LLM has to come up with an answer that begins with the next token, you'd better hope the next token is right. IIRC this is the popular explanation for why LLM output seems to be so much better when you just add something like "Let's think step by step" to the prompt.
Is anyone trying to incorporate this effect into LLM training yet? Add an "I'm thinking" and an "I'm done thinking" to the output token set, and only have the main "predict the next token in a way that matches the training data" loss... (read more)
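One guess at what that truncated proposal might look like, as a minimal sketch: the "I'm thinking"/"I'm done thinking" token ids and the masking function below are hypothetical, not any existing training API, and plain PyTorch is the only thing assumed.

```python
import torch
import torch.nn.functional as F

# Hypothetical ids for the "I'm thinking" / "I'm done thinking" tokens.
THINK_START, THINK_END = 50257, 50258

def loss_outside_thinking(logits, targets):
    """Next-token cross-entropy, applied only to tokens outside
    THINK_START...THINK_END spans, so the model's scratch-pad tokens
    aren't forced to match the training text."""
    inside = torch.zeros_like(targets, dtype=torch.bool)
    depth = 0
    for i, t in enumerate(targets.tolist()):
        if t == THINK_START:
            depth += 1
        inside[i] = depth > 0          # mask the markers and everything between
        if t == THINK_END:
            depth = max(depth - 1, 0)
    keep = ~inside
    return F.cross_entropy(logits[keep], targets[keep])
```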
I have never seen any convincing argument why "if we die from technological singularity it will" have to "be pretty quick".
The arguments for instrumental convergence apply not just to Resource Acquisition as a universal subgoal but also to Quick Resource Acquisition as a universal subgoal. Even if "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else", the sooner it repurposes those atoms the larger a light-cone it gets to use them in. Even if an Unfriendly AI sees humans as a threat and "soon" might be off the table, "sudden" is still obviously good... (read more)
That was astonishingly easy to get working, and now on my laptop's 3060 I can write a new prompt and generate another 10-odd samples every few minutes. Of course, I do mean 10 odd samples: most of the human images it's giving me have six fingers on one hand and/or a vaguely fetal-alcohol-syndrome vibe about the face, and none of them could be mistaken for a photo or even art by a competent artist yet. But they're already better than any art I could make, and I've barely begun to experiment with "prompt engineering"; maybe I should have done that on easier subjects before jumping into the uncanny valley of realistic human images headfirst.
Only optimizedSD/optimized_txt2img.py works for me so far, though. scripts/txt2img.py, as well as any version of img2img.py, dies on my 6GB card with RuntimeError: CUDA out of memory.
Update: in the optimization fork at https://github.com/basujindal/stable-diffusion, optimized_txt2img.py works on my GPU as well.
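For reference, the kind of invocation that works, written from memory of the fork's flags (they track the upstream CompVis txt2img.py arguments, so check the script's --help before trusting these):

```
python optimizedSD/optimized_txt2img.py \
  --prompt "a portrait photo of an astronaut" \
  --H 512 --W 512 --n_samples 5 --ddim_steps 50 --seed 42
```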
we still need to address ocean acidification
And changes in precipitation patterns (I've seen evidence that reducing solar incidence is going to reduce ocean evaporation, independent of temperature).
There's also the "double catastrophe" problem to worry about. Even if the median expected outcome of a geoengineering process is decent, the downside variance becomes much worse.
I still suspect MCB (marine cloud brightening) is our least bad near- to medium-term option, and even in the long term the possibility of targeted geoengineering to improve local climates is awfully tempting, but it's not a panacea.
As an unrelated aside, that CCC link rates "Methane Reduction Portfolio" as "Poor"; I'd have marked it "Counterproductive" for the moment. The biggest long-term global warming problem is CO2 (thanks to the short half-life of methane), and the biggest obstacle to CO2 emissions reduction is voters who think global warming is oversold. Let the problem get bigger until it can't be ignored, and then pick the single-use-only low-hanging fruit.
Alex has not skipped a grade or been put in some secret fast-track program for kids who went to preschool, because no such program exists.
Even more confounding: my kids have been skipping kindergarten in part because they didn't go to preschool. My wife works from home, and has spent a lot of time teaching them things and double-checking things they teach themselves.
Preschools don't do tracking any more than grade schools do, so even if in theory they might provide better instruction than the average overworked parent(s), their output will be 100% totally-ready-for-kindergarten kids (who will be stuck as you describe), which in the long term won't look as good as a mix of 95% not-quite-as-ready-for-kindergarten kids (who will catch up as you describe) and 5% ready-for-first-grade kids (who will permanently be a year ahead).
Gah, of course you're correct. I can't imagine how I got so confused but thank you for the correction.
You don't need any correlation between X and Y to have E[XY] = E[X]E[Y]; independence is enough. Suppose both variables are 1 with probability .5 and 2 with probability .5; then each has mean 1.5, and the mean of their product is 2.25 = 1.5 × 1.5.
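A quick Monte Carlo check of that (plain numpy; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent variables, each 1 or 2 with probability .5.
x = rng.choice([1.0, 2.0], size=1_000_000)
y = rng.choice([1.0, 2.0], size=1_000_000)

print(x.mean(), y.mean())   # ~1.5 each
print((x * y).mean())       # ~2.25 = 1.5 * 1.5, i.e. E[XY] = E[X]E[Y]
```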
Not quite. Expected value is linear but doesn't commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you'd *only* have the mean of the result, whereas what would really be a "paradox" is if the probability of our being alone turned out to be tiny.
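To illustrate with a toy version of that point (the factor ranges below are made up, purely for illustration): summing log-space means does recover the mean of log N, but the probability statements the "paradox" turns on require the whole distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Drake-style product: log10(N) = sum of independent uncertain factors,
# each log-uniform over a made-up range of orders of magnitude.
ranges = [(-3, 1), (-4, 0), (-2, 2)]
log_n = sum(rng.uniform(lo, hi, size=1_000_000) for lo, hi in ranges)

# Point estimates of the means in log space sum to the mean of log10(N):
print(sum((lo + hi) / 2 for lo, hi in ranges))  # -3.0
print(log_n.mean())                             # ~ -3.0

# But the mean alone says nothing about the tails:
print((10**log_n < 1e-6).mean())  # P(N is astronomically small)
print((10**log_n > 1.0).mean())   # P(N is at least 1)
```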
A new article in Biology Letters shows that under some conditions in which animals appear to behave "irrationally" (by apparently failing to conform to the Independence of Irrelevant Alternatives or even the Transitivity axioms of decision theory), the animal behavior may in fact resemble utility-maximizing strategies which also appear to violate those axioms. The optimal strategies alter current preferences in response to the information conveyed by the current presence or absence of various alternatives.
A press release about the article is available from the lead author's university, U. Bristol. A news piece summarizing it is at Nature's website.
These results probably shouldn't surprise the most careful rational thinkers. For instance, "I prefer A... (read more)
The "understandable"+"exploit" category would include my personal favorite introduction, the experiment in Chapter 17. From "Thursday" to "that had been the scariest experimental result in the entire history of science" is about 900 words. This section is especially great because it does the whole "deconstruction of canon"/"reconstruction of canon" bit in one self-contained section; that pattern is one the best aspects of HPMOR but usually the setup and the payoff are dozens of chapters apart, with many so interleaved with the plot that the payoff counts as a major spoiler.
On the other hand, that section works best if you already know what P and NP and RSA cryptography are (and if you're... (read more)