
What’s more interesting is that I just switched medications from one that successfully managed the depression but not the anxiety to one that successfully manages the anxiety but not the depression.

May I ask which medications?

For macroscopic rotation:

  • Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
  • Without smooth surfaces to roll on, rolling is not better than walking.

There are other uses for macroscopic rotation besides rolling on wheels, e.g. propellers, gears, flywheels, drills, and turbines. Also, how to provide nutrients to detached components, or build smooth surfaces to roll on so your wheels will be useful, seem like problems that intelligence is better at solving than evolution.

I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I invested a lot of my youth into learning to use software which is now obsolete. It feels like my youthful enthusiasm for learning new software skills, and comparative lack thereof as I get older, was an adaptation to a world where valuable skills learned in childhood could be expected to mostly remain valuable throughout life. It felt like a bit of a rug-pull how much that turned out not to be the case w.r.t. software.

But the rise of generative AI has really accelerated this trend, and I'm starting to feel adrift and rudderless. One of the biggest changes from scarcity to abundance in my life was that of interesting information, enabled by the internet. I adapted to it by re-centering my values around learning skills and creating things. As I contemplate what AI can already do, and extrapolate that into the near future, I can feel my motivation to learn and create flagging.

If, and to the extent that, we get a "good" singularity, I expect that it will have been because the alignment problem turned out to be not that hard, the sort of thing we could muddle through improvisationally. But that sort of singularity seems unlikely to preserve something as delicately balanced as the way that (relatively well-off) humans get a sense of meaning and purpose from the scarcity of desirable things. I would still choose a world that is essentially a grand theme park full of delightful experience machines over the world as it is now, with all its sorrows, and certainly I would choose theme-park world over extinction. But still ... OP beautifully crystallizes the apprehension I feel about even the more optimistic end of the spectrum of possible futures for humanity that are coming into view.

This all does seem like work better done than not done; who knows, usefulness could ensue in various ways, and the downsides seem relatively small.

I disagree about item #1, automating formal verification. From the paper:

9.1 Automate formal verification:

As described above, formal verification and automatic theorem proving more generally needs to be fully automated. The awe-inspiring potential of LLMs and other modern AI tools to help with this should be fully realized.

Training LLMs to do formal verification seems dangerous. In fact, I think I would extend that to any method of automating formal verification that would be competitive with human experts. Even if it didn't use ML at all, the publication of a superhuman theorem-proving AI, or even just the public knowledge that such a thing existed, seems likely to lead to the development of more general AIs with similar capabilities within a few years. Without a realistic plan for how to use such a system to solve the hard parts of AI alignment, I predict that it would just shorten the timeline to unaligned superintelligence, by enabling systems that are better at sustaining long chains of reasoning, which is one of the major advantages humans still have over AIs. I worry that vague talk of using formal verification for AI safety is in effect safety-washing a dangerous capabilities research program.

All that said, a superhuman formal-theorem-proving assistant would be a super-cool toy, so if anyone has a more detailed argument for why it would actually be a net win for safety in expectation, I'd be interested to hear it.

Correct. Each iteration of the halting problem for oracle Turing machines (called the "Turing jump") takes you to a new level of relative undecidability, so in particular true arithmetic is strictly harder than the halting problem.

The true first-order theory of the standard model of arithmetic has Turing degree 0^(ω). That is to say, with an oracle for true arithmetic, you could decide the halting problem, but also the halting problem for oracle Turing machines with a halting-problem-for-regular-Turing-machines oracle, and the halting problem for oracle Turing machines with a halting oracle for those oracle Turing machines, and so on for any finite number of iterations. Conversely, if you had an oracle that solves the halting problem for any of these finitely-iterated-halting-problem-oracle Turing machines, you could decide true arithmetic.
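To spell out the notation (standard computability theory; see Post's theorem): writing X′ for the Turing jump of X, i.e. the halting problem for Turing machines with an oracle for X, the hierarchy described above is

```latex
\emptyset' = \text{the halting problem}, \qquad
\emptyset^{(n+1)} = \left(\emptyset^{(n)}\right)', \qquad
\emptyset^{(n)} <_T \emptyset^{(n+1)} \text{ for all } n.
```

True arithmetic sits strictly above every finite level, at the effective join of all of them:

```latex
\mathrm{Th}(\mathbb{N}, +, \times)
\;\equiv_T\;
\emptyset^{(\omega)}
\;=\;
\bigl\{\, \langle n, m \rangle : m \in \emptyset^{(n)} \,\bigr\}.
```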

That HN comment you linked to is almost 10 years old, near the bottom of a thread on an unrelated story, and while it supports your point, I don't notice what other qualities it has that would make it especially memorable. So I'm kind of amazed that you surfaced it at an appropriate moment from such an obscure place, and I'm curious how that happened.

All right, here's my crack at steelmanning the Sin of Gluttony theory of the obesity epidemic. Epistemic status: armchair speculation.

We want to explain how it could be that in the present, abundant hyperpalatable food is making us obese, but in the past that was not so to nearly the same extent, even though conditions of abundant hyperpalatable food were not unheard of, especially among the upper classes. Perhaps the difference is that, today, abundant hyperpalatable food is available to a greater extent than ever before to people in poor health.

In the past, food cultivation and preparation were much more labor intensive than in the present, so you either had to pay a much higher price for your hyperpalatable food, or put in the labor yourself. Furthermore, there were fewer opportunities to make the necessary income from sedentary work, and there wasn't much of a welfare state. Thus, if you were in poor health, you were much more likely in the past than today to be selected out of the class of people who had access to abundant hyperpalatable food. Obesity is known to be a downstream effect of various other health problems, but only if you are capable of consuming enough calories, and have access to food that you want to overeat.

Furthermore, it is plausible that some people, due to genetics or whatever, have a tendency to be in good health when they lack access to abundant hyperpalatable food, and to become obese and thus unhealthy when they have access to abundant hyperpalatable food. Thus there is a feedback loop where being healthier makes you more productive, which makes hyperpalatable food more available to you, which makes you less healthy, which makes you less productive, which makes hyperpalatable food less available to you. Plausibly, in the past, this process tended towards an equilibrium at a much lower level of obesity than it does today, because of today's greater availability of hyperpalatable food to people in poor health.
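The feedback loop above can be made concrete with a toy iteration. This is a minimal sketch with made-up parameters, purely to illustrate the claimed dynamic, not a calibrated model: health drives productivity, productivity (plus a "floor" of cheap, universally available food) drives access to hyperpalatable food, and overeating erodes health.

```python
# Toy model of the health <-> food-access feedback loop.
# All functional forms and constants are invented for illustration only.

def simulate(food_availability_floor, steps=200):
    """Iterate the loop: health -> productivity -> food access -> health.

    food_availability_floor: food access everyone gets regardless of
    productivity (cheap mass-produced food, welfare state). A higher
    floor lets less healthy, less productive people keep overeating.
    """
    health = 1.0  # start metabolically healthy (1.0 = best, 0.0 = worst)
    for _ in range(steps):
        productivity = health
        # Access to hyperpalatable food: earned via productivity, or the floor.
        food_access = max(productivity, food_availability_floor)
        # Overeating erodes health; low access lets health recover.
        overeating = 0.5 * food_access
        health += 0.1 * (1.0 - overeating) - 0.1 * health
        health = min(max(health, 0.0), 1.0)
    return health

# "Past" regime: food access tracks productivity only (floor ~ 0),
# so the loop self-corrects toward a higher-health equilibrium.
# "Present" regime: a high floor (e.g. 0.9) keeps food abundant even
# as health declines, so the equilibrium settles lower.
past_health = simulate(0.0)
present_health = simulate(0.9)
```

Under these assumptions the equilibrium health is lower when the availability floor is high, matching the story above: the self-limiting loop only limits obesity if food access actually falls when health does.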

It is also plausible that our technological civilization has simply made considerable progress in the development of ever more potent gustatory superstimuli over the past century. This is a complex optimization problem, and it's not clear why we should have come close to a ceiling on it long before the present, or why just contemplating the subjective palatability of past versus present-day food would give us conscious awareness of why we are more prone to overeating the latter.

Both of these proposed causes are consistent with pre-obesity-epidemic overfeeding studies of metabolically healthy individuals failing to cause large, long-term weight gain: They suggest that the obesity epidemic is concentrated among metabolically unhealthy people who in the past simply couldn't afford to get fat, and that present-day food is importantly different.

This question is tangential to the main content of your post, so I have written it up in a separate post of my own. But I notice I am confused that you and many other rationalists are balls to the wall for cheap and abundant clean energy and other pro-growth, tech-promoting public policies while also being alarmist about AI X-risk, and I am curious whether you see any contradiction there:

Is There a Valley of Bad Civilizational Adequacy?

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See the pic in the linked thread.)

Data point #2 (receptor binding domain): This point is rather technical, please see preprint by @K_G_Andersen, @arambaut, et al at http://virological.org/t/the-proximal-origin-of-sars-cov-2/398… for full details.
But, briefly, #SARSCoV2 has 6 mutations to its receptor binding domain that make it good at binding to ACE2 receptors from humans, non-human primates, ferrets, pigs, cats, pangolins (and others), but poor at binding to bat ACE2 receptors.
This pattern of mutation is most consistent with evolution in an animal intermediate, rather than lab escape. Additionally, the presence of these same 6 mutations in the pangolin virus argues strongly for an animal origin: https://biorxiv.org/content/10.1101/2020.02.13.945485v1…
...
Data point #3 (market cases): Many early infections in Wuhan were associated with the Huanan Seafood Market. A zoonosis fits with the presence of early cases in a large animal market selling diverse mammals. A lab escape is difficult to square with early market cases.
...
Data point #4 (environmental samples): 33 out of 585 environmental samples taken from the Huanan seafood market showed as #SARSCoV2 positive. 31 of these were collected from the western zone of the market, where wildlife booths are concentrated. 15/21 http://xinhuanet.com/english/2020-01/27/c_138735677.htm…
Environmental samples could in general derive from human infections, but I don't see how you'd get this clustering within the market if these were human derived.

One scenario I recall seeing somewhere that would reconcile lab-escape with data points 3 & 4 above is that some low-level WIV employee or contractor might have sold some purloined lab animals to the wet market. No idea how plausible that is.
