All of lbThingrb's Comments + Replies

This is an appealingly parsimonious account of mathematical knowledge, but I feel like it leaves an annoying hole in our understanding of the subject, because it doesn't explain why practicing math as if Platonism were correct is so ridiculously reliable and so much easier and more intuitive than other ways of thinking about math.

For example, I have very high credence that no one will ever discover a deduction of 0=1 from the ZFC axioms, and I guess I could just treat that as an empirical hypothesis about what kinds of physical instantiations of ZFC proofs... (read more)

lbThingrb*30

To me, this is a clear example of there being no such thing as an "objective" truth about the validity of the parallel postulate - you are entirely free to assume either it or incompatible alternatives. You end up with equally valid theories; it's just that those theories are applicable to different models.

This is true, but there's an important caveat: Mathematicians accepted Euclidean geometry long before they accepted non-Euclidean geometry, because they took it to be intuitively evident that a model of Euclid's axioms existed, whereas the existence of mod... (read more)

Answer by lbThingrb1-2

I have spent a long time looking in vain for any reason to think ZFC is consistent, other than that it holds in the One True Universe of Sets (OTUS, henceforth). So far I haven't found anything compelling, and I am quite doubtful at this point that any such justification exists.

Just believing in the OTUS seems to provide a perfectly satisfactory account of independence and nonstandard models, though: They are just epiphenomenal shadows of the OTUS, which we have deduced from our axioms about the OTUS. They may be interesting and useful (I rather like nonst... (read more)

What’s more interesting is that I just switched medications from one that successfully managed the depression but not the anxiety to one that successfully manages the anxiety but not the depression.

May I ask which medications?

2Sable
You may! Zoloft managed the depression but not the anxiety, and Lexapro the anxiety but not the depression. For what it's worth, I have zero expectation that anyone else would share my exact response to the medications; both have helped plenty of people in the past.

For macroscopic rotation:

  • Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
  • Without smooth surfaces to roll on, rolling is not better than walking.

There are other uses for macroscopic rotation besides rolling on wheels, e.g. propellers, gears, flywheels, drills, and turbines. Also, how to provide nutrients to detached components, or build smooth surfaces to roll on so your wheels will be useful, seem like problems that intelligence is better at solving than evolution.

2bhauth
Propellers are not better than flapping wings or fins. Machines use them because they're easier to build and drive.
lbThingrb5816

I'm middle-aged now, and a pattern I've noticed as I get older is that I keep having to adapt my sense of what is valuable, because desirable things that used to be scarce for me keep becoming abundant. Some of this is just growing up, e.g. when I was a kid my candy consumption was regulated by my parents, but then I had to learn to regulate it myself. I think humans are pretty well-adapted to that sort of value drift over the life course. But then there's the value drift due to rapid technological change, which I think is more disorienting. E.g. I investe... (read more)

4ErickBall
I think our world actually has a great track record of creating artificial scarcity for the sake of creating meaning (in terms of enjoyment, striving to achieve a goal, sense of accomplishment). Maybe "purpose" in the most profound sense is tough to do artificially, but I'm not sure that's something most people feel a whole lot of anyway? I'm pretty optimistic about our ability to adapt to a society of extreme abundance by creating "games" (either literal or social) that become very meaningful to those engaged in them.

This all does seem like work better done than not done; who knows, usefulness could ensue in various ways, and the downsides seem relatively small.

I disagree about item #1, automating formal verification. From the paper:

9.1 Automate formal verification:

As described above, formal verification and automatic theorem proving more generally needs to be fully automated. The awe-inspiring potential of LLMs and other modern AI tools to help with this should be fully realized.

Training LLMs to do formal verification seems dangerous. In fact, I think I would extend that t... (read more)

3Herb Ingram
Formally proving that some X you could realistically build has property Y is way harder than building an X with property Y. I know of no exceptions (formal proof only applies to programs and other mathematical objects). Do you disagree?

I don't understand why you expect the existence of a "formal math bot" to lead to anything particularly dangerous, other than by being another advance in AI capabilities which goes along with other advances (which is fair I guess).

Human-long chains of reasoning (as used for taking action in the real world) neither require nor imply the ability to write formal proofs. Formal proofs are about math, and making use of math in the real world requires modelling, which is crucial, hard, and usually very informal. You make assumptions that are obviously wrong, derive something from these assumptions, and make an educated guess that the conclusions still won't be too far from the truth in the ways you care about. In the real world, this only works when your chain of reasoning is fairly short (human-length), just as arbitrarily complex and long-term planning doesn't work, while math uses very long chains of reasoning.

The only practically relevant application so far seems to be cryptography, because computers are extremely reliable and thus modeling is comparatively easy. However, plausibly it's still easier to break some encryption scheme than to formally prove that your practically relevant algorithm could break it. LLMs that can do formal proof would greatly improve cybersecurity across the board (good for delaying some scenarios of AI takeover!). I don't think they would advance AI capabilities beyond the technological advances used to build them and increasing AI hype.

However, I also don't expect to see useful formal proofs about useful LLMs in my lifetime (you could call this "formal interpretability"? We would first get "informal interpretability" that says useful things about useful models.) Maybe some other AI approach will be more interpretab

Correct. Each iteration of the halting problem for oracle Turing machines (called the "Turing jump") takes you to a new level of relative undecidability, so in particular true arithmetic is strictly harder than the halting problem.

The true first-order theory of the standard model of arithmetic has Turing degree 0^(ω). That is to say, with an oracle for true arithmetic, you could decide the halting problem, but also the halting problem for oracle Turing machines with a halting-problem-for-regular-Turing-machines oracle, and the halting problem for oracle Turing machines with a halting oracle for those oracle Turing machines, and so on for any finite number of iterations. Conversely, if you had an oracle that solves the halting problem for any of these finitely-iterated-halting-prob... (read more)
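For reference, the jump hierarchy sketched above can be written out explicitly (a sketch in standard computability-theory notation, not part of the original comment):

```latex
\[
\begin{aligned}
0' &= \deg(\mathrm{Halt})
  && \text{(the halting problem)} \\
0^{(n+1)} &= \bigl(0^{(n)}\bigr)'
  && \text{(the $n$-times iterated Turing jump)} \\
\operatorname{Th}(\mathbb{N}) &\equiv_T 0^{(\omega)}
  = \deg\Bigl(\textstyle\bigoplus_{n<\omega} 0^{(n)}\Bigr)
  && \text{(true arithmetic, by Post's theorem)}
\end{aligned}
\]
```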

2Noosphere89
So the answer is no, even a magical halt oracle cannot compute the true first-order theory of the standard model of arithmetic, but there are machines with which you can compute the true first-order theory of the standard model of arithmetic.

That HN comment you linked to is almost 10 years old, near the bottom of a thread on an unrelated story, and while it supports your point, I don't notice what other qualities it has that would make it especially memorable, so I'm kind of amazed that you surfaced it at an appropriate moment from such an obscure place and I'm curious how that happened.

gwern120

Oh, it's just from my list of reward hacking; I don't mention the others because most of them aren't applicable to the train/deploy distinction. And I remember it because I remember a lot of things, and this one was particularly interesting to me for exactly the reason I linked it just now - illustrating that optimization processes can hack train/deploy distinctions as a particularly extreme form of 'data leakage'. As for where I got it, I believe someone sent it to me way back when I was compiling that list.

All right, here's my crack at steelmanning the Sin of Gluttony theory of the obesity epidemic. Epistemic status: armchair speculation.

We want to explain how it could be that in the present, abundant hyperpalatable food is making us obese, but in the past that was not so to nearly the same extent, even though conditions of abundant hyperpalatable food were not unheard of, especially among the upper classes. Perhaps the difference is that, today, abundant hyperpalatable food is available to a greater extent than ever before to people in poor health.

In the pa... (read more)

This question is tangential to the main content of your post, so I have written it up in a separate post of my own, but I notice I am confused that you and many other rationalists are balls to the wall for cheap and abundant clean energy and other pro-growth, tech-promoting public policies, while also being alarmist about AI X-risk, and I am curious if you see any contradiction there:

Is There a Valley of Bad Civilizational Adequacy?

This Feb. 20th Twitter thread from Trevor Bedford argues against the lab-escape scenario. Do read the whole thing, but I'd say that the key points not addressed in parent comment are:

Data point #1 (virus group): #SARSCoV2 is an outgrowth of circulating diversity of SARS-like viruses in bats. A zoonosis is expected to be a random draw from this diversity. A lab escape is highly likely to be a common lab strain, either exactly 2002 SARS or WIV1.

But apparently SARSCoV2 isn't that. (See pic.)

Data point #2 (receptor binding domain): This point is rath
... (read more)
1Rudi C
We need to update down on any complex, technical datapoint that we don’t fully understand, as China has surely paid researchers to manufacture hard-to-evaluate evidence for its own benefit (regardless of the truth of the accusation). This is a classic technique that I have seen a lot in propaganda against laypeople, and there is every reason it should have been employed against the “smart” people in the current coronavirus situation.
5ChristianKl
Given that there's the claim from Botao Xiao's The possible origins of 2019-nCoV coronavirus, that this seafood market was located 300m from a lab (which might or might not be true), this market doesn't seem like it reduces chances.
4[anonymous]
If it was a lab-escape and the CCP knew early enough, they could simply manufacture the data to point at the market as the origin.
lbThingrbΩ220

Generalized to n dimensions in my reply to Adele Lopez's solution to #9 (without any unnecessary calculus :)

lbThingrbΩ4110

Thanks! I find this approach more intuitive than the proof of Sperner's lemma that I found in Wikipedia. Along with nshepperd's comment, it also inspired me to work out an interesting extension that requires only minor modifications to your proof:

d-spheres are orientable manifolds, hence so is a decomposition of a d-sphere into a complex K of d-simplices. So we may arbitrarily choose one of the two possible orientations for K (e.g. by choosing a particular simplex P in K, ordering its vertices from 1 to d + 1, and declaring it to be the prototypi... (read more)
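Not the d-sphere version worked out above, but the parity phenomenon at the heart of Sperner's lemma is easy to check concretely in the 1-dimensional case: subdivide an interval, label the left endpoint 1 and the right endpoint 2, label interior vertices arbitrarily, and the number of bichromatic edges is always odd. A minimal sketch (the labeling setup is my own illustration, not from the proof above):

```python
import random

def count_bichromatic_edges(labels):
    """Count edges whose two endpoints carry different labels."""
    return sum(1 for a, b in zip(labels, labels[1:]) if a != b)

# Random Sperner labelings of a subdivided interval: the left endpoint
# is labeled 1, the right endpoint 2, interior vertices arbitrarily.
random.seed(0)
for _ in range(1000):
    interior = [random.choice([1, 2]) for _ in range(random.randint(0, 20))]
    labels = [1] + interior + [2]
    assert count_bichromatic_edges(labels) % 2 == 1
print("1-D Sperner parity held in all trials")
```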

2Adele Lopez
Awesome! I was hoping that there would be a way to do this!
lbThingrbΩ5180

Thanks, this is a very clear framework for understanding your objection. Here's the first counterargument that comes to mind: Minimax search is a theoretically optimal algorithm for playing chess, but is too computationally costly to implement in practice. One could therefore argue that all that matters is computationally feasible heuristics, and modeling an ideal chess player as executing a minimax search adds nothing to our knowledge of chess. OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple boar... (read more)
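The bounded-minimax-plus-heuristic idea can be made concrete with a toy sketch (the nested-list game tree and the leaf-averaging heuristic are made up for illustration; a real chess engine would use a much richer board evaluation):

```python
def flatten(node):
    """Collect the leaf values reachable under a game-tree node."""
    return [node] if isinstance(node, int) else [x for c in node for x in flatten(c)]

def heuristic(node):
    """Crude 'board evaluation': average of the reachable leaf values."""
    leaves = flatten(node)
    return sum(leaves) / len(leaves)

def minimax(node, depth, maximizing):
    """Search `depth` plies, then fall back on the heuristic at the frontier."""
    if depth == 0 or isinstance(node, int):
        return heuristic(node)
    values = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, [9, 1]]]  # nested lists are internal nodes, ints are leaves
print(minimax(tree, depth=4, maximizing=True))  # exhaustive search: 3.0
print(minimax(tree, depth=1, maximizing=True))  # truncated search + heuristic: 4.0
```

Note that the truncated search returns a different (heuristic) value than the exhaustive one, which is exactly the gap the comment is pointing at: the ideal algorithm and its feasible approximation are distinct objects, yet the former still organizes our understanding of the latter.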

OTOH, doing a minimax search of the game tree for some bounded number of moves, then applying a simple board-evaluation heuristic at the leaf nodes, is a pretty decent algorithm in practice.

I've written previously about this kind of argument -- see here (scroll down to the non-blockquoted text). tl;dr we can often describe the same optimum in multiple ways, with each way giving us a different series that approximates the optimum in the limit. Whether any one series does well or poorly when truncated to N terms can't be explained by saying "... (read more)

lbThingrbΩ230

I didn't mean to suggest that the possibility of hypercomputers should be taken seriously as a physical hypothesis, or at least, any more seriously than time machines, perpetual motion machines, faster-than-light, etc. And I think it's similarly irrelevant to the study of intelligence, machine or human. But in my thought experiment, the way I imagined it working was that, whenever the device's universal-Turing-machine emulator halted, you could then examine its internal state as thoroughly as you liked, to make sure everything was consistent... (read more)

3Vanessa Kosoy
Nearly everything you said here was already addressed in my previous comment. Perhaps I didn't explain myself clearly?

I wrote before that "I wonder how would you tell whether it is the hypercomputer you imagine it to be, versus the realization of the same hypercomputer in some non-standard model of ZFC?" So, the realization of a particular hypercomputer in a non-standard model of ZFC would pass all of your tests. You could examine its internal state or its output any way you like (i.e. ask any question that can be formulated in the language of ZFC) and everything you see would be consistent with ZFC. The number of steps for a machine that shouldn't halt would be a non-standard number, so it would not fit on any finite storage. You could examine some finite subset of its digits (either from the end or from the beginning), for example, but that would not tell you the number is non-standard. For any question of the form "is n larger than some known number n0?" the answer would always be "yes".

Once again, there is a difference of principle. I wrote before that: "...given an uncomputable function h and a system under test f, there is no sequence of computable tests that will allow you to form some credence about the hypothesis f=h s.t. this credence will converge to 1 when the hypothesis is true and 0 when the hypothesis is false. (This can be made an actual theorem.) This is different from the situation with normal computers (i.e. computable h) when you can devise such a sequence of tests." So, with normal computers you can become increasingly certain your hypothesis regarding the computer is true (even if you never become literally 100% certain, except in the limit), whereas with a hypercomputer you cannot.

Yes, I already wrote that: "Although you can in principle have a class of uncomputable hypotheses s.t. you can asymptotically verify f is in the class, for example the class of all functions h s.t. it is consistent with ZFC that h is the halting function. But
lbThingrbΩ020

This can’t be right ... Turing machines are assumed to be able to operate for unbounded time, using unbounded memory, without breaking down or making errors. Even finite automata can have any number of states and operate on inputs of unbounded size. By your logic, human minds shouldn’t be modeling physical systems using such automata, since they exceed the capabilities of our brains.

It’s not that hard to imagine hypothetical experimental evidence that would make it reasonable to believe that hypercomputers could exist. For example, suppose someone demonstr... (read more)

8Vanessa Kosoy
It is true that a human brain is more precisely described as a finite automaton than a Turing machine. And if we take finite lifespan into account, then it's not even a finite automaton. However, these abstractions are useful models since they become accurate in certain asymptotic limits that are sufficiently useful to describe reality. On the other hand, I doubt that there is a useful approximation in which the brain is a hypercomputer (except maybe some weak forms of hypercomputation like non-uniform computation / circuit complexity).

Moreover, one should distinguish between different senses in which we can be "modeling" something. The first sense is the core, unconscious ability of the brain to generate models, and in particular that which we experience as intuition. This ability can (IMO) be thought of as some kind of machine learning algorithm, and I doubt that hypercomputation is relevant there in any way. The second sense is the "modeling" we do by manipulating linguistic (symbolic) constructs in our conscious mind. These constructs might be formulas in some mathematical theory, including formulas that represent claims about uncomputable objects. However, these symbolic manipulations are just another computable process, and it is only the results of these manipulations that we use to generate predictions and/or test models, since this is the only access we have to those uncomputable objects.

Regarding your hypothetical device, I wonder how would you tell whether it is the hypercomputer you imagine it to be, versus the realization of the same hypercomputer in some non-standard model of ZFC? (In particular, the latter could tell you that some Turing machine halts when it "really" doesn't, because in the model it halts after some non-standard number of computing steps.) More generally, given an uncomputable function h and a system under test f, there is no sequence of computable tests that will allow you to form some credence about the hypothesis f=h s.t. thi
Apologies for the stark terms if it felt judgmental or degrading!

No worries! I mostly just wrote that comment for the lulz. And the rest was mostly so people wouldn't think I was using humor to obliquely endorse social Darwinism.

I've never heard of anyone doing this directly. Has anyone else? If not, there's probably a reason. I suppose occupational certification programs serve a similar filtering function. Anyway, your suggestion might be more palatable if it were in the form of a deposit refundable if and only if the applicant shows up/answers the phone for scheduled interviews. You would also need a trusted intermediary to hold the deposits, or else we would see a flood of fake job-interview-deposit scams. And even if you had such a trusted intermediary, I suspect tha... (read more)

2ShardPhoenix
> I've never heard of anyone doing this directly. Has anyone else? There's a Brazilian job website that requires users to pay, though I think it's on a subscription basis.
It's really unclear, as a society, how to get them into a position where they can provide as much value as the resources they take.

Harvest their reusable organs, and melt down what remains to make a nutrient paste suitable for livestock and the working poor?

(Kidding! But that sort of thing is always where my mind wanders when people put the question in such stark terms, perhaps because I myself am a chronically unemployed "taker" (mental illness). Anyway, one of the long-term goals of AI and automation research, as I understand it, is to tur... (read more)

2Dagon
Apologies for the stark terms if it felt judgmental or degrading! I have a LOT of sympathy for people who temporarily or permanently aren't a good fit for the corporate standards of common high-paying jobs. I'm lucky enough currently to be well-employed and providing enough value that I don't feel bad being highly paid, but that wasn't always the case, and I recognize that the combination of talents and skills that work for me is pretty much pure luck for me to have. The ability to focus for many hours and work my ass off is mildly rare and incredibly lucky for me to have.

I don't mean any blame in my recognition that resources are limited and it's FAR easier for those lucky enough to be smart and conforming to get some of those resources. I recoil from the label "taker" - it's an unhelpful model, more about social status than about understanding or problem-solving.

I do honestly believe that one of the biggest challenges for humanity's moral and economic growth (which I see as correlated, if not causal) in the medium-term (next 2 generations, modulo singularity) is how to make more people's contributions larger and more legible, so it's trivially obvious that we should give a much higher percentage of humanity more resources and status than we do today.

I'm curious if anyone has recollections of what it was like trying to hire for similar positions in recent years, when the unemployment rate was much higher. That is to say, how much of this is base-rate human flakiness, and how much is attributable to the tight labor market having already hoovered up almost all the well-functioning adults?

I’m not actually sure what to make of that—should we write off some moral intuitions as clearly evolved for not-actually-moral reasons ... ?

All moral intuitions evolved for not-actually-moral reasons, because evolution is an amoral process. That is not a reason to write any of them off, though. Or perhaps I should say, it is only a reason to "write them off" to the extent that it feels like it is, and the fact that it sometimes does, to some people, is as fine an example as any of the inescapable irrationality of moral intuitions.

If we have the m
... (read more)
• You are not required to create a happy person, but you are definitely not allowed to create a miserable one

Who's going around enforcing this rule? There's certainly a stigma attached to people having children when those children will predictably be unhappy, but most people aren't willing to resort to, e.g., nonconsensual sterilization to enforce it, and AFAIK we haven't passed laws to the effect that people can be convicted, under penalty of fine or imprisonment, of having children despite knowing that those children would be at high ... (read more)

1tdb
It is not even a norm. If I marry my true love, someone else who loves my spouse may feel miserable as a result. No one is obligated to avoid creating this sort of misery in another person. We might quibble that such a person is immature and taking the wrong attitude, but the "norm" does not make exceptions where the victims are complicit in their own misery, it just prohibits anyone from causing it.

We might be able to construct a similar thought experiment for "dire situations". If I invent a new process that puts you out of business by attracting all your customers, your situation may become dire, due to your sudden loss of income. Am I obligated in any way to avoid this? I think not.

Those two norms (don't cause misery or dire situations) only work as local norms, within your local sphere of intimate knowledge. In a large-scale society, there is no way to assure that a particular decision won't change something that someone depends upon emotionally or economically. This is just a challenge of cosmopolitan life, that I have the ultimate responsibility for my emotional and economic dependencies, in the literal sense that I am the one who will suffer if I make an unwise or unlucky choice. I can't count on the system (any system) to rectify my errors (though different systems may make my job harder or easier).
A shy person cannot learn, an impatient person cannot teach

The translation you linked actually gives "A person prone to being ashamed cannot learn," which is even more on the nose. Does anyone have any advice on how to deal with this issue? I have a pretty severe case of it, especially because I tend to take a lot (a lot) longer than other people to do pretty much every kind of work I've ever tried, independently of how much intelligence I needed to apply to it. Aside from seeking medical advice for that problem (which hasn't helped... (read more)

6TheMajor
Epistemic status: uncertain. While I don't have a full answer for you, there are some ideas that might be worth trying out.

* Maybe there is a way to do smart-people stuff at your own pace, like learning from books/youtube videos instead of in a public setting. Books and videos have infinite patience. People all have different paces for everything, and if you notice that yours is lower than your peers' it might be worth carefully trying not keeping up for a while. This can be dangerous though, so be cautious with this.

* Personally I've always been frustrated with my pace of learning, and this feeling has always vanished when I look back and see how far I've come. Some (very good) lecturers explained this to me both in terms of a maze ("At first you don't know which path to take, so you have to spend a lot of time making dead turns, and then when you're done it looks like you didn't cover that much distance. But really you did.") and in terms of exponential growth ("At the end of a learning curve you look at your rate of progress, which is the derivative of your knowledge with respect to time, and say [I could have gotten here way faster, look at how high the derivative is now and how low my total knowledge level is]. But that's not how exponentials work, since learning also increases the rate of learning."). This has really helped me think of asking 'stupid' questions as an investment, and if I look back at the things I didn't know half a year ago I tend to be quite proud of my growth.

The Wikipedia link on amphibian decline mentioned the effects of artificial lighting on the behavior of insect prey species as a possible contributor. I suppose it’s possible that that’s a factor in the observations from the German study as well, particularly since they only looked at flying insects. But the observations were apparently made in nature preserves, so one would think that artificial lighting wouldn’t be that common in those habitats. There could still be indirect effects, though.

• How to be aware of other people’s points of view without merging with them
...
• How to restrain yourself from anger or upset
• How to take unflattering comments or disagreement in stride
...
• How to resist impulses to evade the issue or make misleading points
...
• How to understand another person’s perspective

It wasn't the reason I got into it in the first place, but I have found mindfulness practice helpful for these things. I think that's because mindfulness involves a lot of introspection and metacognition, and those skills transfer pretty well ... (read more)

Toastmasters? Never tried them myself, but I get the impression that they aim to do pretty much the thing you're looking for.

Groups of friends often coalesce around common interests. This group of friends coalesced around a common interest in rationality and self-improvement. That this is possible to do is potentially useful information to other people who are interested in rationality and self-improvement and making new friends.

1Lumifer
Was there any doubt about this? Did anyone ever say "No, you can't do that"?