This is a special post for quick takes by Decaeneus. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Having young kids is mind bending because it's not uncommon to find yourself simultaneously experiencing contradictory feelings, such as:

  • I'm really bored and would like to be doing pretty much anything else right now.
  • There will likely come a point in my future when I would trade anything, anything to be able to go back in time and re-live an hour of this.

Pretending not to see when a rule you've set is being violated can be optimal policy in parenting sometimes (and I bet it generalizes).

Example: suppose you have a toddler and a "rule" that food only stays in the kitchen. The motivation is that each time food is brought into the living room there is a small chance of an accident resulting in a permanent stain. There's a cost to enforcing the rule, as the toddler will put up a fight. Suppose that one night you feel really tired and the cost feels particularly high. If you enforce the rule, it will be much more painful than it's worth in that moment (meaning, fully discounting future consequences). If you fail to enforce the rule, you undermine your authority, which results in your toddler fighting future enforcement (of this and possibly all other rules!) much harder, as he realizes that the rule is in fact negotiable / flexible.

However, you have a third choice, which is to credibly pretend to not see that he's doing it. It's true that this will undermine your perceived competence, as an authority, somewhat. However, it does not undermine the perception that the rule is to be fully enforced if only you noticed the violation. You get to "s... (read more)

5RobertM
Huh, that went somewhere other than where I was expecting.  I thought you were going to say that ignoring letter-of-the-rule violations is fine when they're not spirit-of-the-rule violations, as a way of communicating the actual boundaries.
3Decaeneus
Perhaps that can work depending on the circumstances. In the specific case of a toddler, at the risk of not giving him enough credit, I think that type of distinction is too nuanced. I suspect that in practice this will simply make him litigate every particular application of any given rule (since it gives him hope that it might work), which raises the cost of enforcement dramatically. Potentially it might also make him more stressed, as I think there's something very mentally soothing / non-taxing about bright-line rules. I think with older kids though, it's obviously a really important lesson that the letter of the law and the spirit of the law do not always coincide. There's a bit of a blackpill that comes with that though, once you understand that people can get away with violating the spirit as long as they comply with the letter, or that complying with the spirit (which you can grok more easily) does not always guarantee compliance with the letter, which puts you at risk of getting in trouble.
4keltan
Teacher here, can confirm.

There's a justifiable model for preferring "truthiness" / vibes to analytical arguments, in certain cases. This must be frustrating to those who make bold claims (doubly so for the very few whose bold claims are actually true!)

Suppose Sophie makes the case that pigs fly in a dense 1,000 page tome. Suppose each page contains 5 arguments that refer to some of / all of the preceding pages. Sophie makes the claim that I am welcome to read the entire book, or if I'd like I can sample, say, 10 pages (10 * 5 = 50 arguments) and reassure myself that they're solid. Suppose that the book does in fact contain a lone wrong argument, a bit flip somewhere, that leads to the wrong result, but is mostly (99.9%) correct.
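A minimal sketch of the arithmetic (assuming the lone flaw sits on a single page and that a reader who lands on that page would actually spot it):

```python
from math import comb

total_pages, sampled_pages = 1000, 10

# probability that a random sample of 10 pages includes the one flawed page
p_catch = 1 - comb(total_pages - 1, sampled_pages) / comb(total_pages, sampled_pages)
print(round(p_catch, 4))  # 0.01 -- the offered spot check misses the flaw ~99% of the time
```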

If I tell Sophie that I think her answer sounds wrong, she might say: "but here's the entire argument, please go ahead, and show me where any of it is incorrect!"

Since I'm very unlikely to catch the error at a glance, and I'm unlikely to want to spend the time to read and grok the whole thing, I'm going to just say: sorry but the vibes are off, your conclusion just seems too far off my prior, I'm just going to assume you made a difficult-to-catch mistake somewhere, but I'm not going... (read more)

4sunwillrise
This seems to be the Obfuscated Arguments Problem.
1ProgramCrafter
TL;DR: a solution for LLMs of medium size already exists.

The situation would be significantly better if we could provide succinct arguments for the fact being true. The pig claim could possibly be supported by a video - though that's not much Bayesian evidence due to the possibility of forgery, but it might still restrict what the arguments can say about a pig, more so if some tests are run like adding weights and so on. Many claims are harder to support with direct evidence.

The next best thing is a scalable transparent argument that your interlocutor would accept each derivation step, and finally the claim itself. For humans, this still requires communicating all derivation steps (because your peer cannot be sure if you ran a simulation of them providing any argument). But things change for AI models: you can have a scalable transparent argument of knowledge that, after reading your book, the model would output "I believe that <your fact>, having updated to credence <P>", which you could then use for future interactions. It requires feeding the book to the model once, sure, but the key feature in this case is that you can prove its output, including to a human; then a large class of hypotheses disappears, leaving essentially "LLM could not evaluate the arguments right" vs "all arguments are correct". You can, optionally, even have private sections in the book, and provide zero knowledge about its contents to verifiers besides the LLM!

As for soundness, it is thought to be computationally intractable to construct a false ZK-STARK (not that it is fast to construct for actual computation, but still).

I often mistakenly behave as if my payoff structure is binary instead of gradual. I think others do too, and this cuts across various areas.

For instance, I might wrap up my day and notice that it's already 11:30pm, though I'd planned to go to sleep an hour earlier, by 10:30pm. My choice is, do I do a couple of me-things like watch that interesting YouTube video I'd marked as "watch later", or do I just go to sleep ASAP? I often do the former and then predictably regret it the next day when I'm too tired to function well. I've reflected on what's going on i... (read more)

4Seth Herd
I think you're probably quite correct about that example, and similar things. I notice other people doing this a lot, and I catch myself at it sometimes. So I think noticing and eliminating this particular flaw in logic is helpful. I also think the underlying problem goes deeper. Because we want to stay up and watch that video, our brain will come up with excuses to do it, and we'll be biased to just quickly accept those excuses when we otherwise would recognize them as logically flawed, because we want to. This is motivated reasoning. I think it's the single most impactful and pervasive bias. I spent some years studying this, but I haven't yet gotten around to writing about it on LW because it's not directly alignment-relevant. I really need to do at least a short post, because it is relevant for basically navigating and understanding all psychology. Including the field of alignment research.
1Decaeneus
This raises the question of what it means to want to do something, and who exactly (or which cognitive system) is doing the wanting. Of course I do want to keep watching YT, but I also recognize there's a cost to it. So on some level, weighing the pros and cons, I (or at least an earlier version of me) sincerely do want to go to bed by 10:30pm. But, in the moment, the tradeoffs look different from how they appeared from further away, and I make (or, default into) a different decision. An interesting hypothetical here is whether I'd stay up longer when play time starts at 11:30pm than when play time starts at, say, 10:15pm (if bedtime is 10:30pm). The wanting to play, and the temptation to ignore the cost, might be similar in both scenarios. But this sunk cost / binary outcome fallacy would suggest that I'll (marginally) blow further past my deadline in the former situation than in the latter.
2cubefox
I recognize a very similar failure mode of instrumental rationality: I sometimes include in the decision process for an action not just the utility of that action itself, but also its probability. That is, I act on the expected utility of the action, not on its utility. Example:

  • I should hurry up enough to catch my train (hurrying up enough has high utility)
  • Based on experience, I probably won't hurry up enough (hurrying up enough has low probability)
  • So the expected utility (utility*probability) of hurrying up enough is not very high
  • So I don't hurry up enough
  • So I miss my train.

The mistake is to pay any attention to the expected utility (utility*probability) of an action, rather than just to its utility. The probability of what I will do is irrelevant to what I should do. The probability of an action should be the output, never the input, of my decision. If one action has the highest utility, it should go to 100% probability (that is, I should do it) and all the alternative actions should go to 0 probability. The scary thing is that recognizing this mistake doesn't help with avoiding it.
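A minimal sketch of the comparison being described, with made-up numbers (the utilities and the self-predicted probability are illustrative assumptions):

```python
utility_hurry = 10    # catching the train
utility_relax = 2     # taking it easy and (probably) missing it
p_hurry = 0.15        # self-predicted probability of actually hurrying, from past experience

# mistaken rule: weight the action's utility by the probability of performing it
mistaken_score = utility_hurry * p_hurry
print("mistaken choice:", "hurry" if mistaken_score > utility_relax else "relax")  # relax -> miss the train

# correct rule: compare utilities directly; the action's probability is an output, not an input
print("correct choice:", "hurry" if utility_hurry > utility_relax else "relax")    # hurry
```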

Epistemics vs Video Generation

Veo 3, released yesterday, serves as another example of what's surely coming: the ability to generate video that's indistinguishable from reality. We will be coming off a many-decades period in which we could simply trust video as a source of truth: what a luxury that will have been, in hindsight!

Assuming it's important to have sources of truth, I see the following options going forward:

  1. we will have to just accept that our world has permanently become epistemically worse. sad!
  2. we will have to trust real life more than we d
... (read more)

For me, a crux about the impact of AI on education broadly is how our appetite for entertainment behaves at the margins close to entertainment saturation.

Possibility 1: it will always be very tempting to direct our attention to the most entertaining alternative, even at very high levels of entertainment

Possibility 2: there is some absolute threshold of entertainment above which we become indifferent between unequally entertaining alternatives

If Possibility 1 holds, I have a hard time seeing how any kind of informational or educational content, which is con... (read more)

4Vladimir_Nesov
This kind of "we" seems to deny self-reflection and agency.
1Decaeneus
I suppose there are varying degrees of the strength of the statement.

  • Strong form: sufficiently compelling entertainment is irresistible for almost anyone (and of course it may disguise itself as different things to seduce different people, etc.)
  • Medium form: it's not theoretically irresistible, and if you're really willful about it you can resist it, but people at large will (perhaps by choice, ultimately) not resist it, much as they (we?) have not resisted dedicating an increasing fraction of their time to digital entertainment so far.
  • Weak form: it'll be totally easy to resist, and a significant fraction of people will.

I guess I implicitly subscribe to the medium form.
2Vladimir_Nesov
If the majority has some pattern of behavior, that isn't necessarily even a risk factor for a given person getting sucked into that pattern of behavior. So I'm objecting to the framing (emanating through use of the word "we") suggesting that a property of behavior of some group has significant ability to affect individuals who are aware of that property, bypassing their judgement about endorsement of that property.
1Decaeneus
Indeed! I meant "we" as a reference to the collective group of which we are all members, without requiring that every individual in the group (i.e. you or I) share in every aspect of the general behavior of the group. To be sure, I would characterize this as a risk factor even if you (or I) will not personally fall prey to this ourselves, in the same way that it's a risk factor if the IQ of the median human drops by 10 points, which this effectively might be equivalent to (net of distractions).

Something that gets in the way of my making better decisions is that I have strong empathy that "caps out" the negative disutility that a decision might cause to someone, which makes it hard to compare across decisions with big implications.

In the example of the trolley problem, both branches feel maximally negative (imagine my utility from each of them is negative infinity) so I have trouble comparing them, and I am very likely to simply want to not be involved. This makes it hard for me to perform the basic utility calculation in my head, perhaps not in the literal trolley problem where the quantities are obvious, but certainly in any situation that's more ambiguous.

Inspired by this Tweet by Rohit Krishnan https://x.com/krishnanrohit/status/1923097822385086555

One thing LLMs can teach us is that memorisation of facts is, in fact, a necessary part of reasoning and intelligent behaviour

There's a very simple model under which memorization is important:

  • if good reasoning resembles test-time compute, judgement involves iterating through lots of potential hypotheses and validating them
  • the speed with which you validate each hypothesis is partly a function of the time it takes to fetch the data necessary to assess it
  • when that d
... (read more)

(sci-fi take?) If time travel and time loops are possible, would this not be the (general sketch of the) scenario under which it comes into existence:

1. a lab figures out some candidate particles that could be sent back in time, builds a detector for them, and starts scanning for them. suppose the particle has some binary state. if the particle is +1 (-1) the lab buys (shorts) stock futures and exits after 5 minutes

2. the trading strategy will turn out to be very accurate and the profits from the trading strategy will be utilized to fund the research required... (read more)

6Richard_Kennaway
1. "DO NOT MESS WITH TIME"
3Dagon
4. The lab sees millions of conflicting particles. Their plan was foiled by future spam.
5. The lab is killed by assassins, who've been waiting for the signal since the 12th century and get activated by the Time Control Authority.

Infertility rates are rising and nobody seems to quite know why.  Below is what feels like a possible (trivial) explanation that I haven't seen mentioned anywhere.

 

I'm not in this field personally so it's possible this theory is out there, but asking GPT about it doesn't yield the proposed explanation: https://chat.openai.com/share/ab4138f6-978c-445a-9228-674ffa5584ea

 

Toy model:

  • a family is either fertile or infertile, and fertility is hereditary
  • the modal fertile family can have up to 10 kids, the modal infertile family can only have 2 kids
  • in
... (read more)
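The rest of the toy model is truncated above, so the following is only a guessed continuation under assumed numbers (a minimal sketch, not the author's actual argument): if desired family size used to sit near the fertile cap, fertile families out-reproduced infertile ones each generation, while at a modern desired size of about 2 the caps no longer bind differently and that selection pressure disappears.

```python
def next_share_infertile(share_infertile, desired_kids):
    """One generation: every family aims for `desired_kids`, capped by its fertility type."""
    kids_from_fertile = (1 - share_infertile) * min(desired_kids, 10)   # fertile cap: 10
    kids_from_infertile = share_infertile * min(desired_kids, 2)        # infertile cap: 2
    return kids_from_infertile / (kids_from_fertile + kids_from_infertile)

for desired, label in [(6, "historical desired family size ~6"), (2, "modern desired family size ~2")]:
    share = 0.20  # assumed starting share of infertile families
    trajectory = []
    for _ in range(5):
        share = next_share_infertile(share, desired)
        trajectory.append(round(share, 3))
    print(label, trajectory)
# with large desired families the infertile share shrinks every generation;
# at ~2 kids per family the selection vanishes, so any inflow of infertility accumulates
```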
4tailcalled
How robust is the information that infertility rates are rising?
1Decaeneus
To be sure, I'm not an expert on the topic. Declines in male fertility I think are regarded as real, though I haven't examined the primary sources. Regarding female fertility, this report from Norway outlines the trend that I vaguely thought was representative of most of the developed world over the last 100 years.  Female fertility is trickier to measure, since female fertility and age are strongly correlated, and women have been having kids later, so it's important (and likely tricky) to disentangle this confounder from the data.

This is both a declaration of a wish, and a question, should anyone want to share their own experience with this idea and perhaps tactics for getting through it.

I often find myself with a disconnect between what I know intellectually to be the correct course of action, and what I feel intuitively is the correct course of action. Typically this might arise because I'm just not in the habit of / didn't grow up doing X, but now when I sit down and think about it, it seems overwhelmingly likely to be the right thing to do. Yet, it's often my "gut" and not my m... (read more)

126leogao

I think the solution is to become more emotionally integrated. take the time to understand your emotional mind and the reasons behind why it believes the things it does. some say therapy helps with this; your mileage may vary. I've found introspection + living life more fully helps a lot.

1Decaeneus
Thank you. I think, even upon identifying the reasons for why the emotional mind believes the things it does, I hit a twofold sticking point:

  • I consider the constraints themselves (rarely in isolation but more like the personality milieu that they are enmeshed with) to be part of my identity, and attempting to break them is scary in both a deep existential loss-of-self sense, and in a "this may well be load bearing in ways I can't fully think through" sense
  • Even orthogonal to the first bullet, it's somehow hard to change them even though with my analytical mind I can see what's going on. It's almost like the emotional Bayesian updating brought these beliefs / tendencies to a very sharp peak long ago, but now circumstances have changed and the peak is too sharp to believe it away with new experience.

If it sounds like I'm trying to find reasons to not make the change, perhaps that's another symptom of the problem. There's a saboteur in the machine!
4RedMan
One version of decision theory I liked states essentially that the human brain has two systems.  One does rational calculations, the other slaps on a bias for uncertainty avoidance before pushing it for action.  Maybe evaluate your perception of the uncertainty associated with the course of action that makes rational sense.  How uncertain is it really?
1Decaeneus
This is a plausible rational reason to be skeptical of one's own rational calculations: that there is uncertainty, and that one should rationally have a conservativeness bias to account for it. What I think is happening, though, is that there's an emotional blocker that is then being cleverly back-solved by finding plausible rational (rather than emotional and irrational) reasons for it, of which this is one. So it's not that this is a totally bogus reason, it's that it actually provides a plausible excuse for what is actually motivated by something different.
3ProgramCrafter
I think you cannot do this any more than you can force yourself to believe something. Indeed, both systems are learning from what you see to be true and what succeeds; if you believe the intuitive system is not judging correctly, you should try experiencing things more deeply (reflect on success more, come back to see if the thing flourishes/helps others/whatever); if you believe the reasoning system is not judging correctly, you should try it on more everyday actions and check if all emotionally relevant factors got included. The systems will approximately agree because they both try to discern truth, not because they are bound to be equal to each other. P.S. turns out I essentially rephrased @leogao; still posting this in hopes an explanation is useful

There’s a particular type of cognitive failure that I reliably experience, which seems like a pure kind of misconfiguration of the mind, and which I've found very difficult to will myself to not experience, which feels like some kind of fundamental limitation.

The quickest way to illustrate this is with an example: I'm playing a puzzle game that requires ordering 8 letters into a word, and I'm totally stuck. As soon as I look at a hint of what the first letter is, I can instantly find the word.

This seems wrong. In theory, I expect I can just iterate through... (read more)

What if a major contributor to the weakness of LLMs' planning abilities is that the kind of step-by-step description of what a planning task looks like is content that isn't widely available in common text training datasets? It's mostly something we do silently, or we record in non-public places.

Maybe whoever gets the license to train on Jira data is going to get to crack this first.

Proposal: if you're a social media or other content based platform, add a long-press to the "share" button which allows you to choose between "hate share" and "love share".

Therefore:
* quick tap: keep the current functionality, you get to send the link wherever / copy to clipboard
* long press and swipe to either hate or love share: you still get to send the link (optionally, the URL has some argument indicating it's a hate / love share, if the link is a redirect through the social media platform)

This would allow users to separate out between things that are... (read more)

4JBlack
I believe that there is already far too much "hate sharing". Perhaps the default in a social media UI should be that shared content includes a public endorsement of whatever content it links to, and if you want to "hate share" anything without such an endorsement, you have to fight a hostile UI to do so. In particular, "things that are worth sharing" absolutely should not overlap with "want to see less of". If you want to see less of some type of thing, it's self-defeating to distribute more copies of it. Worse, if you even suspect that any of your own readers are anything like you, why are you inflicting it on them?
-1Shankar Sivarajan
One way to "see less of" something you hate is to stop it from being produced, and that may be seen as a better solution than basically averting your eyes from it. Rallying mobs to get whoever produced it fired has proven to be quite effective. 

The more complex the encoding of a system (e.g. of ethics) is, the more likely it is that it's reverse-engineered in some way. Complexity is a marker of someone working backwards to encapsulate messy object-level judgment into principles. Conversely, a system that flows outward from principles to objects will be neatly packed in its meta-level form.

In linear algebra terms, as long as the space of principles has fewer dimensions than the space of objects, we expect principled systems / rules to have a low-rank representation, with a dimensionality approachi... (read more)
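A minimal numerical sketch of the low-rank claim (the construction is a toy of my own, not anything from the post): judgments generated from a handful of principles form a low-rank matrix, while case-by-case calls with no shared structure are full rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_situations, n_principles = 40, 200, 3

# principled system: each object scores on a few principles, each situation weights
# those principles, and the judgment is the weighted sum -> rank <= n_principles
object_scores = rng.normal(size=(n_objects, n_principles))
situation_weights = rng.normal(size=(n_principles, n_situations))
principled_judgments = object_scores @ situation_weights

# "reverse-engineered" system: independent case-by-case judgments with no shared structure
ad_hoc_judgments = rng.normal(size=(n_objects, n_situations))

print(np.linalg.matrix_rank(principled_judgments))  # 3
print(np.linalg.matrix_rank(ad_hoc_judgments))      # 40 (full rank)
```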

3Dagon
I don't know of any encodings or legible descriptions of ethics that AREN'T reverse-engineered. Unless you're a moral realist, I suspect this has to be the case, because such systems are in the map, not the territory. And not even in the most detailed maps, they're massively abstracted over other abstractions. I'm far more suspicious of simple descriptions, especially when the object space has many more dimensions. The likelihood that they've missed important things about the actual behavior/observations is extremely high.
1Decaeneus
Agreed that ultimately everything is reverse-engineered, because we don't live in a vacuum. However, I feel like there's a meaningful distinction between:

1. let me reverse engineer the principles that best describe our moral intuition, and let me allow parsimonious principles to make me think twice about the moral contradictions that our actual behavior often implies, and perhaps even allow my behavior to change as a result
2. let me concoct a set of rules and exceptions that will justify the particular outcome I want, which is often the one that best suits me

For example, consider the contrast between "we should always strive to treat others fairly" and "we should treat others fairly when they are more powerful than us, however if they are weaker let us then do to them whatever is in our best interest whether or not it is unfair, while at the same time paying lip service to fairness in hopes that we cajole those more powerful than us into treating us fairly". I find the former a less corrupted piece of moral logic than the latter, even though the latter arguably describes actual behavior fairly well. The former compresses more neatly, which isn't a coincidence.

There's something of a [bias-variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) here. The smaller the moral model, the less expressive it can be (so the more nuance it misses), but the more helpful it will be on future, out-of-distribution questions.
2Dagon
My point was not that we don't live in a vacuum, but that there's no ground truth or "correct" model.  We're ONLY extrapolating from very limited experienced examples, not understanding anything fundamental. When you see the word "should", you know you're in preferences and modeling land, right?  

Causality is rare! The usual statement that "correlation does not imply causation" puts them, I think, on deceptively equal footing. It's really more like correlation is almost always not causation absent something strong like an RCT or a robust study set-up.

Over the past few years I'd gradually become increasingly skeptical of claims of causality just by updating on empirical observations, but it just struck me that there's a good first principles reason for this.

For each true cause of some outcome we care to influence, there are many other "measurables" ... (read more)
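A minimal simulation sketch of this intuition (the sparse causal graph and the correlation threshold are arbitrary assumptions): in a sparse causal network, correlations propagate along chains and through shared ancestors, so correlated pairs far outnumber directly causal pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 50, 5000

# sparse causal structure: each variable has at most one earlier parent
X = np.zeros((n_samples, n_vars))
parent = {}
for j in range(n_vars):
    X[:, j] = rng.normal(size=n_samples)
    if j > 0 and rng.random() < 0.6:
        parent[j] = rng.integers(0, j)
        X[:, j] += X[:, parent[j]]

corr = np.corrcoef(X, rowvar=False)
pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
correlated = [p for p in pairs if abs(corr[p]) > 0.2]
causal_edges = {(min(j, p), max(j, p)) for j, p in parent.items()}
direct = [p for p in correlated if p in causal_edges]

# correlated pairs substantially outnumber the directly causal ones
print(len(correlated), len(direct))
```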

4Garrett Baker
This seems pretty different from Gwern's paper selection trying to answer this topic in How Often Does Correlation=Causality?, where he concludes [...]. Also see his Why Correlation Usually ≠ Causation.
7gwern
Those are not randomly selected pairs, however. There are 3 major causal patterns: A->B, A<-B, and A<-C->B. Decaeneus is pointing out that for a random pair of correlations of some variables, we do not assign a uniform prior of 33% to each of these. While it may sound crazy to try to argue for some specific prior like 'we should assign 1% to the direct causal patterns of A->B and A<-B, and 99% to the confounding pattern of A<-C->B', this is a lot closer to the truth than thinking that 'a third of the time, A causes B; a third of the time, B causes A; and the other third of the time, it's just some confounder'.

What would be relevant there is "Everything is Correlated". If you look at, say, Meehl's examples of correlations from very large datasets, and ask about causality, I think it becomes clearer. Let's take one of his first examples:

Like, if you randomly assigned Baptist children to be converted to Presbyterianism, it seems unlikely that their school-liking will suddenly jump because they go somewhere else on Sunday, or that siblings will appear & vanish; it also seems unlikely that if they start liking school (maybe because of a nicer principal), that many of those children would spontaneously convert to Presbyterianism. Similarly, it seems rather unlikely that undergoing sexual-reassignment surgery will make Episcopalian men and Baptist women swap places, and it seems even more unlikely that their religious status caused their gender at conception. In all of these 5 cases, we are pretty sure that we can rule out one of the direct patterns, and that it was probably the third, and we could go through the rest of Meehl's examples. (Indeed, this turns out to be a bad example because we can apply our knowledge that sex must have come many years before any other variable like "has cold hands" or "likes poetry" to rule out one pattern, but even so, we still don't find any 50%s: it's usually pretty obviously direct causation from the temporally earlier variable, or
1Decaeneus
Thanks for these references! I'm a big fan, but for some reason your writing sits in the silly under-exploited part of my 2-by-2 box of "how much I enjoy reading this" and "how much of this do I actually read", so I'd missed all of your posts on this topic! I caught up with some of it, and it's far further along than my thinking. On a basic level, it matches my intuitive model of a sparse-ish network of causality which generates a much much denser network of correlation on top of it. I too would have guessed that the error rate on "good" studies would be lower!

Reflecting on the particular ways that perfectionism differs from the optimal policy (as someone who suffers from perfectionism) and looking to come up with simple definitions, I thought of this:

  • perfectionism looks to minimize the distance between an action and the ex-post optimal action, while heavily dampening this penalty for the particular action "do nothing"
  • optimal policy says to pick the best ex-ante action out of the set of all possible actions, which set includes "do nothing"

So, perfectionism will be maximally costly in an environment where you have l... (read more)

Absence of evidence is the dark matter of inference. It's invisible yet it's paramount to good judgement.

It's easy to judge X to be true if you see some evidence that could only come about if X were true. It's a lot more subtle to judge X to be false when you do see some evidence that it's true, but can also determine that much of the evidence you would expect to see if it were true is missing.

In a formalized setting like an RCT this is not an issue, but when reasoning in the wild, this is the norm. I'm guessing this leads to a bias ... (read more)

What's the cost of keeping stuff around vs discarding it and buying it back again?

When you have some infrequently-used items, you have to decide between keeping them around (default, typically) or discarding them and buying them again later when you need them.

If you keep them around, you clearly lose use of some of your space. Suppose you keep these in your house / apartment. The cost of keeping them around is then proportional to the amount of either surface area or volume they take up. Volume is the appropriate measure to use especially if you have... (read more)
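A minimal back-of-envelope sketch of this comparison (every number is an assumption for illustration, and it ignores resale value, hassle, and discounting):

```python
annual_housing_cost = 24_000      # rent or imputed rent for the whole home, $/year (assumed)
home_volume_m3 = 250              # usable storage-relevant volume of the home (assumed)
item_volume_m3 = 0.10             # e.g. a bulky appliance kept in its box (assumed)
repurchase_price = 120            # cost to buy it again later if discarded (assumed)
p_need_per_year = 0.30            # chance you actually need it in a given year (assumed)

annual_storage_cost = annual_housing_cost * item_volume_m3 / home_volume_m3
annual_expected_rebuy_cost = repurchase_price * p_need_per_year

print(round(annual_storage_cost, 2), round(annual_expected_rebuy_cost, 2))
# keep the item only while the storage cost stays below the expected cost of rebuying
```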

It feels like (at least in the West) the majority of our ideation about the future is negative, e.g.

  • popular video games like Fallout
  • zombie apocalypse themed tv
  • shows like Black Mirror (there's no equivalent White Mirror)

Are we at a historically negative point in the balance of "good vs bad ideation about the future" or is this type of collective pessimistic ideation normal?

If the balance towards pessimism is typical, is the promise of salvation in the afterlife in e.g. Christianity a rare example of a powerful and salient positive ideation about our futures (conditioned on some behavior)?

2StartAtTheEnd
I agree. I feel like this is a very recent change as well. We used to be hopeful about the future, creating sci-fi about utopias rather than writing nightmare scenarios. The west is becoming less self-affirming over time, and our mental health is generally getting worse. I think it's because of historic guilt, as well as a kind of self-loathing pretending that it's virtue (anti-borders, anti-nationalism, anti-natalism), not to mention the slander of psychological drives which strive for growth and quality (competition, hierarchies, ambition, elitism, discrimination/selection/gatekeeping).

I do not believe that the salvation in the afterlife is the opposite of this, but rather the same. It ultimately talks negatively about life and actual reality, comparing it to some unreachable ideal. It's both pessimistic, as well as a psychological cope which makes it possible to endure this pessimism. The message is something akin to "Endure, and you will be rewarded in the end".

It's a weariness we will have to overcome. I feel like our excessive tendency to problem-solving has caused us to view life as a big collection of problems, rather than something which is merely good but imperfect.

Is meditation provably more effective than "forcing yourself to do nothing"?

Much like sleep is super important for good cognitive (and, of course, physical) functioning, it's plausible that waking periods of not being stimulated (i.e. of boredom) are very useful for unlocking increased cognitive performance. Personally I've found that if I go a long time without allowing myself to be bored, e.g. by listening to podcasts or audiobooks whenever I'm in transition between activities, I'm less energetic, creative, sharp, etc.

The problem is that as a prescriptio... (read more)

2Nate Showell
There are some styles of meditation that are explicitly described as "just sitting" or "doing nothing."
1Decaeneus
Kind of related Quanta article from a few days ago: https://www.quantamagazine.org/what-your-brain-is-doing-when-youre-not-doing-anything-20240205/
1Perhaps
I think what those other things do is help you reach that state more easily and reliably. It's like a ritual that you do before the actual task, to get yourself into the right frame of mind and form a better connection, similar to athletes having pre game rituals. Also yeah, I think it makes the boredom easier to manage and helps you slowly get into it, rather than being pushed into it without reference.  Probably a lot of other hidden benefits though, because most meditation practices have been optimized for hundreds of years, and are better than others for a reason.
1Decaeneus
The parallel to athlete pre-game rituals is an interesting one, but I guess I'd be interested in seeing the comparison between the following two groups:

  • group A: is told to meditate the usual way for 30 minutes / day, and does
  • group B: is told to just sit there for 30 minutes / day, and does

So both of the groups considered are sitting quietly for 30 minutes, but one group is meditating while the other is just sitting there. In this comparison, we'd be explicitly ignoring the benefit from meditation which acts via the channel of just making it more likely you actually sit there quietly for 30 minutes.

Maybe there's a deep connection between:

(a) human propensity to emotionally adjust to the goodness / badness of our recent circumstances such that we arrive at emotional homeostasis, and it's mostly the relative level / the change in circumstances that we "feel"

(b) batch normalization, the common operation for training neural networks

 

Our trailing experiences form a kind of batch of "training data" on which we update, and perhaps we batchnorm their goodness since that's the superior way to update on data without all the pathologies of not normalizing.
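A minimal sketch of the batch-normalization side of the analogy (plain normalization without the learned scale and shift): shifting the whole batch of recent "experiences" up or down leaves the normalized output unchanged, so only relative differences survive.

```python
import numpy as np

def batchnorm(batch, eps=1e-5):
    """Normalize a batch to zero mean and unit variance (no learned scale/shift)."""
    return (batch - batch.mean()) / np.sqrt(batch.var() + eps)

good_week = np.array([7.0, 8.0, 6.5, 7.5, 8.5])   # "goodness" of recent experiences (made-up)
rough_week = good_week - 5.0                       # the same week, uniformly worse

print(batchnorm(good_week))
print(batchnorm(rough_week))   # identical: the absolute level has been normalized away
```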

I wonder if the attractor state of powerful beings is a bipole consisting of: 

a. wireheading / reward hacking, facing one's inner world 
b. defense, facing one's outer world

As we've gotten more and more control over our environment, much of what we humans seem to want to do resembles reward hacking: video games, sex-not-for-procreation, solving captivating math problems, etc. In an ideal world, we might just look to do that all day long, in particular if we could figure out how to zap our brains into making every time feel like the first time.

Howe... (read more)

Feels like there's a missing deep learning paradigm that is the equivalent of the human "go for a walk and think about stuff, absent new stimuli". There are some existing approaches that hint at this (generative replay / dreaming) but those feel a bit different than my subjective sense that I'm "working things out" when I go for a walk, rather than generatively dreaming, as I do at night.

Relatedly: it reduces my overall cognitive output when I go through periods of depriving myself of these idle periods by jamming them full of stimuli (e.g. podcasts). I do... (read more)

Simple math suggests that anybody who is selfish should be very supportive of acceleration towards ASI even for high values of p(doom).

Suppose somebody over the age of 50 thinks that p(doom) is on the order of 50%, and that they are totally selfish. It seems rational for them to support acceleration, since absent acceleration they are likely to die some time over the next 40ish years (since it's improbable we'll have life extension tech in time) but if we successfully accelerate to ASI, there's a 1-p(doom) shot at an abundant and happy eternity.

Possibly some form of this extends beyond total selfishness.
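A minimal sketch of that "simple math", with all probabilities made up for illustration:

```python
p_reach_asi_unaccelerated = 0.15   # assumed chance a 50-year-old is still alive when slow-timeline ASI arrives
p_reach_asi_accelerated = 0.90     # assumed chance ASI arrives within their natural lifespan if accelerated
p_doom = 0.50

p_abundant_future_baseline = p_reach_asi_unaccelerated * (1 - p_doom)
p_abundant_future_accel = p_reach_asi_accelerated * (1 - p_doom)

print(round(p_abundant_future_baseline, 3), round(p_abundant_future_accel, 3))
# under these assumed numbers, the selfish chance at the "abundant eternity" rises from ~0.075 to 0.45
```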

3Vladimir_Nesov
Not for those who think AGI/TAI plausible within 2-5 years, and ASI 1-2 years after. Accelerating even further than whatever feasible caution can hopefully slow it down a bit and shape it more carefully would mostly increase doom, not personal survival. Also, there's cryonics.
1Decaeneus
OK, agreed that this depends on your views of whether cryonics will work in your lifetime, and of "baseline" AGI/ASI timelines absent your finger on the scale. As you noted, it also depends on the delta between p(doom while accelerating) and baseline p(doom). I'm guessing there's a decent number of people who think current (and near future) cryonics don't work, and that ASI is further away than 3-7 years (to use your range). Certainly the world mostly isn't behaving as if it believed ASI was 3-7 years away, which might be a total failure of people acting on their beliefs, or it may just reflect that their beliefs are for further out numbers.
2Vladimir_Nesov
My model is that the current scaling experiment isn't done yet but will be mostly done in a few years, and LLMs can plausibly surpass the data they are training on. Also, LLMs are digital and 100x faster than humans. Then once there are long-horizon task capable AIs that can do many jobs (the AGI/TAI milestone), even if the LLM scaling experiment failed and it took 10-15 years instead, we get another round of scaling and significant in-software improvement of AI within months that fixes all remaining crippling limitations, making them cognitively capable of all jobs (rather than only some jobs). At that point growth of industry goes off the charts, closer to biological anchors of say doubling in fruit fly biomass every 1.5 days than anything reasonable in any other context. This quickly gives the scale sufficient for ASI even if for some unfathomable reason it's not possible to create with less scale.

Unclear what cryonics not yet working could mean, even highly destructive freezing is not a cryptographically secure method for erasing data, redundant clues about everything relevant will endure. A likely reason to expect cryonics not to work is not believing that ASI is possible, with actual capabilities of a superintelligence. This is similar to how economists project "reasonable" levels of post-TAI growth by not really accepting the premise of AIs actually capable of all jobs, including all new jobs their introduction into the economy creates.

More practical issues are unreliability of arrangements that make cryopreservation happen for a given person and of subsequent storage all the way until ASI, through all the pre-ASI upheaval.
1Decaeneus
Since you marked as a crux the fragment "absent acceleration they are likely to die some time over the next 40ish years" I wanted to share two possibly relevant Metaculus questions. Both of these seem to suggest numbers longer than your estimates (and these are presumably inclusive of the potential impacts of AGI/TAI and ASI, so these don't have the "absent acceleration" caveat).
2Vladimir_Nesov
I'm more certain about ASI being 1-2 years after TAI than about TAI in 2-5 years from now, as the latter could fail if the current training setups can't make LLMs long-horizon capable at a scale that's economically feasible absent TAI. But probably 20 years is sufficient to get TAI in any case, absent civilization-scale disruptions like an extremely deadly pandemic. A model can update on discussion of its gears. Given predictions that don't cite particular reasons, I can only weaken it as a whole, not improve it in detail (when I believe the predictions know better, without me knowing what specifically they know). So all I can do is mirror this concern by citing particular reasons that shape my own model.
1p4rziv4l
As a 50 year old, you don't need to support acceleration, you'll be well alive when ASI gets here. Simple math suggests you could just enjoy your 50's and roll the dice when you have less to lose.

Immorality has negative externalities which are diffuse, and hard to count, but quite possibly worse than its direct effects.

Take the example of Alice lying to Bob about something, to her benefit and his detriment. I will call the effects of the lie on Alice and Bob direct, and the effects on everybody else externalities. Concretely, the negative externalities here are that Bob is, on the margin, going to trust others in the future less for having been lied to by Alice than he would if Alice has been truthful. So in all of Bob's future interactions, his tr... (read more)

2Dagon
Fully agree, but I'd avoid the term "immorality".  Deviation from social norms has this cost, whether those norms are reasonable or not.
1Decaeneus
You're right, this is not a morality-specific phenomenon. I think there's a general formulation of this that just has to do with signaling, though I haven't fully worked out the idea yet. For example, if in a given interaction it's important for your interlocutor to believe that you're a human and not a bot, and you have something to lose if they are skeptical of your humanity, then there are lots of negative externalities that come from the Internet being filled with indistinguishable-from-human chatbots, irrespective of its morality.
2Dagon
I think "trust" is what you're looking for, and signaling is one part of developing and nurturing that trust.  It's about the (mostly correct, or it doesn't work) belief that you can expect certain behaviors and reactions, and strongly NOT expect others.  If a large percentage of online interactions are with evil intent, it doesn't matter too much whether they're chatbots or human-trafficked exploitation farms - you can't trust entities that you don't know pretty well, and who don't share your cultural and social norms and non-official judgement mechanisms.
1FlorianH
With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people hide it well but still lie plenty, so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).

Does belief quantization explain (some amount of) polarization?

Suppose people generally do Bayesian updating on beliefs. It seems plausible that most people (unless trained to do otherwise) subconsciously quantize their beliefs -- let's say, for the sake of argument, by rounding to the nearest 1%. In other words, if someone's posterior on a statement is 75.2%, it will be rounded to 75%.

Consider questions that exhibit group-level polarization (e.g. on climate change, or the morality of abortion, or whatnot) and imagine that there is a series of "facts" that... (read more)
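A minimal simulation sketch of this hypothesis (the likelihood ratio, the 1% rounding quantum, and the starting beliefs are all assumptions): when each update is smaller than the rounding quantum, the quantized believer never moves, while an exact Bayesian starting from the same place converges.

```python
import numpy as np

def bayes_update(prior, likelihood_ratio, quantum=0.0):
    """One Bayesian update in odds form, optionally rounding the posterior to the nearest quantum."""
    odds = prior / (1 - prior) * likelihood_ratio
    posterior = odds / (1 + odds)
    return np.round(posterior / quantum) * quantum if quantum else posterior

weak_evidence = 1.02  # a stream of individually weak pro-claim observations (assumed strength)
for start in (0.30, 0.70):  # two groups with different starting beliefs
    exact = quantized = start
    for _ in range(200):
        exact = bayes_update(exact, weak_evidence)
        quantized = bayes_update(quantized, weak_evidence, quantum=0.01)
    print(start, round(exact, 3), round(float(quantized), 3))
# exact updaters from both groups end up in near agreement;
# quantized updaters stay frozen at their starting beliefs of 0.30 and 0.70
```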

1lemonhope
See also: different practitioners in the same field with very different methodologies, each sure theirs is the Best Way To Do Things.

Regularization implements Occam's Razor for machine learning systems.

When we have multiple hypotheses consistent with the same data (an underdetermined problem) Occam's Razor says that the "simplest" one is more likely true.

When an overparameterized LLM traverses the subspace of parameters that fit the training set while seeking, say, the smallest L2 norm, it's also effectively choosing the "simplest" solution from the solution set, where "simple" is defined as lower parameter norm, i.e. more "concisely" expressed.
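A minimal sketch of the idea in a toy linear setting (synthetic data, choices mine; an illustration of minimum-norm selection, not of LLM training): many parameter vectors fit the training data exactly, and the minimum-L2-norm one generalizes far better than an equally-fitting large-norm one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_params = 10, 50                      # far more parameters than data points
X = rng.normal(size=(n_train, n_params))
w_true = np.zeros(n_params)
w_true[:3] = [2.0, -1.0, 0.5]                   # a "simple" ground truth
y = X @ w_true

w_min = np.linalg.pinv(X) @ y                   # minimum-norm interpolator

# another exact interpolator: add a large component from the null space of X
e = np.zeros(n_params); e[-1] = 1.0
null_component = e - np.linalg.pinv(X) @ (X @ e)
w_big = w_min + 30 * null_component

X_test = rng.normal(size=(2000, n_params))
for name, w in [("min-norm", w_min), ("large-norm", w_big)]:
    train_fit = np.max(np.abs(X @ w - y))                      # ~0 for both: both fit the data
    test_mse = np.mean((X_test @ w - X_test @ w_true) ** 2)
    print(name, round(np.linalg.norm(w), 2), round(train_fit, 10), round(test_mse, 2))
```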

2Razied
Unfortunately the entire complexity has just been pushed one level down into the definition of "simple". The L2 norm can't really be what we mean by simple, because simply scaling the weights in a layer by A, and the weights in the next layer by 1/A leaves the output of the network invariant, assuming ReLU activations, yet you can obtain arbitrarily high L2 norms by just choosing A high enough. 
1Decaeneus
Agreed with your example, and I think that just means that the L2 norm is not a pure implementation of what we mean by "simple", in that it also induces some other preferences. In other words, it does other work too. Nevertheless, it would point us in the right direction frequently, e.g. it will dislike networks whose parameters perform large offsetting operations, akin to mental frameworks or beliefs that require unnecessary and reducible artifice or intermediate steps. Worth keeping in mind that "simple" is not clearly defined in the general case (forget about machine learning). I'm sure lots has been written about this idea, including here.
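Razied's rescaling point is easy to check numerically; a minimal sketch (random weights, one hidden layer, my own toy setup) showing that scaling one layer by A and the next by 1/A leaves the function unchanged while the L2 norm explodes:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

def forward(W1, b1, W2, b2, x):
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)
x = rng.normal(size=8)

A = 1000.0  # ReLU is positively homogeneous, so this rescaling preserves the function
out_original = forward(W1, b1, W2, b2, x)
out_rescaled = forward(A * W1, A * b1, W2 / A, b2, x)

l2 = lambda *ws: sum(float((w ** 2).sum()) for w in ws)
print(np.allclose(out_original, out_rescaled))              # True: identical function
print(round(l2(W1, W2), 1), round(l2(A * W1, W2 / A), 1))   # vastly different L2 norms
```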

I wonder how much of the tremendously rapid progress of computer science in the last decade owes itself to structurally more rapid truth-finding, enabled by:

  • the virtual nature of the majority of the experiments, making them easily replicable
  • the proliferation of services like github, making it very easy to replicate others' experiments
  • (a combination of the points above) the expectation that one would make one's experiments easily available for replication by others

There are other reasons to expect rapid progress in CS (compared to, say, electrical engineering) but I wonder how much is explained by this replication dynamic.

2Zac Hatfield-Dodds
Very little, because most CS experiments are not in fact replicable (and that's usually only one of several serious methodological problems). CS does seem somewhat ahead of other fields I've worked in, but I'd attribute that to the mostly-separate open source community rather than academia per se.
1Decaeneus
To be sure, let's say we're talking about something like "the entirety of published material" rather than the subset of it that comes from academia. This is meant to very much include the open source community. Very curious, in what way are most CS experiments not replicable? From what I've seen in deep learning, for instance, it's standard practice to include a working github repo along with the paper (I'm sure you know lots more about this than I do). This is not the case in economics, for instance, just to pick a field I'm familiar with.
2Zac Hatfield-Dodds
See e.g. https://mschloegel.me/paper/schloegel2024sokfuzzevals.pdf
Fuzzing is a generally pretty healthy subfield, but even there most peer-reviewed papers in top venues are still completely useless! Importantly, "a 'working' github repo" is really not enough to ensure that your results are reproducible, let alone ensure external validity.

From personal observation, kids learn text (say, from a children's book, and from songs) back-to-front. That is, the adult will say all but the last word in the sentence, and the kid will (eventually) learn to chime in to complete the sentence.

This feels correlated to LLMs learning well when tasked with next-token prediction, and those predictions being stronger (less uniform over the vocabulary) when the preceding sequences get longer.

I wonder if there's a connection to having rhyme "live" in the last sound of each line, as opposed to the first.

1StartAtTheEnd
A lot of memory seems to be linear, possibly because most information in the world is encoded linearly. If I were to tell you the 20th letter of the alphabet, I'd have to go through every letter in my head. It's a linked-list data structure. Even many memory techniques, like the mind palace, are ordered, with each item linking to the next. I don't think this is the same as markov-chains or predicting the next item, but that it has to do with the most common data structure of information being linear. As for making the first word rhyme instead of the last, that's an interesting thought! I actually have no idea. When I rhyme like that in my head, it sounds wrong, but I couldn't tell you the reason. You may be on to something.