All of RichardJActon's Comments + Replies

GeneSmith gave some more details about his background in this episode of the Bayesian Conspiracy podcast: https://www.thebayesianconspiracy.com/2025/02/231-superbabies-with-gene-smith/

3Rosoe
If you have listened to the episode it would be nice to relay what those details were. I'm not so interested in listening to a 2-hour podcast, mostly about what I just read above, to get a few sentences of detail, especially when Gene is here! I'm also keen to know about Kman's background too.

The lack of good population genetic information in animal models and deep phenotyping of complex behavioral traits is probably one of the biggest impediments to robust animal testing of this general approach.

3GeneSmith
Well we have it in cows. Just not in mice.

(For people reading this thread who want an intro to finemapping, this lecture is a great place to start for a high-level overview: https://www.youtube.com/watch?v=pglYf7wocSI)

Kind of. There are many changes that in isolation each get you a bit more oxygen, but many of them act through the same mechanism, so you change 1000 things that get you +1 oxygen on their own but in combination only get you +500.

To use a software analogy: imagine an object with two methods, where calling either one sets a property of the object to true. It doesn't matter if you call both methods, or if you have a bunch of functions that call those methods; you still just get true. Calling either method or any function that calls them is going to be sli... (read more)
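To make that analogy concrete, here is a minimal sketch (entirely made-up, just restating the point above in code): many upstream "edits" all act by switching on the same downstream mechanism, which saturates once triggered, so their effects don't add.

```python
# A toy saturating-mechanism model (hypothetical example): the trait only
# "sees" whether the shared pathway is active, not how many upstream edits
# switched it on.

class Cell:
    def __init__(self):
        self.pathway_active = False  # the shared downstream mechanism

    def edit_a(self):
        self.pathway_active = True

    def edit_b(self):
        self.pathway_active = True


def phenotype(cell):
    # +1 if the pathway is on, regardless of how many edits turned it on
    return 1 if cell.pathway_active else 0


one = Cell()
one.edit_a()

both = Cell()
both.edit_a()
both.edit_b()

print(phenotype(one), phenotype(both))  # 1 1 -- the second edit adds nothing
```

Individually, `edit_a` and `edit_b` each look like they contribute +1, but their joint effect is still +1, which is the epistasis problem for naively summing per-variant effect estimates.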

I'm curious about the basis on which you are assigning a probability of causality without a method like Mendelian randomisation, or something that tries to assign a probability of an effect by interpreting the biology, like mapping the output of something like SnpEff to an approximate probability of effect.

The logic of getting 30% of a SNP's effect based on a 30% chance that it's causal seems like it will be pretty high variance and will only work out over a pretty large number of edits. It also assumes no unexpected effects from editing SNPs that are non-causal for the trait you are targeting but might do something else when edited.

5kman
Using finemapping. I.e. assuming a model where nonzero additive effects are sparsely distributed among SNPs, you can do Bayesian math to infer how probable each SNP is to have a nonzero effect and its expected effect size conditional on observed GWAS results. Things like SnpEff can further help by giving you a better prior.
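For readers who want to see the rough shape of that Bayesian calculation, here is a toy sketch (not the commenters' actual model; the z-scores, standard errors, and prior variance `W` are made up for illustration). Under the classic single-causal-variant assumption, Wakefield-style approximate Bayes factors computed from GWAS summary statistics can be normalised into posterior inclusion probabilities (PIPs) for the SNPs in a region:

```python
import math

def approx_bayes_factor(z, se, W=0.04):
    """Wakefield approximate Bayes factor for a nonzero effect, given a
    GWAS z-score and standard error. W is the prior variance on the
    effect size (an assumed value here)."""
    V = se ** 2
    r = W / (V + W)
    return math.sqrt(1 - r) * math.exp(z ** 2 * r / 2)

def posterior_inclusion_probs(zs, ses):
    """Toy fine-mapping under the assumption that exactly one SNP in the
    region is causal, with a uniform prior over SNPs."""
    abfs = [approx_bayes_factor(z, se) for z, se in zip(zs, ses)]
    total = sum(abfs)
    return [abf / total for abf in abfs]

# Made-up z-scores and standard errors for three SNPs in one LD region:
pips = posterior_inclusion_probs([5.0, 4.2, 1.1], [0.02, 0.02, 0.02])
print([round(p, 3) for p in pips])  # the SNP with the largest z dominates
```

Real fine-mapping methods relax the single-causal-variant assumption and use LD information, and, as noted above, annotations like SnpEff output can sharpen the prior, but the normalise-Bayes-factors-into-PIPs step is the core idea.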

Could you expand on the sense in which you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?

The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so this is evidence for this sort of effect on such a trait.

The general problem is that without a robust causal understanding of what an edit does it is very hard to predict what sorts of problems might arise from novel combinations of variants in a haplotype. That's just the nature o... (read more)

4kman
If we have a SNP that we're 30% sure is causal, we expect to get 30% of its effect conditional on it being causal. Modulo any weird interaction stuff from rare haplotypes, which is a potential concern with this approach. I didn't read your first comment carefully enough; I'll take a look at this.

There are a couple of major problems with naively intervening to edit sites associated with some phenotype in a GWAS or polygenic risk score.

  1. The SNP itself is (usually) not causal. Genotyping arrays select SNPs whose genotype is correlated with a region around the SNP; these SNPs are said to be in linkage with that region, as the region tends to be inherited together with the SNP when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is of the genotype of a given region.
    If it is not t

... (read more)
2lemonhope
To dumb it down a bit, here's my made up example: you get +1 IQ if your brain has surplus oxygen in the blood flowing through it. There's 1000 ways to get a bit more oxygen in there, but with +1000 oxygen, you still only get +1 IQ. Is that the idea?
3kman
This is taken into account by our models, and is why we see such large gains in editing power from increasing data set sizes: we're better able to find the causal SNPs. Our editing strategy assumes that we're largely hitting non-causal SNPs.

I'm not aware of any evidence for substantial effects of this sort on quantitative traits such as height. We're also adding up expected effects, and as long as those estimates are unbiased the errors should cancel out as you do enough edits.

One thing we're worried about is cases where the haplotypes have the small additive effects rather than individual SNPs, and you get an unpredictable (potentially deleterious) effect if you edit to a rare haplotype even if all SNPs involved are common. Are you aware of any evidence suggesting this would be a problem?
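The "errors should cancel out" claim can be illustrated with a toy simulation (entirely made-up numbers, not the actual model): suppose each candidate edit targets a SNP that is truly causal with probability 0.3, and causal edits each add 1.0 to the trait. Each edit's expected contribution is 0.3, and over many edits the realized gain concentrates around the sum of those expectations:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def realized_gain(n_edits, p_causal=0.3, effect=1.0):
    """Total trait change when each edit independently hits a causal
    SNP with probability p_causal (hypothetical toy model)."""
    return sum(effect for _ in range(n_edits) if random.random() < p_causal)

n = 1000
expected = n * 0.3 * 1.0   # sum of per-edit expected effects: 300.0
observed = realized_gain(n)
print(expected, observed)  # observed clusters around expected
```

With 1000 edits the binomial standard deviation is about 14.5, so the realized gain is within roughly 5% of the expectation; with only a handful of edits the variance RichardJActon raises would dominate. This toy also assumes independence, so it says nothing about the rare-haplotype interaction concern above.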

I have a reading list and recommendations for sources of ongoing news in the longevity/immortality/life extension space in the show notes for the recent special episode of my podcast, where my co-host Michael and I discuss ageing and immortality. We are both biology PhDs; my background is in the epigenetics of ageing and Michael's is in bone stem cells.

https://www.xenothesis.com/53_xenogenesis_ageing_and_immortality_special/

I should add "Immune: a Journey into the Mysterious System that Keeps You Alive" to that list actually.

In particular from that list I recomme... (read more)

1Willa
Thank you for the resources! I've queued up XenoThesis 53: Ageing and Immortality Special on my podcast player.

To clarify: it's the ability to lock your bootloader that I'm saying is better protection from 3rd parties, not the proprietary nature of many of the current locks. The Heads tooling, for example, which allows you to verify the integrity of your boot image in coreboot, would be a FOSS alternative that provides analogous protection. Indeed, it's not real security if it's not out there in the open for everyone to hammer with full knowledge of how it works, and some nice big bug bounties (intentional or unintentional) on the other side to incentivise some scrutiny.

Thanks for the link. The problem of how to have a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I'm aware people have discussed, though it's mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains; I'm using them as an anchor at the extreme end of a distribution that I'm arguing we are already on. The problems an uploaded brain has in being able to trust its own cognition we are already beginning t... (read more)

The fact that they exert some of that power (an ever increasing amount) through software makes the question of the freedom of that software quite relevant to your autonomy in relation to those factors. Consider the G0v movement: working with open government software, or at least open APIs, civic hackers have been able to get improvements in things like government budgetary transparency, the ease with which you can file your tax forms, the ability to locate retailers with face masks in stock, etc. The ability to fork the software used by institutions, do... (read more)

5DirectedEvolution
It seems to me that what you're worried about is a tendency toward increased gating of resources that are not inherently monopolizable. Creating infrastructure that permits inherently hard-to-gate resources, like software or technological designs, to be gated more effectively creates unnecessary opportunities for rent-seeking. On the other hand, the traditional argument in favor of allowing such gates to be erected is that it creates incentives to produce the goods behind the gates, and we tend to reap much greater rewards in the long run by allowing gates to be temporarily erected and then torn down than we would by prohibiting such gates from ever being erected at all.

The fear is that some savvy agents will erect an elaborate system of gates such that they create, perhaps not a monopoly, but a sufficient system of gates to exact increasing fractions of created wealth over time. I think this is potentially worth worrying about, but it's not clear to me why we'd particularly focus on software as the linchpin of this dynamic, as opposed to all the other forms of gateable wealth. I think this is my primary objection to your argument.

Note that at least budgetary transparency and location of retailers with face masks are questions of data access. Sure, software is required to access that data, but it's making the data available that's key here. Forking software is useless without access to the data that software is meant to deliver. It's also useless without access to sufficiently powerful computing power. For example, if I had the source code for GPT-3, it wouldn't do me any good unless I had a sufficiently powerful supercomputer to run that code on. Furthermore, the human knowledge required to implement and maintain the code base can't just be forked even if you have access to the source code. Data, microchips, and expertise are where bits meet atoms. Source code is just a fraction of the total capital controlled by a given company.

Yes, a lot of in-house software has terrible UX, mostly because it is often for highly specialised applications. It may also suffer from a limited budget, poor feedback cycles if it was made as a one-off by an internal team or contractor, a tiny target user group, lack of access to UX expertise, etc.

Companies will optimise for their own workflows, no doubt, but there is often substantial overlap with common issues. Consider the work Red Hat/IBM did on PipeWire and WirePlumber, which will soon be delivering a substantially improved audio experience f... (read more)

2ChristianKl
Blender having a bad UX is what makes it a specialist application. If it were easier to use, I would expect more people to do 3D printing. There's certainly not zero investment into improving its UX, but the amount of resources going into that might be 1-2 orders of magnitude higher if it were proprietary software.

I would regard the specifics of your brain as private data. The infrastructural code to take a scan of an arbitrary brain and run its consciousness is a different matter. It's the difference between application code and a config file / secrets used in deploying a specific instance. You need to be able to trust the app that is running your brain, e.g. to not feed it false inputs.

7Raemon
I initially assumed something similar to what you just described. However, it's plausible to me that in practice the line between "program" and "data" might be blurry here.

Maybe, but I would be interested to see that tested empirically by some major jurisdiction. I would bet that, absent an easy option to use proprietary software, many more firms would hire developers or otherwise fund the development of features that they needed for their work, including usability and design coherence. There is a lot more community incentive to make software easy to use if the community contains more businesses whose bottom lines depend on it being easy to use. I suspect proprietary software may have us stuck in a local minimum; just because some of the current solutions produce partial alignments does not mean there aren't more optimal solutions available.

2ChristianKl
In-house software is usually worse as far as UX goes than software where the actual user of the software has to pay money. Even if companies cared about the usability of in-house software, general usability is not something that you need for particular use-cases. A company is likely going to optimize for its own workflows instead of optimizing for the usability of the average user. I don't see why we would expect Blender to look different if open source were legally mandated. Blender is already used a lot commercially.

Yes, I'm merely using an emulated consciousness as the idealised example of a problem that applies to non-emulated consciousnesses that are outsourcing cognitive work to computer systems that are outside of their control and may be misaligned with their interests. This is a bigger problem if you are completely emulated, but still a problem if you are using computational prostheses. I say it is bottlenecking us because even its partial form seems to be undermining our ability to have rational discourse in the present.

Dan Dennett has an excellent section on a very similar subject in his book 'Freedom Evolves'. To use a computer science analogy: true telepathy would be the ability for 2+ machines with different instruction set architectures to cross-compile code that is binary compatible with the other ISA and transmit the blob to the other machine. Instead, we have to serialise to a poorly defined standard and then read from the resulting file with a library that is only a best-guess implementation of the de facto spec.

I don't know, I'd say that guy torched a lot of future employment opportunities when he sabotaged his repos. Also obligatory: https://xkcd.com/2347/

Apologies, but I'm unclear whether you are characterising my post or my comment as "a bunch of buzzwords thrown together"; could you clarify? The post's main thrust was to make the simpler case that the more of our cognition takes place on a medium which we don't have control over and which is subject to external interests, the more concerned we have to be about trusting our cognition. The clearest and most extreme case of this is if your whole consciousness is running on someone else's hardware and software stack. However, I'll grant that I've not yet made the c... (read more)

2ChristianKl
The problem of raising the sanity waterline in the world that currently exists is not one of dealing with emulated consciousness. The word bottleneck only makes sense if you apply it to a particular moment in time. Before the invention of emulated consciousness, any problems with emulated consciousness are not bottlenecks.

I agree that computing resources to run code on would be a more complex proposition to make available to all. My point is more that if you purchase compute you should be free to use it to perform whatever computations you wish, and arbitrary barriers should not be erected to prevent you from using it in whatever way you see fit (cough Apple, cough Sony, cough cough).

There is a distinction between lock-in and the cost of moving between standards: an Ethereum developer trying to move to another blockchain tech is generally moving from one open, well-documented standard to another. There is even the possibility of (semi-)automated conversion/migration tools. They are not nearly as hamstrung as the person trying to migrate from a completely undocumented and deliberately obfuscated, or even encrypted, format to a different one.

The incentive to make it difficult to use if you are providing support has some intere... (read more)

4ChristianKl
You don't need to make any overt attempt to make things difficult to use for complex programs to become difficult to use. If you don't invest effort into making them easy to use, they usually become hard to use. There's little community incentive to make software easy to use. Ubuntu's usability for the average person is better than that of more community-driven projects.

Starting in 2008 (number from memory) Microsoft invested a lot in usability for Microsoft Office. As far as I know, there wasn't anything similar for LibreOffice. GIMP's usability is still similarly bad as it was a decade ago. Even though Blender basically won, its usability is awful.

One of the key points of Inadequate Equilibria is that there are plenty of problems in our society where it's hard to create an alignment between stakeholders to get the problem solved. If we were to legislate that all software has to be free software, that would block some of the mechanisms that are currently used effectively to get problems solved.

I'm not fundamentally opposed to exceptions in specific areas if there is sufficient reason; if I found the case that AI is such an exception convincing, I might carve one out for it. In most cases, however, and specifically in the mission of raising the sanity waterline so that we collectively make better decisions on things like prioritising x-risks, I would argue that a lack of free software and related issues of technology governance are currently a bottleneck in raising that waterline.

4ChristianKl
This sounds to me like a bunch of buzzwords thrown together. You have argued in your post that it might be useful to have free software but I have seen no argument why it's currently a bottleneck for raising the waterline. 

I thought the sections on identity and self-deception stuck out as being done better in this book than in other rationalist literature.

Yes, I've been looking for this post on idea inoculation and inferential distance and can't find it; I'm just getting an error. What happened to this content?

https://www.lesswrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance

For anyone else feeling this is less than intuitive, sone3d is, I think, likely referring to, respectively:

Idea Inoculation is a very useful concept, and definitely something to bear in mind when playing the 'weak' form of the double crux game.

Correct me if I'm wrong, but I have not noticed anyone else post something linking inferential distance with double cruxing, so maybe that's what I should have emphasised in the title.

You are correct of course; I was mostly envisioning scenarios where you have a very solid conclusion which you are attempting to convey to another party that you have good reason to believe is wrong about or ignorant of. (I was also hoping for some mild comedic effect from an obvious answer.)

For the most part, if you are going into a conversation where you are attempting to impart knowledge, you are assuming that it is probably largely correct. One of the advantages of finding the crux or 'graft point' at which you want to attach you belei... (read more)

I made a deck of cards with 104 biases from the Wikipedia page on cognitive biases on them to play this and related games with. You can get the image files here:

https://github.com/RichardJActon/CognitiveBiasCards

(There is also a link to a print service where the cards are preconfigured, so you can easily buy a deck if you want.)

The visuals on these cards were originally created by Eric Fernandez (of http://royalsocietyofaccountplanning.blogspot.co.uk/2010/04/new-study-guide-to-help-you-memorize.html).