The lack of good population genetic information in animal models, and of deep phenotyping of complex behavioural traits, is probably one of the biggest impediments to robust animal testing of this general approach.
(For people reading this thread who want an intro to fine-mapping, this lecture is a great place to start for a high-level overview: https://www.youtube.com/watch?v=pglYf7wocSI)
Kind of. There are many changes that in isolation get you a bit more oxygen, but many of them act through the same mechanism, so you change 1000 things that get you +1 oxygen on their own but in combination only get you +500.
To use a software analogy: imagine an object with two methods, where calling either of them sets a property of the object to true. It doesn't matter if you call both methods, or if you have a bunch of functions that call those methods; you still just get true. Calling either method, or any function that calls them, is going to be sli...
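To make the analogy concrete, here's a minimal Python sketch (all names are illustrative, not from any real model):

```python
class OxygenPathway:
    """Toy model of a shared mechanism that saturates: many inputs, one effect."""

    def __init__(self):
        self.activated = False  # the shared downstream property

    def method_a(self):
        self.activated = True

    def method_b(self):
        self.activated = True


def upstream_variant_1(pathway):
    pathway.method_a()

def upstream_variant_2(pathway):
    pathway.method_b()

pathway = OxygenPathway()
upstream_variant_1(pathway)
upstream_variant_2(pathway)  # redundant: the property is already True
print(pathway.activated)     # True either way; the effects do not add
```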
I'm curious about the basis on which you are assigning a probability of causality without a method like Mendelian randomisation, or something that tries to assign a probability of an effect by interpreting the biology, like mapping the output of something like SnpEff to an approximate probability of effect.
The logic of counting 30% of its effect based on a 30% chance it's causal seems like it will be pretty high variance and only work out over a pretty large number of edits. It also assumes no unexpected effects from editing SNPs that are non-causal for whatever trait you are targeting but might do something else when edited.
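A quick simulation of that logic, assuming independent edits with a fixed causal probability, a uniform effect size, and no off-target effects (all parameters illustrative):

```python
import random

def simulate_edits(n_edits, p_causal=0.3, effect=1.0, trials=10_000):
    """Expected gain is n_edits * p_causal * effect, but realised gain varies."""
    gains = []
    for _ in range(trials):
        gain = sum(effect for _ in range(n_edits) if random.random() < p_causal)
        gains.append(gain)
    mean = sum(gains) / trials
    sd = (sum((g - mean) ** 2 for g in gains) / trials) ** 0.5
    return mean, sd

for n in (10, 100, 1000):
    mean, sd = simulate_edits(n)
    print(f"{n} edits: mean gain {mean:.1f}, sd {sd:.1f}, cv {sd / mean:.2f}")
```

The relative spread (the cv column) shrinks roughly as 1/sqrt(n), which is exactly why this logic only works out over a pretty large number of edits.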
Could you expand on the sense in which you have 'taken this into account' in your models? What are you expecting to achieve by editing non-causal SNPs?
The first paper I linked is about epistatic effects on the additivity of QTLs for a quantitative trait, specifically heading date in rice, so this is evidence for this sort of effect on such a trait.
The general problem is that without a robust causal understanding of what an edit does, it is very hard to predict what sorts of problems might arise from novel combinations of variants in a haplotype. That's just the nature o...
There are a couple of major problems with naively intervening to edit sites associated with some phenotype in a GWAS or polygenic risk score.
The SNP itself is (usually) not causal
Genotyping arrays select SNPs whose genotype is correlated with that of the region around them; such a SNP is said to be in linkage with this region, as the region tends to be inherited together with the SNP when recombination happens in meiosis. This is a matter of degree, and linkage scores allow thresholds to be set for how indicative a SNP is of the genotype of a given region.
If it is not t
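As a concrete sketch of what "in linkage" means quantitatively, here is a toy r² calculation between a tag SNP and a nearby hypothetical causal variant, using made-up phased haplotypes (0/1 alleles; not from any real panel):

```python
def r_squared(hap_a, hap_b):
    """LD r^2 between two biallelic sites from phased haplotypes (0/1 alleles)."""
    n = len(hap_a)
    p_a = sum(hap_a) / n                 # allele frequency at site A
    p_b = sum(hap_b) / n                 # allele frequency at site B
    p_ab = sum(a and b for a, b in zip(hap_a, hap_b)) / n  # joint frequency
    d = p_ab - p_a * p_b                 # disequilibrium coefficient D
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Tag SNP usually, but not always, inherited with the causal allele:
tag    = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
causal = [1, 1, 0, 0, 1, 0, 1, 0, 0, 0]
print(r_squared(tag, causal))  # ~0.67 here; r^2 of 1 would be perfect linkage
```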
I have a reading list and recommendations for sources of ongoing news in the longevity/immortality/life extension space in the show notes for the recent special episode of my podcast, where my co-host Michael and I discuss ageing and immortality. We are both biology PhDs; my background is in the epigenetics of ageing and Michael's is in bone stem cells.
https://www.xenothesis.com/53_xenogenesis_ageing_and_immortality_special/
I should add "Immune: a Journey into the Mysterious System that Keeps You Alive" to that list actually.
In particular from that list I recomme...
To clarify, it's the ability to lock your bootloader that I'm saying is better protection from 3rd parties, not the proprietary nature of many of the current locks. Heads, for example, which allows you to verify the integrity of your boot image in coreboot, would be a FOSS alternative that provides analogous protection. Indeed, it's not real security if it's not out there in the open for everyone to hammer with full knowledge of how it works, and some nice big bug bounties (intentional or unintentional) on the other side to incentivise some scrutiny.
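For illustration, the core idea of boot-image verification can be sketched like this (Heads itself uses TPM-backed measured boot and signing rather than a bare file hash; the path and reference hash below are placeholders):

```python
import hashlib

KNOWN_GOOD_SHA256 = "replace-with-hash-recorded-at-provisioning"  # placeholder

def hash_file(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 so large images don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if hash_file("/boot/vmlinuz") != KNOWN_GOOD_SHA256:
    raise SystemExit("boot image does not match the recorded measurement")
```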
Thanks for the link. The problem of how to have a cryptographic root of trust for an uploaded person, and how to maintain an ongoing state of trusted operation, is a tricky one that I'm aware people have discussed, though it's mostly well over my cryptography pay grade. The main point I was trying to get at was not primarily about uploaded brains; I'm using them as an anchor at the extreme end of a distribution that I'm arguing we are already on. The problems an uploaded brain has in being able to trust its own cognition we are already beginning t...
The fact that they exert some of that power (an ever increasing amount) through software makes the question of the freedom of that software quite relevant to your autonomy in relation to those factors. Consider the g0v movement: when working with open government software, or at least open APIs, civic hackers have been able to get improvements in things like government budgetary transparency, the ease with which you can file your tax forms, the ability to locate retailers with face masks in stock, etc. The ability to fork the software used by institutions, do...
Yes, a lot of in-house software has terrible UX, mostly because it is often for highly specialised applications. It may also suffer from limited budget, poor feedback cycles (if it was made as a one-off by an internal team or contractor), a tiny target user group, lack of access to UX expertise, etc.
Companies will optimise for their own workflows, no doubt, but there is often substantial overlap with common issues. Consider the work Red Hat/IBM did on PipeWire and WirePlumber, which will soon be delivering a substantially improved audio experience f...
I would regard the specifics of your brain as private data. The infrastructural code to take a scan of an arbitrary brain and run its consciousness is a different matter. It's the difference between application code and a config file / secrets used in deploying a specific instance. You need to be able to trust the app that is running your brain, e.g. to not feed it false inputs.
Maybe, but I would be interested to see that tested empirically by some major jurisdiction. I would bet that in the absence of an easy option to use proprietary software, many more firms would hire developers or otherwise fund the development of features that they needed for their work, including usability and design coherence. There is a lot more community incentive to make it easy to use if the community contains more businesses whose bottom lines depend on it being easy to use. I suspect proprietary software may have us stuck in a local optimum; just because some of the current solutions produce partial alignments does not mean there aren't more optimal solutions available.
Yes, I'm merely using an emulated consciousness as the idealised example of a problem that applies to non-emulated consciousnesses that are outsourcing cognitive work to computer systems that are outside of their control and may be misaligned with their interests. This is a bigger problem for you if you are completely emulated, but still a problem if you are using computational prostheses. I say it is bottlenecking us because even its partial form seems to be undermining our ability to have rational discourse in the present.
Dan Dennett has an excellent section on a very similar subject in his book 'Freedom Evolves'. To use a computer science analogy: true telepathy would be the ability of 2+ machines with different instruction set architectures to cross-compile code that is binary-compatible with the other ISA and transmit the blob to the other machine. Instead, we have to serialise to a poorly defined standard and then read from the resulting file with a library that is only a best guess at an implementation of the de facto spec.
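A rough sketch of that analogy in code (the field names and the "de facto spec" here are invented for illustration):

```python
import json

# Speaker's internal representation ("ISA A"): rich and machine-specific.
thought = {"concept": "dog", "valence": 0.8,
           "associations": ["loyal", "childhood pet"]}

# Serialise to the shared, poorly specified medium (words ~ JSON with no schema).
utterance = json.dumps({"concept": thought["concept"],
                        "valence": thought["valence"]})
# Note: the associations never make it into the serialised form at all.

# Listener's best-guess deserialiser ("ISA B"): fills gaps with its own defaults.
def interpret(msg):
    data = json.loads(msg)
    return {
        "concept": data.get("concept", "unknown"),
        "valence": data.get("valence", 0.0),
        "associations": ["guard dog"],  # the listener's priors, not the speaker's
    }

print(interpret(utterance))  # overlaps with the original thought, but is not it
```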
I don't know, I'd say that guy torched a lot of future employment opportunities when he sabotaged his repos. Also obligatory: https://xkcd.com/2347/
Apologies, but I'm unclear if you are characterising my post or my comment as "a bunch of buzzwords thrown together"; could you clarify? The post's main thrust was to make the simpler case that the more of our cognition takes place on a medium which we don't have control over, and which is subject to external interests, the more concerned we have to be about trusting our cognition. The clearest and most extreme case of this is if your whole consciousness is running on someone else's hardware and software stack. However, I'll grant that I've not yet made the c...
I agree that computing resources to run code on would be a more complex proposition to make available to all; my point is more that if you purchase compute you should be free to use it to perform whatever computations you wish, and arbitrary barriers should not be erected to prevent you from using it in whatever way you see fit (cough Apple, cough Sony, cough cough).
There is a distinction between lock-in and the cost of moving between standards: an Ethereum developer trying to move to another blockchain tech is generally moving from one open, well-documented standard to another. There is even the possibility of (semi-)automated conversion/migration tools. They are not nearly as hamstrung as the person trying to migrate from a completely undocumented and deliberately obfuscated, or even encrypted, format to a different one.
The incentive to make it difficult to use if you are providing support has some intere...
I'm not fundamentally opposed to exceptions in specific areas if there is sufficient reason; if I found the case that AI is such an exception convincing, I might carve one out for it. In most cases, however, and specifically in the mission of raising the sanity waterline so that we collectively make better decisions on things like prioritising x-risks, I would argue that a lack of free software, and related issues of technology governance, are currently a bottleneck in raising that waterline.
I thought the sections on identity and self-deception stuck out as being done better in this book than in other rationalist literature.
Yes, I've been looking for this post on idea inoculation and inferential distance and can't find it; I'm just getting an error. What happened to this content?
https://www.lesswrong.com/posts/aYX6s8SYuTNaM2jh3/idea-inoculation-inferential-distance
For anyone else feeling this is less than intuitive, sone3d is, I think, likely referring to, respectively:
Idea inoculation is a very useful concept, and definitely something to bear in mind when playing the 'weak' form of the double crux game.
Correct me if I'm wrong, but I have not noticed anyone else post something linking inferential distance with double cruxing, so maybe that's what I should have emphasised in the title.
You are correct of course; I was mostly envisioning scenarios where you have a very solid conclusion which you are attempting to convey to another party that you have good reason to believe is wrong about, or ignorant of, this conclusion. (I was also hoping for some mild comedic effect from an obvious answer.)
For the most part, if you are going into a conversation where you are attempting to impart knowledge, you are assuming that it is probably largely correct. One of the advantages of finding the crux or 'graft point' at which you want to attach your belei...
I made a deck of cards with 104 biases from the Wikipedia page on cognitive biases on them to play this and related games with. You can get the image files here:
https://github.com/RichardJActon/CognitiveBiasCards
(There is also a link to a printing service where they are preconfigured, so you can easily buy a deck if you want.)
The visuals on these cards were originally created by Eric Fernandez (of http://royalsocietyofaccountplanning.blogspot.co.uk/2010/04/new-study-guide-to-help-you-memorize.html)
This is a link to a resource I came across for people wishing to teach/learn Fermi calculation. It contains a problem set, a potentially useful asset, especially for meetup planners.
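As a taster (not from the linked resource), here is the classic worked example of this kind of problem, estimating the number of piano tuners in Chicago from round order-of-magnitude inputs:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
population        = 3_000_000   # people in Chicago, to one significant figure
people_per_house  = 2
pianos_per_house  = 1 / 20      # guess: 1 in 20 households owns a piano
tunings_per_year  = 1           # each piano tuned about once a year
tunings_per_tuner = 2 * 5 * 50  # 2 tunings/day, 5 days/week, 50 weeks/year

pianos = population / people_per_house * pianos_per_house
tuners = pianos * tunings_per_year / tunings_per_tuner
print(round(tuners))  # ~150, within an order of magnitude of reality
```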
GeneSmith gave some more details about his background in this episode of the Bayesian Conspiracy podcast: https://www.thebayesianconspiracy.com/2025/02/231-superbabies-with-gene-smith/