All of Jackson Wagner's Comments + Replies

Note that this story has received a beautifully-animated video adaptation by Rational Animations!
 

Possibly look for other skills / career paths, besides math and computer science?  Glancing through 80,000 Hours' list:

- AI governance and policy -- I'm guessing that seeking out "policy people" will be a non-starter in Russia, either because it's dangerous or because there are fewer such people (not whole graduating classes at Harvard, etc, waiting to become the next generation of DC elites).

- AI safety technical research -- of course you are already thinking about this via IOM, IOI, etc. Others have mentioned trying to expand to LLM-specific competi... (read more)

5Mikhail Samin
* Yep, we've also been sending the books to winners of national and international olympiads in biology and chemistry.
* Sending these books to policy-/foreign-policy-related students seems like a bad idea: too many risks involved (in Russia, this is a career path you often choose if you're not very value-aligned. For context, according to Russia, there's an extremist organization called the "international LGBT movement").
* If you know anyone with an understanding of the context who'd want to find more people to send the books to, let me know. LLM competitions, ML hackathons, etc. all might be good.
* Ideally, we'd also want to then alignment-pill these people, but no one has the ball on this.

Satellites were also plausibly a very important military technology.  Since the 1960s, some applications have panned out, while others haven't.  Some of the things that have worked out:

  • GPS satellites were designed by the US Air Force in the 1970s and '80s for military navigation and precision weapons guidance, and only later incidentally became integral to the world economy.  They still do a great job guiding JDAMs, powering the style of "precision warfare" that has given the USA a decisive military advantage ever since the first Iraq war in 1991.
  • Spy satellites were very
... (read more)

Maybe other people have a very different image of meditation than I do, such that they imagine it as something much more delusional and hyperreligious? Eg, some religious people do stuff like chanting mantras, or visualizing specific images of Buddhist deities, which indeed seems pretty crazy to me.

But the kind of meditation taught by popular secular sources like Sam Harris's Waking Up app, (or that I talk about in my "Examining The Witness" youtube series about the videogame The Witness), seems to me obviously much closer to basic psychology or rationali... (read more)

4Viliam
Thanks for answering my question directly in the second half.

I find the testimonies of rationalists who experimented with meditation less convincing than perhaps I should, simply because of selection bias. People who have a pre-existing affinity towards "woo" will presumably be more likely to try meditation. And they will be more likely to report that it works, whether it does or not. I am not sure how much I should discount for this; perhaps I overdo it. I don't know.

A proper experiment would require a control group -- some people who were originally skeptical about meditation and Buddhism in general, and only agreed to do some exactly defined exercises, and preferably the reported differences should be measurable somehow. Otherwise, we have another selection bias: if there are people for whom meditation does nothing, or is even harmful, they will stop trying. So at the end, 100% of people who tried will report success (whether real or imaginary), because those who didn't see any success have selected themselves out.

I approve of making the "secular version of Buddhism", but in a similar way, we could make a "secular version of Christianity". (For example, how is gratitude journaling significantly different from thanking God for all his blessings before you go to sleep?) And yet, I assume that the objection against "secular Christianity" on Less Wrong would be much greater than against "secular Buddhism". Maybe I am wrong, but the fact that no one is currently promoting "secular Christianity" on LW sounds like weak evidence. I suspect the relevant difference is that for an American atheist, Christianity is the outgroup, and Buddhism is the fargroup. Meditation is culturally acceptable among contrarians, because our neighbors don't do it. But that is unrelated to whether it works or not.

Also, I am not sure how secular the "secular Buddhism" actually is, given that people still go to retreats organized by religious people, etc. It feels too much for me to trust that s

I think there are many cases of reasonably successful people who often cite either some variety of meditation, or other self-improvement regimes / habits, as having a big impact on their success. This random article I googled cites the billionaires Ray Dalio, Marc Benioff, and Bill Gates, among others. (https://trytwello.com/ceos-that-meditate/)

Similarly you could find people (like Arnold Schwarzenegger, if I recall?) citing that adopting a more mature, stoic mindset about life was helpful to them -- Ray Dalio has this whole series of videos on "life pri... (read more)

4MondSemmel
Re: successful people who meditate, IIRC in Tim Ferriss' book Tools of Titans, meditation was one of the most commonly mentioned habits of the interviewees.
Viliam107

To compare to the obvious alternative, is the evidence for meditation stronger than the evidence for prayer? I assume there are also some religious billionaires and other successful people who would attribute their success to praying every day or something like that.

It feels sorta understandable to me (albeit frustrating) that OpenPhil faces these assorted political constraints.  In my view this seems to create a big unfilled niche in the rationalist ecosystem: a new, more right-coded, EA-adjacent funding organization could optimize itself for being able to enter many of those blacklisted areas with enthusiasm.

If I were a billionaire, I would love to put together a kind of "completion portfolio" to complement some of OP's work.  Rationality community building, macrostrategy stuff, AI-related advocacy to try an... (read more)

There are actually a number of ways that you might see a permanently stable totalitarian government arise, in addition to the simplest idea that maybe the leader never dies:

https://80000hours.org/problem-profiles/risks-of-stable-totalitarianism/

I and perhaps other LessWrongers would appreciate reading your review (of any length) of the book, since lots of us loved HPMOR, the Sequences, etc., but are collectively skeptical / on the fence about whether to dive into Project Lawful.  (What's the best way to read the bizarre glowfic format?  What are the main themes of the book, and which did you like best?  etc.)

1Mo Putera
I thought this review was fine: https://recordcrash.substack.com/p/mad-investor-chaos-woman-asmodeus 
2Tapatakt
My opinion, very briefly:

Good stuff:
* Deception plotline
* Demonstration of LDT in action
* A lot of thought processes of smart characters
* Convincing depictions of how people with a very weird and evil ideology can have an at least seemingly consistent worldview, be human, and not be completely insane.

Stuff that might be good for some and bad for others:
* It's Yudkowsky's work, and it shows. Some people like the style of his texts, some don't.
* Sex scenes (not very erotic and mostly talking)
* Re-construction of Pathfinder game mechanics in the setting
* Math classes (not the best possible explanations, but not the worst either)
* A lot of descriptions of "how everything works on dath ilan"

Bad stuff:
* It's isekai (bad if you're allergic to this genre)
* It's very long
* And in some places it could be shorter without losing anything (but, I think, nothing as egregious as the school wars in HPMOR) (and if you don't appreciate the point about "thought processes of smart characters", then it could be much shorter in most places without losing anything)
Error104

I loved Project Lawful/Planecrash (not sure which is the actual title), but I do hesitate to recommend it to others. Not everyone likes their medium-core S&M with a side of hardcore decision theory, or vice-versa. It is definitely weirder than HPMOR.

Something that threw me off at first: it takes the mechanics of the adapted setting very literally (e.g. spell slots and saving throws are non-metaphorical in-universe Things). That's not normal for (good) game fanfiction. The authors make it work anyway -- perhaps because clear rules make it easier to prod... (read more)

7Double
Spoiler-free again: Good to know there's demand for such a review! It's now on my todo list. To quickly address some of your questions:

Pros of PL:
* If the premise I described above interests you, then PL will interest you.
* Some good Sequences-style rationality.
* I certainly was obsessed reading it for months.

Cons:
* Some of the rationality lectures were too long, but I didn't mind much.
* The least sexy sex scenes, because they are about moral dilemmas and deception, not sex.
* Really long. Even if you read it constantly and read quickly, it will take time (1.8 million words will do that). I really have to read some authors that aren't Yud. Yud is great, but this is clearly too much of him, and I'm sure he'd agree.

I read PL when it was already complete, so maybe I didn't get the full experience, but there really wasn't anything all that strange about the format (the content is another matter!). I can imagine that *writing* a glowfic would be a much different experience than writing a normal serialized work (i.e., dealing with your co-authors), but reading it isn't very different from reading any other fiction. Look at the picture to see the POV, look at who's the author if you're curious, and read as normal. I'm used to books that change POV (though usually not this often). There are sometimes bonus tangent threads, but the story is linear. What problems do you have with the glowfic format?

Main themes would require a longer post, but I hope this helps.
2RHollerith
My question is, Can I download an offline copy of it -- either text or spoken audio? The audio consists of 195 episodes, each of which can be individually downloaded, but can I get it as a single audio file (of duration 150 hours or so)?

This is nice!  I like seeing all the different subfields of research listed and compared; as a non-medical person I often just hear about one at a time in any given news story, which makes things confusing.

Some other things I hear about in longevity spaces:
- Senescent-cell-based theories and medicines -- what's up with these?  This seems like something people were actually trying in humans; any progress, or is this a dud?
- Repurposing essentially random drugs that might have some effect on longevity -- most famously the diabetes drug metformin (a... (read more)

7Abhishaike Mahajan
Thank you for reading!  Senescent-cell-based therapeutics feel like somewhat of a dead end...senescence happens for a reason, and clearing out these cells has some second-order downsides. E.g., the inflammation caused by senescence is important for acute injury repair. I am less well-read on this area though! Metformin and rapamycin are promising in the same way Ozempic is promising; helping curtail metabolic problems helps a LOT of things, but it won't lead to dramatic changes in lifespan. Definitely in healthspan! But even there, nothing insane.  Imo, partial cellular reprogramming is the only real viable approach we have left; I'm kinda unsure what else the field has to offer if that ends up failing.

Fellow Thiel fans may be interested in this post of mine called "X-Risk, Anthropics, & Peter Thiel's Investment Thesis", analyzing Thiel's old essay "The Optimistic Thought Experiment", and trying to figure out how he thinks about the intersection of markets and existential risk.

"Americans eat more fats and oils, more sugars and sweets, more grains, and more red meat; all four items that grew the most in price since 2003."

Nice to know that you can eat healthy -- fish, veggies, beans/nuts, eggs, fresh fruit, etc -- and beat inflation at the same time! (Albeit these healthier foods still probably have a higher baseline price. But maybe not for much longer!)

The linked chart actually makes red meat look fine (beef has middling inflation, and pork has actually experienced deflation), but beverages, another generally unhealthy food, a... (read more)

2Declan Molony
"Albeit these healthier foods still probably have a higher baseline price." Maybe in the short-term, but considering the lifetime consequences of unhealthy eating (e.g., atherosclerosis, heart disease, cancer, dementia---Dr. Peter Attia's stated 4 horsemen of death that account for 80% of death in the US---not to mention the emotional damage on your mood and potential productivity), the cost/benefit analysis seems heavily weighted in favor of eating healthier foods.

A thoughtful post! I think about this kind of stuff a lot, and wonder what the implications are. If we're more pessimistic about saving lives in sub-saharan africa, should we:

  1. promote things like lead removal (similar evidence-backed, scalable intervention as bednets, but aimed more directly at human capital)?
  2. promote things like charter cities (untested crazy longshot megaproject, but aimed squarely at transformative political / societal improvements)?
  3. switch to bednet-style lifesaving charities in South Asia, like you mention?
  4. keep on trucking with ou
... (read more)
2vaishnav92
I posted this on the EA forum a couple of weeks ago - https://forum.effectivealtruism.org/posts/7WKiW4fTvJMzJwPsk/adverse-selection-in-minimizing-cost-per-life-saved No surprise that people on the forum seem to think #4 is the right answer (although they did acknowledge this is a valid consideration). But a lot of it was "this is so cheap that this is probably still the right answer" and "we should be humble and not violate the intuition people have that all lives are equal". 

Re: your point #2, there is another potential spiral where abstract concepts of "greatness" are increasingly defined in a hostile and negative way by partisans of slave morality.  This might make it harder to have that "aspirational dialogue about what counts as greatness", as it gets increasingly difficult for ordinary people to even conceptualize a good version of greatness worth aspiring to.  ("Why would I want to become an entrepreneur and found a company?  Wouldn't that make me an evil big-corporation CEO, which has a whiff of the same f... (read more)

Feynman is imagining lots of components being made with "hand tools", in order to cut down on the amount of specialized machinery we need. So you'd want sophisticated manipulators to use the tools, move the components, clean up bits of waste, etc. Plus of course for gathering raw resources and navigating Canadian tundra. And you'd need video cameras for the system to look at what it's doing (otherwise you'd only have feed-forward controls in many situations, which would probably cause lots of cascading errors).

I don't know how big a Raspberry Pi would be if it had to be hand-assembled from transistors big enough to pick up individually. So maybe it's doable!

1[anonymous]
.

idk, you still have to fit video cameras and complex robotic arms and wifi equipment into that 1m^3 box, even if you are doing all the AI inference somewhere else!  I have a much longer comment replying to the top-level post, where I try to analyze the concept of an autofac and what an optimized autofac design would really look like.  Imagining a 100% self-contained design is a pretty cool intellectual exercise, but it's hard to imagine a situation where it doesn't make sense to import the most complex components from somewhere else (at least initially, until you can make computers that don't take up 90% of your manufacturing output).

1[anonymous]
.

This was a very interesting post.  A few scattered thoughts, as I try to take a step back and take a big-picture economic view of this idea:

What is an autofac?  It is a vastly simplified economy, in the hopes that enough simplification will unlock various big gains (like gains from "automation").  Let's interpolate between the existing global economy, and Feynman's proposed 1-meter cube.  It's not true that "the smallest technological system capable of physical self-reproduction is the entire economy," since I can imagine many potentia... (read more)

9Carl Feynman
Wow, I think that comment is as long as my original essay.  Lots of good points.  Let me take them one by one.

The real motivation for the efficiency-impairing simplifications is none of size, cost, or complexity.  It is to reduce replication time.  We need an Autofac efficient enough that what it produces is higher value than what it consumes.  We don't want to reproduce Soviet industry, much of which processed expensive resources into lousy products worth less than the inputs.  Having achieved this minimum, however, the goal is to allow the shortest possible time of replication.  This allows for the most rapid production of the millions of tons of machinery needed to produce massive effects.

Consider that the Autofac, 50 kg in a 1 m^3, is modeled on a regular machine shop, with the machinist replaced by a robot.  The machine shop is 6250 kg in 125 m^3.  I just scale it down by a factor of 5, and thereby reduce the duplication time by a factor of 5.  So it duplicates in 5 weeks instead of 25 weeks.  Suppose we start the Autofac versus the robot machine shop at the same time.  After a year, there are 1000 Autofacs versus 4 machine shops; or in terms of mass, 50,000 kg of Autofac and 25,000 kg of machine shop.  After two years, 50,000,000 kg of Autofac versus 100,000 kg of machine shop.  After 3 years, it's even more extreme.  At any time, we can turn the Autofacs from making themselves to making what we need, or to making the tools to make what we need.  The Autofac wins by orders of magnitude even if it's teeny and inefficient, because of sheer speed.

That's why I picked a one meter cube.  I would have picked a smaller cube, that reproduced faster, but that would scale various production processes beyond reasonable limits.  I didn't want to venture beyond ordinary machining into weird techniques only watchmakers use.

This is certainly a consideration.  Given the phenomenal reproductive capacity of the Autofac, there's an enormous return to finishing design as qu
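The growth arithmetic in that reply can be sketched in a few lines. This is a toy model assuming pure, uninterrupted exponential doubling (no ramp-up time, failures, or resource limits); the unit masses and replication times are the ones given in the comment:

```python
# Toy model of exponential self-replication, using the figures from the
# comment above: a 50 kg Autofac doubling every 5 weeks, versus a
# 6250 kg robot machine shop doubling every 25 weeks.
def fleet_mass(unit_kg: float, doubling_weeks: float, weeks: float) -> float:
    """Total fleet mass after `weeks` of uninterrupted doubling."""
    return unit_kg * 2 ** (weeks / doubling_weeks)

for years in (1, 2, 3):
    weeks = 52 * years
    autofac = fleet_mass(50, 5, weeks)
    shop = fleet_mass(6250, 25, weeks)
    print(f"year {years}: {autofac:.3g} kg of Autofac vs {shop:.3g} kg of shop")
```

Despite starting 125x lighter, the Autofac fleet overtakes the machine shops within the first year and is ahead by orders of magnitude thereafter, which is the comment's point about sheer replication speed dominating initial size.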

1950s era computers likely couldn't handle the complex AI tasks imagined here (doing image recognition; navigating rough Baffin Island terrain, finishing parts with hand tools, etc) without taking up much more than 1 meter cubed.

1[anonymous]
.

Socialism / communism is about equally abstract as Georgism, and it certainly inspired a lot of people to fight! Similarly, Republican campaigns to lower corporate tax rates, cut regulations, reduce entitlement spending, etc, are pretty abstract (and often actively unpopular when people do understand them!), but have achieved some notable victories over the years. Georgism is similar to YIMBYism, which has lots of victories these days, even though YIMBYism also suffers from being more abstract than conspiracy theories with obvious villains about people "... (read more)

2mako yass
Maybe "abstract" was the wrong word. Communism and minarchy both have very simple visceral moral impulses supporting them: fairness/equality vs liberty/choice. It's possible to get a person into a state where they feel one pole so intensely that they will be willing to fight against someone fighting earnestly for the other pole (right? But I'm not sure there's actually been a civil war between communists and minarchists; it's usually been communists vs monarchists/nationalists).

For grey civics, I don't know what the unifying principle is. Commitment to growth? Progress? Hey, maybe that's it. I've been considering the slogan "defending zoning isn't compatible with calling yourself a progressive. If you believe in urban zoning, you don't believe in progress." Progress seems to require meritocracy -- rewarding work in proportion to its subjective EV or capricious outcomes, distributing rewards unevenly -- and progress comes with a duty to future generations that minarchists might not like very much, but at least in tech, people seem alright with that.

On the left, the tension mostly comes out of earnest disbelief; it's not intuitive that progress is real. For most of our evolutionary history it wasn't real, and today it happens only on the scale of years, and its every step is unprecedented. But how would we resolve the tension with humanism? I guess e/acc is the faction within the grey tribe who don't try to resolve that tension; they lean into it. They either explicitly reject the duty to defend the preferences of present humanity against those aspects of progress that threaten it, or they find reasons to downplay the imminence of those threats. The other faction has to sit and listen while Hanson warns them about the inevitability of absolute cultural drift, and I don't think we know what to say to that.

Future readers of this post might be interested this other lesswrong post about the current state of multiplex gene editing: https://www.lesswrong.com/posts/oSy5vHvwSfnjmC7Tf/multiplex-gene-editing-where-are-we-now

Future readers of this blog post may be interested in this book-review entry at ACX, which is much more suspicious/wary/pessimistic about prion disease generally:

  • They dispute the idea that having M/V or V/V genes reduces the odds of getting CJD / mad cow disease / etc.
  • They imply that Britain's mad cow disease problem maybe never really went away, in the sense that "spontaneous" cases of CJD have quadrupled since the 80s, so it seems CJD is being passed around somehow?

https://www.astralcodexten.com/p/your-book-review-the-family-that

What kinds of space resources are like "mice & cheese"?  I am picturing civilizations expanding to new star systems mostly for the matter and energy (turn asteroids & planets into a dyson swarm of orbiting solar panels and supercomputers on which to run trillions of emulated minds, plus constructing new probes to send onwards to new star systems).

re: the Three Body Problem books -- I think the book series imagines that alien life is much, much more common (ie, many civilizations per galaxy) than Robin Hanson imagines in his Grabby Aliens hypot... (read more)

2JBlack
After even the first million years of expansion as slow as 0.1c, the galaxy is full and it's time to go intergalactic. A million years is nothing on the scale of the universe's age. When sending a probe millions of light years to other galaxies, the expense of 0.999c probes starts to look more worthwhile than 0.8c ones, saving hundreds of thousands of years. Chances are that it wouldn't just be one probe either, but billions of them, seeding each galaxy within plausible reach. Though as with any discussion about these sorts of things, we have no idea what we don't know about what a civilization a million years old might achieve. Discussions of relativistic probes are probably even more laughably primitive than those of using swan's wings to fly to the abode of the Gods.
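The "galaxy is full after a million years" claim checks out on a napkin, assuming a Milky Way diameter of roughly 100,000 light-years (a figure I'm supplying, not one from the comment):

```python
# Time for an expansion front to cross the galaxy at a given speed.
GALAXY_DIAMETER_LY = 100_000  # assumed Milky Way diameter, in light-years

def crossing_time_years(v_fraction_of_c: float) -> float:
    """Years to traverse the galactic disk at v (fraction of lightspeed)."""
    return GALAXY_DIAMETER_LY / v_fraction_of_c

print(crossing_time_years(0.1))  # ~1e6 years: "a million years is nothing"
```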

Yes, it does have to be fast IMO, but I think fast expansion (at least among civilizations that decide to expand much at all) is very likely.

Of course the first few starships that a civilization sends to colonize the nearest stars will probably not be going anywhere near the speed of light.  (Unless it really is a paperclips-style superintelligence, perhaps.)  But within a million years or so, even with relatively slow-moving ships, you have colonized thousands of solar systems, built dyson swarms around every star, have a total population in the... (read more)

6jmh
Maybe, but I have to also put this all in a somewhat different frame. Is the universe populated by birds or mice? Are the resources a nice ground full of worms, or perhaps dangerous traps with the cheese we want?  So if we're birds and the universe's resources are worms, maybe a race. If we're all mice and the resources are those dangerous traps with cheese, well, the old saying: "The early bird might get the worm, but the second mouse gets the cheese." In a universe populated by mice & cheese, civilization expansion may well be much slower and measured. Perhaps we can add one of the thoughts from the Three Body Problem series -- advertising your civilization in the universe might be a sure way to kill yourself. Possibly fits with the Grabby Aliens thought, but would argue for a different type of expansion pattern, I would think. That, and I'm not sure how the apparent solution to energy problems (apparently a civilization has no energy problem, so acceleration and deceleration costs don't really matter) impacts a desire for additional resources. And if the energy problem is not solved, then we need to know the cost curves for acceleration and deceleration to optimize speed in that resource search/grab.

I think part of the "calculus" being run by the AI safety folks is as follows:

  1. there are certainly both some dumb ways humanity could die (ie, AI-enabled bioweapon terrorism that could have easily been prevented by some RLHF + basic checks at protein synthesis companies), as well as some very tricky, advanced ways (AI takeover by a superintelligence with a very subtle form of misalignment, using lots of brilliant deception, etc)

  2. It seems like the dumber ways are generally more obvious / visible to other people (like military generals or the median vote

... (read more)

re: your comments on Fermi paradox -- if an alien super-civilization (or alien-killing AI) is expanding in all directions at close to the speed of light (which you might expect a superintelligence to do), then you mostly don't see them coming until it's nearly too late, since the civilization is expanding almost as fast as the light emitted by the civilization. So it might look like the universe is empty, even if there's actually a couple of civilizations racing right towards you!

There is some interesting cosmological evidence that we are in fact living i... (read more)

4faul_sname
"Close to the speed of light" has to be quite close to the speed of light for that argument to hold (at 0.8c about half of the volume in the light cone of an expanding civilization is outside of that civilization's expansion front).
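faul_sname's figure follows from volumes scaling with the cube of radius: a front expanding at speed v (as a fraction of c) for time t fills a sphere of radius v·t inside a light cone of radius c·t, i.e. a fraction v³ of its volume. A quick check:

```python
# Fraction of an expanding civilization's light cone (sphere of radius c*t)
# that lies OUTSIDE its settlement front (sphere of radius v*t).
# Volumes scale as radius cubed, so the inside fraction is v**3.
def fraction_outside_front(v: float) -> float:
    """v is the expansion speed as a fraction of lightspeed."""
    return 1 - v**3

print(fraction_outside_front(0.8))   # ~0.49 -- about half, as stated
print(fraction_outside_front(0.99))  # only ~3% of the cone left outside
```

So the "no warning" argument really does require expansion speeds quite close to c; at 0.8c, observers in roughly half the light cone's volume can see the civilization coming.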

[spoilers for minor details of later chapters of the book] Isn't the book at least a little self-aware about this plot hole? If I recall correctly, the book eventually reveals (rot13 from here on out)...

gung gur fcnpr cebtenz cyna jnf rffragvnyyl qrfvtarq nf n CE fghag gb qvfgenpg/zbgvingr uhznavgl, juvpu vaqrrq unq ab erny cebfcrpg bs jbexvat (gbb srj crbcyr, gbb uneq gb qbqtr zrgrbef, rirelguvat arrqf gb or 100% erhfrq, rgp, yvxr lbh fnl). Juvyr gur fcnpr cebtenz jnf unccravat, bgure yrff-choyvp rssbegf (eryngrq gb ahpyrne fhoznevarf, qvttvat haqreteb... (read more)

2Yair Halberstadt
Vg pna'g or gur pnfr gung jnf gur gehyr rssbeg, fvapr gur cerfvqrag pubfr gb tb gb fcnpr vafgrnq bs gur bgure rssbeg.

In addition to the researchy implications for topics like deception and superpersuasion and so forth, I imagine that results like this (although, as you say, unsurprising in a technical sense) could have a huge impact on the public discussion of AI (paging @Holly_Elmore and @Joseph Miller?) -- the general public often seems to get very freaked out about privacy issues where others might learn their personal information, demographic characteristics, etc.

In fact, the way people react about privacy issues is so strong that it usually seems very overblown to me... (read more)

1eggsyntax
Agreed. I considered releasing a web demo where people could put in text they'd written and GPT would give estimates of their gender, ethnicity, etc. I built one, and anecdotally people found it really interesting. I held off because I can imagine it going viral and getting mixed up in culture war drama, and I don't particularly want to be embroiled in that (and I can also imagine OpenAI just shutting down my account because it's bad PR). That said, I feel fine about someone else deciding to take that on, and would be happy to help them figure out the details -- AI Digest expressed some interest but I'm not sure if they're still considering it.
1Joseph Miller
Nice idea. Might try to work it into some of our material.

So, sure, there is a threshold effect in whether you get value from bike lanes on your complex journey from point A to point G.  But other people throughout the city have different threshold effects:

  • Other people are starting and ending their trips from other points; some people are even starting and ending their trip entirely on Naito Parkway.
  • People have a variety of different tolerances for how much they are willing to bike in streets, as you mention.
  • Even people who don't like biking in streets often have some flexibility.  You say that you pers
... (read more)

Nice; Colorado recently passed a statewide law that finally does away with a similar "U+2" rule in my own town of Fort Collins (as well as other such rules in Boulder and elsewhere). To progress!

I don't understand this post, because it seems to be parodying Anthropic's Responsible Scaling Policies (i.e., saying that the RSPs are not sufficient), but the analogy to nuclear power is confusing: IMO nuclear power has in fact been harmfully over-regulated, such that advocating for a "balanced, pragmatic approach to mitigating potential harms from nuclear power" does actually seem good compared to the status quo, where society hugely overreacted to the risks of nuclear power without properly weighing the costs against the benefits.

Maybe you c... (read more)

I will definitely check out that youtube channel!  I'm pretty interested in mechanism design and public-goods stuff, and I agree there are a lot of good ideas there.  For instance, I am a huge fan of Georgism, so I definitely recognize that going all-in on the "libertarian individualist approach" is often not the right fit for the situation!  Honestly, even though charter cities are somewhat an intrinsically libertarian concept, part of the reason I like the charter city idea is indeed the potential for experimenting with new ways to manage ... (read more)

Yup, there are definitely a lot of places (like 99+% of places, 99+% of the time!) which aren't interested in a given reform -- especially one as uniquely big and experimental as charter cities.  This is why in our video we tried to focus on political tractability as one of the biggest difficulties -- hopefully we don't come across as saying that the world will instantly be tiled over with charter cities tomorrow!  But some charter cities are happening sometimes in some places -- in addition to the examples in the video, Zambia is pretty friendly ... (read more)

Thanks, this is exciting and inspiring stuff to learn about!

I guess another thing I'm wondering about, is how we could tell apart genes that impact a trait via their ongoing metabolic activities (maybe metabolic is not the right term... what I mean is that the gene is being expressed, creating proteins, etc, on an ongoing basis), versus genes that impact a trait via being important for early embryonic / childhood development, but which aren't very relevant in adulthood.  Genes related to intelligence, for instance, seem like they might show up with po... (read more)

3GeneSmith
Yes, this is an excellent question. And I think it's likely we could (at least for the brain) thanks to some data from this study that took brain biopsies from individuals of varying stages of life and looked at the transcriptome of cells from different parts of the brain. My basic prior is that the effect of editing is likely to be close to the same as if you edited the same gene in an embryo iff the peak protein expression occurs in adulthood. Though there aren't really any animal experiments that I know of yet which look at how the distribution of effect sizes vary by trait and organ.

Is there a plausible path towards gene therapies that edit dozens, hundreds, or thousands of different genes like this? I thought people were worried about off-target errors, etc? (Or at least problems like "you'll have to take 1000 different customized doses of CRISPR therapy, which will be expensive".) So my impression is that this kind of GWAS-inspired medicine would be most impactful with whole-genome synthesis? (Currently super-expensive?)

To be clear I agree with the main point this post is making about how we don't need animal models, etc, to do medicine if we have something that we know works!

7GeneSmith
Yes, there is. I've been working on a post about this for the last few months and hope to post something much more comprehensive soon.

Off-targets are a potential issue, though they're less of an issue if you target non-coding regions that aren't directly translated into proteins. The editing tools have also improved a lot since the original CRISPR publications back in 2012. Base editors and prime editors have indel rates like 50-300x lower than original CRISPR.

Base editors and prime editors can do simultaneous edits in the same cell. I've read a paper where the authors did 50 concurrent base edits (though it was in a cell type that is easier than average to edit). Scaling concurrent editing capabilities is the very first thing I want to focus on.

Also, if your delivery vector doesn't trigger an adaptive immune response (or end up being toxic for some other reason), you can redose someone a few times and make new edits with each round. If we can solve delivery issues, the dosing would be as simple as giving someone an IV injection.

I'm not saying these are simple problems. Solving all of them is going to be hard. But many of the steps have already been done independently in one research paper or another.

No. You don't need to be able to synthesize a genome to make any of this work. You can edit the genome of a living person.
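One way to see why those lower indel rates matter so much for many-edit therapies: under a toy independence assumption (illustrative numbers only, not measured error rates for any real editor), the chance of at least one unwanted edit grows quickly with the number of edits.

```python
# Toy model: if each edit independently causes an unwanted error with
# probability p, then across n edits P(at least one error) = 1 - (1 - p)^n.
def p_any_error(p_per_edit: float, n_edits: int) -> float:
    return 1 - (1 - p_per_edit) ** n_edits

# Illustrative only: a hypothetical 1% per-edit error rate vs. one 300x lower.
risky = p_any_error(0.01, 100)        # roughly a 63% chance of >=1 error
safer = p_any_error(0.01 / 300, 100)  # well under 1%
```

The real situation is messier (errors aren't independent, and not all off-targets are equally harmful), but the scaling intuition is why per-edit error rates dominate the feasibility of hundreds of concurrent edits.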

(this comment is kind of a "i didn't have time to write you a short letter so I wrote you a long one" situation)

re: Infowar between great powers -- the view that China+Russia+USA invest a lot of efforts into infowar, but mostly "defensively" / mostly trying to shape domestic opinion, makes sense.  (After all, it must be easier to control the domestic media/information landscape!)  I would tend to expect that doing domestically-focused infowar stuff at a massive scale would be harder for the USA to pull off (wouldn't it be leaked? wouldn't it be ... (read more)

(Copied from the EA Forum for the benefit of lesswrongers following the discussion here)

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated soci... (read more)

1Mikhail Samin
I appreciate this comment. Qualia (IMO) certainly is "information processing": there are inputs and outputs. And it is a part of a larger information-processing thing, the brain. What I'm saying is that there's information processing happening outside of the qualia circuits, and some of the results of the information processing outside of the qualia circuits are inputs to our qualia.

Well, how do you know that visual information processing produces qualia? You can match the algorithms implemented by other humans' brains to algorithms implemented by your brain, because all of you talk about subjective experience; how do you, inside your neural circuitry, make an inference that a similar thing happens in neurons that just process visual information? You know you have subjective experience, self-evidently. You can match the computation run by the neural circuitry of your brain to the computation run by the neural circuitry of other humans: since they talk about subjective experience, you can expect this to be caused by similar computation. This is valid.

Thinking that visual information processing is part of what makes qualia (i.e., there's no way to replace a bunch of your neurons with something that outputs the same stuff without first seeing and processing something, such that you'll experience seeing as before) is something you can make theories about, but it is not a valid inference; you don't have a way of matching the computation of qualia to the whole of your brain. And how can you match it to matrix multiplications that don't talk about qualia, did not have evolutionary reasons for experience, etc.? Do you think an untrained or a small convolutional neural network experiences images to some extent, or only large and trained? Where does that expectation come from?

I'm not saying that qualia is solved. We don't yet know how to build it, and we can't yet scan brains and say which circuits implement it. But some people seem more confused than warranted

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple crea... (read more)
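The schooling example above can be made concrete: here is a minimal sketch (a pure toy model, not a claim about real fish cognition) in which each simulated "fish" follows a single mechanical rule, and school-like "social" behavior still emerges with no inner experience modeled anywhere.

```python
import random

random.seed(0)

# Each "fish" is just a position on a line; its only rule is to drift
# toward the average position of the group.
def step(positions, rate=0.1):
    center = sum(positions) / len(positions)
    return [x + rate * (center - x) for x in positions]

school = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(50):
    school = step(school)

# The group tightens into a "school" -- cooperative-looking behavior
# produced by a trivial, clearly non-conscious update rule.
spread = max(school) - min(school)
```

Real schooling models (e.g. boids-style rules) add separation and alignment terms, but the point stands: outwardly "social" behavior doesn't require any machinery for subjective experience.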

0Mikhail Samin
Both (modeling stuff about others by reusing circuits for modeling stuff about yourself without having experience; and having experience without modelling others similarly to yourself) are possible, and the reason why I think the suggested experiment would provide indirect evidence is related to the evolutionary role I consider qualia to possibly play. It wouldn't be extremely strong evidence, and certainly wouldn't be proof, but it'd be enough evidence for me to stop eating fish that have these things.

The studies about optimistic/pessimistic behaviour tell us nothing about whether these things experience optimism/pessimism, as they are an adaptation an RL algorithm would implement without the need to implement circuits that would also experience these things, unless you can provide a story for why circuitry for experience is beneficial or a natural side effect of something beneficial.

One of the points of the post is that any evidence we can have, except for what we have about humans, would be indirect, and people call things evidence for confused reasons. Pain-related behaviour is something you'd see in neural networks trained with RL, because it's good to avoid pain, and you need a good explanation for how exactly it can be evidence for qualia.

(Copied from EA Forum)
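That last point can be illustrated directly (a generic tabular Q-learning sketch, not anything from the post itself): a few dozen lines of standard RL produce robust pain-avoiding behavior, with no component anyone would identify as experience.

```python
import random

random.seed(0)

# A 5-cell corridor: cell 0 delivers "pain" (reward -1), cell 4 food (+1).
# Tabular Q-learning; the agent starts each episode in the middle cell.
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
for _ in range(500):
    s = 2
    for _ in range(20):
        if random.random() < 0.1:            # occasional exploration
            a = random.choice((-1, 1))
        else:                                # otherwise act greedily
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = min(4, max(0, s + a))
        r = -1.0 if s2 == 0 else (1.0 if s2 == 4 else 0.0)
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2
        if s in (0, 4):                      # episode ends at pain or food
            break

# The learned policy avoids the "pain" cell: from the cell next to it,
# moving away has a higher learned value than moving toward it.
avoids_pain = q[(1, 1)] > q[(1, -1)]
```

So observed pain-avoidance alone can't discriminate between "has qualia" and "runs an RL-like update"; that is exactly the evidential gap the comment is pointing at.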

Hi Trevor!  I appreciate this thread of related ideas that you have been developing about intelligence agencies, AI-augmented persuasion techniques, social media, etc.

  • It seems important to "think ahead" about how the power-struggle over AI will play out as things escalate to increasingly intense levels, involving eg national governments and militaries and highly-polarized political movements and etc.
  • Obviously if some organization was hypercompetent and super-good at behind-the-scenes persuasion, we wouldn't really know about it!  So it is hard to
... (read more)
9trevor
Thanks for putting the work into this attempt to falsify my model! I read all of it, but I will put work into keeping my response concise and helpful.

1. Quoting ChristianKl quoting Michael Vassar: "We didn't have an atomic war as people expected after WWII, but we had an infowar and now most people are like zombies when it comes to their ability to think and act independently". Russia and China have this technology too, but I currently think that all three countries prioritize defensive uses. Homogenizing people makes their thoughts and behavior more predictable and therefore requires less FLOPS, lower risk of detection/suspicion, and higher success rate; polarization offers this homogenization, as well as the cultural intensity/creativity that Jan Kulveit referenced.
2. In addition to international infowar, there are also domestic elites, many of whom are paying close attention to things they care about. The US government exists atop ~6 million people above 130 IQ with varying values, which keeps things complicated.
3. Generally though, they have learned from many of the mistakes of Vietnam and the War on Terror. Riding the wave is not an improbable outcome in my model, especially considering that individual humans and groups are less predictable/controllable when OOD.
4. It's also absolutely possible that the Left still bears ugly scars from the infowars surrounding Vietnam and the War on Terror.

There is a pretty low bar for running SGD on secure user data, and social media news feed data/algorithms/combinations of posts, in order to see what causes people to stay versus leave. That means that their systems, by default, throw outrageously clumsy failed influence attempts at you, because those tend to cause users to feel safe and return; when in reality they are not safe and shouldn't return. Meanwhile, influencing people in ways that they notice will make them leave and get RLHF'd out quickly. This is the most probable outcome, they probably couldn't prev

Thanks for writing this post, I 100% share your sentiment and appreciate the depth with which you've explored this topic, including some of the political considerations.

Here are some other potentially-relevant case studies of people doing similar-ish things, trying to make the world a better place while navigating touchy political fears related to biotech:

  • The "Enhanced Games" is organizing an alternative to the Olympic games where doping and other human enhancement technologies will be allowed.  Naturally, they try to put a heavy emphasis on the impor
... (read more)
3Metacelsus
I like your idea of exploring "a more detailed breakdown of who exactly might be opposed, and for what reasons.  And then try and figure out which of these sources actually matter the most / are the most real!" It reminds me of "Is that your true rejection?" https://www.lesswrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection
2Metacelsus
>Same deal for the project to revive Woolly Mammoths -- the awesome documentary "We Are As Gods" is basically a PR campaign for the righteousness of this cause, and a good portrait of a similar movement which is farther along in the PR pipeline.   Unfortunately, on this one the hype has outpaced the science. See: https://www.lesswrong.com/posts/ihq24ri5g5svwwmjx/why-i-m-skeptical-of-de-extinction

Yeah, I am interested in this from the "about to have an infant" perspective (my wife is almost 20 weeks pregnant).  Interestingly, this means she will be able to get the flu, covid, and newly-approved RSV shots.

  • Presumably you want to space out the vaccines a lot -- I would guess two weeks at least, but maybe more?
  • Is there a difference between when covid, flu, and RSV peak in activity, which might justify getting one before the other?  (The RSV vaccine is apparently only approved for weeks 32 - 36 of pregnancy, so we will at least have to wait
... (read more)

Good point that rationalism is over-emphasizing the importance of Bayes theorem in a pretty ridiculous way, even if most of the individual statements about Bayes theorem are perfectly correct.  I feel like if one was trying to evaluate Eliezer or the rationalist community on some kind of overall philosophy scorecard, there would be a lot of situations like this -- both "the salience is totally out of whack here even though it's not technically /wrong/...", and "this seems like a really important and true sentiment, but it's not really the kind of thin... (read more)
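For concreteness, the theorem whose salience is being debated is itself a one-liner; a minimal sketch with made-up numbers:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded over H.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Classic illustration: 1% base rate, 90% sensitivity, 5% false-positive
# rate.  A positive test leaves the posterior at only about 15%.
p = posterior(prior=0.01, p_e_given_h=0.9, p_e_given_not_h=0.05)
```

Which is part of the oddity being discussed: the math everyone agrees on fits in five lines, so the disagreement is really about how much epistemological weight those five lines should carry.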

Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the sequences:

  • Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc.  (IMO he's right about this stuff.  But idk if this counts as controversial enough within academia?)
  • Probably a lot of assorted philosophy-of-science stuff about the nature of evidence, the idea that high-caliber rationality ought to operate "faster
... (read more)
0TAG
His claims about Bayes go far beyond "better than frequentism". He also claims it can be used as the sole basis of epistemology, and that it is better than "science". Bayes of course is not a one-stop shop for epistemology, because it can't generate hypotheses, or handle paradigm shifts. It's also far too complex to use in practice, for informal decision making. Most "Bayesians" are deceiving themselves about how much they are using it. Almost his only argument for "science wrong, Bayes right" is the supposedly "slam dunk" nature of MWI -- which, oddly, you don't mention directly.

Talk of emergence without any mechanism of emergence is bunk, but so is talk of reductionism without specific reductive explanations. Which is a live issue, because many rationalists do regard reductionism as necessary and a priori. Since it isn't, other models and explanations are possible -- reduction isn't necessary, so emergence is possible. Is that good or bad?

That's obviously true of a subset of claims, e.g. what counts as money, how fast you are allowed to drive. It would be false if applied to everything, but it is very difficult to find a postmodernist who says so in so many words.

I have never discerned a single clear theory of ethics or metaethics in Yudkowsky's writing. The linked article does not make a clear commitment to either realism or anti-realism AFAICS. IMO he has as many as four theories: 0. The Argument Against Realism, maybe. 1. The Three Word Theory (morality is values). 2. Coherent Extrapolated Volition. 3. Utilitarianism of Some Variety.

The argument for atheism from Solomonoff induction is bizarre. SI can only work in an algorithmic universe. Inasmuch as it is considering hypotheses, it is considering which algorithm is actually generating observed phenomena. It can't consider and reject any non-algorithmic hypothesis, including non-algorithmic (non-Turing-computable) physics. Rationalists believe that SI can resolve theology in the direction of a

I suggest maybe re-titling this post to:
"I strongly disagree with Eliezer Yudkowsky about the philosophy of consciousness and decision theory, and so do lots of other academic philosophers"

or maybe:
"Eliezer Yudkowsky is Frequently, Confidently, Egregiously Wrong, About Metaphysics"

or consider:
"Eliezer's ideas about Zombies, Decision Theory, and Animal Consciousness, seem crazy"

Otherwise it seems pretty misleading / clickbaity (and indeed overconfident) to extrapolate from these beliefs, to other notable beliefs of Eliezer's -- such as cryonics, quantum mec... (read more)

0omnizoid
Philosophy is pretty much the only subject that I'm very informed about.  So as a consequence, I can confidently say Eliezer is egregiously wrong about most of the controversial views I can fact-check him on.  That's . . . worrying. 

This video is widely believed to be a CGI fake.

1Runaway Smyle
Reinforcing this point: I and many acquaintances have looked into the origins of this video over the past two weeks and came up with no substantial proof of its validity.

I think "Why The West Rules", by Ian Morris, has a pretty informative take on this.  The impression I got from the book was that gradually accruing technologies/knowledge, like the stuff you mention, is slowly accruing in the background amid the ups and downs of history, and during each peak of civilizational complexity (most notably the Roman empire, and the medieval-era Song Dynasty in china, and then industrial-era Britain) humanity basically gets another shot-on-goal to potentially industrialize.

Britain had a couple of lucky breaks -- cheap and ab... (read more)

Thanks!  Apparently I am in a mood to write very long comments today, so if you like, you can see some thoughts about addressing potential objections / difficulties in a response I made to a comment on the EA Forum version of this post.

Thanks for catching that about Singaporeans!

Re: democracy, yeah, we debated how exactly to phrase this.  People were definitely aware of the democracies of ancient Greece and Rome, and democracy was sometimes used on a local level in some countries, and there were sometimes situations where the nobles of a country had some sway / constraints over the king (like with the Magna Carta).  But the idea of really running an entire large country on American-style democracy seems like it was a pretty big step and must've seemed a bit crazy at the time...... (read more)

Probably the charter city with the most publicity is Prospera, so you could do stuff like:

  • read a bunch of hostile news articles complaining about how Prospera is neocolonialism and might be secretly hoping to confiscate people's land
  • read stuff put out by the Prospera organization about how they are actually fanatics about the importance of property rights and would never confiscate anyone's land, and how in general they are trying to be responsible and nice and create lots of positive externalities for neighboring communities (jobs, construction, etc)
  • read
... (read more)
2tailcalled
Hm, I should read a bit up on Prospera then. The extended history behind it sounds wild, like with the coup and everything, but I haven't made heads or tails of it yet. Edit: Made a separate thread for it: Prospera-dump

I think one problem with this concept is that the "restrictions" might turn out to be very onerous, preventing the good guys (using "restrictions") from winning a complete unilateral victory over everyone else.  One of the major anticipated benefits of superhuman AI systems is the ability to work effectively even on vague, broad, difficult tasks that span multiple different domains.  If you are committed to creating a totally air-gapped high-security system, where you only hand your AI the "smallest subdividable subtask" and only give your AI ac... (read more)

3[anonymous]
I agree with this criticism. What you have done in this design is to create a large bureaucracy of AI systems who essentially will not respond when anything unexpected happens (input outside training distribution) and who are inflexible; anything other than the task assigned at the moment is "not my job/not my problem".

They can have superintelligent subtask performance, and the training set can include all available video on earth so they can respond to any situation they have ever seen humans performing, so it's not as inflexible as it might sound. This is going to work extremely well I think compared to what we have now. But yes, if this doesn't allow you to get close to the limits of what intelligence allows you to do, "unrestricted" systems might win. It depends.

As a systems engineer myself, I don't see unrestricted systems going anywhere; the issue isn't that they could be cognitively capable of a lot, it's that in the near term when you try to use them they will make too many mistakes to trust them with anything that matters. And they are uncorrectable errors: without a structure like described there is a lot of design coupling, and making the system better at one thing with feedback comes at a cost elsewhere, etc.

It's easy to talk about an AI system that has some enormous architecture with a thousand modules, more like a brain, and it learns online from all the tasks it is doing. Hell, it has a module editor so it can add additional modules whenever it chooses. But... how do you validate or debug such a system? It's learning from all inputs; it's a constantly changing technological artifact. In practice this is infeasible: when it makes a catastrophic error there is nothing you can do to fix it. Any test set you add to train it on the scenario it made a mistake on is not guaranteed to fix the error, because the system is ever-evolving...

Forget alignment, getting such a system to reliably drive a garbage truck would be risky. I can't deny such a system mig

[Cross-posting my comment from the EA Forum]

This post felt vague and confusing to me.  What is meant by a "game board" -- are you referring to the world's geopolitical situation, or the governance structure of the United States, or the social dynamics of elites like politicians and researchers, or some kind of ethereum-esque crypto protocol, or internal company policies at Google and Microsoft, or US AI regulations, or what?

How do we get a "new board"?  No matter what kind of change you want, you will have to get there starting from the current s... (read more)

1Prometheus
[crossposting my reply] Thank you for taking the time to read and critique this idea. I think this is very important, and I appreciate your thoughtful response. Regarding how to get current systems to implement/agree to it, I don't think that will be relevant longterm. The mechanisms current institutions use for control I don't think can keep up with AI proliferation. I imagine most existing institutions will still exist, but won't have the capacity to do much once AI really takes off. My guess is, if AI kills us, it will happen after a slow-motion coup. Not any kind of intentional coup by AIs, but from humans just coup'ing themselves because AIs will just be more useful. My idea wouldn't be removing or replacing any institutions, but they just wouldn't be extremely relevant to it. Some governments might try to actively ban use of it, but these would probably be fleeting, if the network actually was superior in collective intelligence to any individual AI. If it made work economically more useful for them, they would want to use it. It doesn't involve removing them, or doing much to directly interfere with things they are doing. Think of it this way, recommendation algorithms on social media have an enormous influence on society, institutions, etc. Some try to ban or control them, but most can still access them if they want to, and no entity really controls them. But no one incorporates the "will of twitter" into their constitution. The game board isn't any of the things you mention. All the things you mention I don't think have the capacity to do much to change the board. The current board is fundamentally adversarial, where interacting with it increases the power of other players. We've seen this with OpenAI, Anthropic, etc. The new board would be cooperative, at least at a higher level. How do we make the new board more useful than the current one? My best guess would be economic advantage of decentralized compute. We've seen how fast the OpenSource community