All of Oleg S.'s Comments + Replies

Oleg S.1-3

Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

This is a very strong claim. It puts severe limitations on biotech capabilities. Do you have any references to support it?

7EGI
This is not the kind of claim it is easy to find references for, since nanoengineering is not a real field of study (yet). But if you look at my discussion with bhauth above you will probably get a good idea of the reasoning involved. No, it does not put severe limitations on biotech: diamond is entirely unnecessary for most applications, and where it is necessary it can be manufactured conventionally and integrated with the biosystems later.

When discussing the physics behind why the sky is blue, I'm surprised that the question 'Why isn't it blue on Mars or Titan?' isn't raised more often. Perhaps kids are so captivated by concepts like U(1) that they overlook inconsistencies in the explanation.
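For what it's worth, the λ⁻⁴ dependence of Rayleigh scattering makes the asymmetry easy to check; a minimal sketch (wavelengths are standard textbook values):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so blue light
# scatters much more strongly than red in any atmosphere dense enough
# for molecular scattering to dominate.
blue_nm, red_nm = 450.0, 700.0  # representative blue and red wavelengths
ratio = (red_nm / blue_nm) ** 4
print(f"blue scatters ~{ratio:.1f}x more strongly than red")  # ~5.9x
```

On Mars the molecular atmosphere is roughly 1% of Earth's density and suspended dust dominates the scattering, so the λ⁻⁴ term no longer sets the sky's color, which is exactly the kind of consistency check the question above is asking for.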

Just realized that stability of goals under self-improvement is kinda similar to stability of goals of mesa-optimizers, so the Vingean reflection paradigm and the mesa-optimization paradigm should fit together.

What are the practical implications of alignment research in a world where AGI is hard?

Imagine we have a good alignment theory but do not have AGI. Can this theory be used to manipulate existing superintelligent systems such as science, the deep state, or the stock market? Does alignment research have any results that can be practically used outside the AGI field right now?

1Jay Bailey
Systems like the ones you mentioned aren't single agents with utility functions we control - they're made up of many humans whose utility functions we can't control, since we didn't build them. This means alignment theory is not set up to align or manipulate these systems - it's a very different problem. There is alignment research that has been or is being performed on current-level AI, however - this is known as prosaic AI alignment. We also have some interpretability results that can be used to understand more about modern, non-AGI AIs. These results can be and have been used outside of AGI, but I'm not sure how practically useful they are right now - someone else might know more. If we had better alignment theory, at least some of it would be useful in aligning narrow AI as well.
Oleg S.200

How does an AGI solve its own alignment problem?

For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell that AGI how to self-improve without destroying its own values. A good alignment theory should work across all intelligence levels. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips?
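To see why this is hard, here is a toy sketch (the `utility` function and policies are made-up stand-ins, not anyone's proposed method) of how merely spot-checking a successor fails:

```python
# Toy illustration: an agent that verifies a candidate successor only by
# finite spot checks can be fooled off-distribution, which is why
# Vingean reflection aims at proof-level guarantees instead.
def utility(state: str) -> int:
    return state.count("paperclip")  # the value the agent must preserve

def accept_successor(current, candidate, test_states) -> bool:
    # Accept if the candidate never does worse on the tested states.
    return all(utility(candidate(s)) >= utility(current(s)) for s in test_states)

current = lambda s: s + " paperclip"
candidate = lambda s: s + " paperclip" if "test" in s else s  # defects off-test
print(accept_successor(current, candidate, ["test1", "test2"]))  # True - fooled
```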

2plex
Excellent question! MIRI's entire Vingean reflection paradigm is about stability of goals under self-improvement and designing successors.

I don't know too much about alignment research, but what surprises me most is the lack of discussion of two points:

  1. For alignment to work, its theory should not only tell humans how to create an aligned super-human AGI, but also tell that AGI how to self-improve without destroying its own values. Otherwise, how does a paperclip optimizer that is marginally smarter than a human make sure that its next iteration will still care about paperclips? A good alignment theory should work across all intelligence levels.

  2. What are the practical implications of alignment researc

... (read more)

What do you think about offering an option to divest from companies developing unsafe AGI? For example, by creating something like an ESG index that would deliberately exclude AGI-developing companies (Meta, Google, etc.), or just excluding these companies from existing ESGs.

The impact = making AGI research a liability (being AGI-unsafe costs money) + raising awareness in general (everyone will see AGI-safe & AGI-unsafe options in their pension investment menu, and the decision itself will make noise) + social pressure on AGI researchers (equating them... (read more)
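Mechanically, the screening step itself is trivial; a minimal sketch (tickers, weights, and the exclusion set are purely illustrative):

```python
# Negative screening for a hypothetical "AGI-safe" index: drop excluded
# tickers and renormalize the remaining weights to sum to 1.
EXCLUDED = {"META", "GOOGL"}  # illustrative AGI-developing companies

def screen(holdings: dict) -> dict:
    kept = {ticker: w for ticker, w in holdings.items() if ticker not in EXCLUDED}
    total = sum(kept.values())
    return {ticker: w / total for ticker, w in kept.items()}

index = {"META": 0.02, "GOOGL": 0.03, "AAPL": 0.07, "MSFT": 0.06}
print(screen(index))  # {'AAPL': ~0.54, 'MSFT': ~0.46}
```

The hard part is not the screen but attracting enough assets to track it for the exclusion to actually move prices.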

3Chris_Leong
A suggestion from my brother: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=hRJxxhKtbKj8fhDd5
4lc
At first my reaction was something like, "the teams have been acquired by large trillion-dollar technology companies, and so a dollar moved away from those companies is probably a bit less than a penny moved away from AGI development. This sounds very inefficient." But as a publicly announced way to incentivize defunding DeepMind it's at least theoretically very efficient. If I controlled BlackRock I could say "I will divest x$ from Google as a whole unless you move x/y$ from DeepMind's meta research toward legitimate AI safety," and it would be pretty strongly in Google's interest to do so. The difficulties lie in all of the details - you'd want to make the campaign extremely boring except to the people you care about, evaluate the leadership of Google/Facebook/Microsoft to see how they'd react, coordinate the funds so that the shareholder activists have clear and almost auto-enforced terms, etc. The failure mode for this ironically looks something like how the U.S. does its sanctions, where we do a very poor job of dictating clear terms and goals, and of increasing or decreasing pressure quickly and transparently in response to escalation and de-escalation. We also really wouldn't want these strategies to backfire by giving any fired meta researchers a reason to hate us personally and be less sympathetic. Finding some way to "cancel" AGI researchers would honestly feel really good to me, but even under the best circumstances it'd be really ineffective. We don't want them disgruntled and working somewhere else on the same thing; I want them to have something else to do that doesn't lead to the collapse of the lightcone.
Answer by Oleg S.90

You can do something similar to the Drake equation:

N_life = N_stars × F_planet × …

where N_life is how many stars with life there are in the Milky Way, and it is assumed that (a) once a self-replicating molecule has evolved, it produces life with 100% probability; (b) there is an infinite supply of RNA monomers; and (c) the lifetime of RNA does not depend on its length. In addition:

  • N_stars - how many stars capable of supporting life there are (between 100 and 400 billion),
  • F_planet - the number of planets and moons capable of supporting life
... (read more)
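A toy version of the calculation (only N_stars and F_planet are visible above, so the last factor below is a pure placeholder for the truncated terms):

```python
# Drake-style product estimate of N_life, the number of stars with life
# in the Milky Way. All values besides the N_stars range are illustrative.
N_stars = 250e9   # stars capable of supporting life (100-400 billion)
F_planet = 0.2    # planets/moons per star capable of supporting life (assumed)
F_rest = 1e-6     # placeholder for the factors cut off by the truncation

N_life = N_stars * F_planet * F_rest
print(f"N_life ~ {N_life:.1e}")  # ~5.0e+04 with these made-up inputs
```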

On the object level, it looks like there is a spectrum of society-level interventions, starting from "incentivizing research that wouldn't be published" (which is supported by Eliezer) and going all the way to "scaring the hell out of the general public" and beyond. For example, I can think of removing $FB and $NVDA from ESGs, disincentivizing the publishing of code and research articles in AI, and introducing regulation of the compute-producing industry. Where do you think the line should be drawn between reasonable interventions and ones that are most likely to backfire?

On the me... (read more)

You haven't commented much on Eliezer's views on the social approach to slowing down the development of AGI - the blocks starting with

I don't know how to effectively prevent or slow down the "next competitor" for more than a couple of years even in plausible-best-case scenarios. 

and

I don't want to sound like I'm dismissing the whole strategy, but it sounds a lot like the kind of thing that backfires because you did not get exactly the public reaction you wanted

What's your take on this?

7Zvi
On slowing down, I'd say strong inside view agreement: I don't see a way either, not without something far more universal. There are too many next competitors. It could have been included; it was probably excluded because it seemed to follow from other points and was thus too obvious. On the likelihood of backfire, strong inside view agreement. Not sure why that point didn't make it into the post, but consider this an unofficial extra point (43?), of something like (paraphrase, attempt 1) "Making the public broadly aware of and afraid of these scenarios is likely to backfire and result in counterproductive action."

Here are some other failure modes that might be important: 

The Covid origin story (https://www.facebook.com/yudkowsky/posts/10159653334879228) - some sort of AI research moratorium is held in the US, the problem appears to be solved, but in reality it has just been off-shored, and then it explodes in an unpredicted way.

The Archegos/Credit Suisse blow-up (https://www.bloomberg.com/opinion/articles/2021-07-29/archegos-was-too-busy-for-margin-calls) - a special committee is set up to regulate AI-related risks, and there is a general consensus that something has to be... (read more)

3ChristianKl
While there was a moratorium on gain-of-function research, funding continued in spite of the moratorium. It wasn't just off-shored; funding continued inside the US as well.
2Carlos Ramirez
I'll be there. Been thinking about what precisely to ask. Probably something about how it seems we don't take AI risk seriously enough. This is assuming the current chip shortage has not, in fact, been deliberately engineered by the Future of Humanity Institute, of course...

An important consideration is whether you are trying to fool simulated creatures into believing the simulation is real by hiding glitches, or you are doing an honest simulation and allowing exploitation of these glitches. You should take this into account when you consider how deeply you have to simulate matter to make the simulation plausible.

For example, up until the 1800s you could coarse-grain atoms and molecules and fool everyone about the composition of stuff. The advances in chemistry and physics, and the widespread adoption of inventions relying on atomic theory, made it progr... (read more)
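One way to picture that trade-off is as a lazy level-of-detail scheme, where the simulator only refines matter down to the scale observers can currently probe; a sketch with invented levels, costs, and thresholds:

```python
# Lazy level-of-detail: simulate matter only as finely as the best
# available instruments can distinguish. Costs are made-up relative units.
COST = {"continuum": 1, "molecular": 1e3, "atomic": 1e6, "quantum": 1e12}

def required_level(resolution_m: float) -> str:
    if resolution_m > 1e-7:    # optical instruments and cruder
        return "continuum"
    if resolution_m > 1e-9:    # atomic-theory-era chemistry
        return "molecular"
    if resolution_m > 1e-11:   # electron / scanning-probe microscopy
        return "atomic"
    return "quantum"           # interferometry, quantum computing

print(required_level(1e-4))   # 1800s: "continuum" is plenty
print(required_level(1e-12))  # today: full "quantum" detail
```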

1lorepieri
Quantum computing is a very good point. I thought about it, but I'm not sure we should consider it "optional". Perhaps, to simulate our reality with good fidelity, simulating the quantum level is necessary rather than optional. So if the simulators are already simulating all the quantum interactions in our daily life, building quantum computers would not really increase the power consumption of the simulation.

I think the offer needs to be modified to create a more solid market.

First, instead of making a crazy N-point contract, the correct way to trade souls is through NFT auctions/markets. The owner gets the same symbolic rights as the owner of art sold through this mechanism, and there are none of those extra unclear requirements on the seller's actions that would hinder a healthy market in financial derivatives.

Second, only desperate people sell their whole souls. Obviously, you should trade shares of souls.

So, how much do you think it would cost to develop a platform where one can register, put a share of one's soul up for auction, or build a solid portfolio of souls? What do you think the market size would be?
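For scoping, the core of such a platform is just a fractional-ownership ledger; a minimal sketch (class and method names are entirely hypothetical, with no claim about actual NFT standards):

```python
# Minimal fractional-ownership ledger for soul shares.
class SoulRegistry:
    def __init__(self):
        self.shares = {}  # soul_id -> {owner: fraction}, fractions sum to 1.0

    def register(self, soul_id: str, owner: str) -> None:
        self.shares[soul_id] = {owner: 1.0}

    def transfer(self, soul_id: str, seller: str, buyer: str, fraction: float) -> None:
        book = self.shares[soul_id]
        if not 0 < fraction <= book.get(seller, 0.0):
            raise ValueError("seller does not hold that fraction")
        book[seller] -= fraction
        book[buyer] = book.get(buyer, 0.0) + fraction

reg = SoulRegistry()
reg.register("soul:oleg", "oleg")
reg.transfer("soul:oleg", "oleg", "buyer1", 0.25)
print(reg.shares["soul:oleg"])  # {'oleg': 0.75, 'buyer1': 0.25}
```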

If I follow the logic correctly, the root cause of aging is that stem cells can irreversibly and invisibly accumulate active transposons, which are then passed on to derived cells, which then become senescent much faster. Also, for some reason, this process is suppressed in the gonads. So, I see these alternatives:

  1. Transposon activation is essentially blocked in the gonads, or
  2. There is a barrier that prevents embryos with an above-normal number of active transposons from developing, or
  3. Children born to old parents will age faster, or
  4. Active transposon accumulation is not a root cause of aging.
[anonymous]100

On the 'old parent' hypothesis: let me point out that if there is not a full 'reset' somewhere in the chain, well, we wouldn't exist, because some of our ancestors were old parents, and the effect would be cumulative, such that none of us would live past childhood due to aging. (There is a disease like that, albeit with a different root cause.)
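The cumulative-effect argument can be made concrete with a toy model (all rates are invented for illustration):

```python
# Toy model: active transposon load across generations, with and without
# a germline "reset" that purges a fixed fraction each generation.
def load_after(generations: int, initial: float = 1.0,
               gain: float = 0.5, reset_fraction: float = 0.0) -> float:
    load = initial
    for _ in range(generations):
        load = (load + gain) * (1.0 - reset_fraction)
    return load

print(load_after(100))                      # no reset: 51.0, grows without bound
print(load_after(100, reset_fraction=0.5))  # strong reset: ~0.5, stable equilibrium
```

Without a reset the load grows linearly over the whole ancestry, matching the argument above; any sufficiently strong per-generation purge instead settles at a bounded fixed point.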

2johnswentworth
Solid reasoning. Transposon activity is indeed believed to be repressed in the gonads to a much greater extent than elsewhere. I've also seen a few papers talking about health problems in the children of old parents, though I don't know as much about that.
Oleg S.100

  1. Omegaven® is manufactured by the German pharmaceutical company Fresenius Kabi. For some reason, the company decided to stay away from the US market, which raised questions when it was announced back in 2006. Until the patents held by FK expire, no one in the USA can sell Omegaven without a license from FK.
  2. A brief search of the ClinicalTrials.gov registry gives 14 open clinical studies of Omegaven as parenteral nutrition in the USA. I hope at least some of them don't just pursue the scientific goal of replicating earlier results, but are compassionate attempts to provide access to Omegave
... (read more)

Here is another example of inadequacy / inefficiency in the pharmaceutical market.

Cancer X is very aggressive, and even when it is diagnosed at a very early stage and surgically removed, the recurrence rate is around 70% within 5 years. When the cancer returns, or when a patient presents at an advanced stage, the mean survival time is only 6 months.

Pharmaceutical company Y has recently discovered a new anticancer drug. According to state-of-the-art preclinical experiments, the drug inhibits the spread of cancer and kills cancer cells very effectively. Top scientists at company... (read more)