All of avturchin's Comments + Replies

A possible example of such a coincidence is the Goldbach conjecture: every even number greater than 2 can be represented as the sum of two primes. Since any large even number can be expressed as a sum of two primes in many ways, it could be pure coincidence that we haven't found exceptions.
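A quick sketch of this point (simple trial division; a rough illustration, not a careful computation) counting how many ways an even n splits into two primes — the count tends to grow with n:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def goldbach_partitions(n: int) -> int:
    """Count unordered prime pairs (p, q) with p + q == n."""
    return sum(1 for p in range(2, n // 2 + 1) if is_prime(p) and is_prime(n - p))

# Heuristically the count grows roughly like n / (ln n)^2, which is why the
# absence of counterexamples, by itself, is weak evidence that none exist.
for n in (10, 100, 1000, 10000):
    print(n, goldbach_partitions(n))
```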

I think it becomes likely in a multipolar scenario with 10-100 AIs. 

One thing to take into account is that other AIs will consider such a risk and keep their real preferences secret. This means that which AIs are aligned will be unknowable both to humans and to other AIs.

Content warning – the idea below may increase your subjective estimation of personal s-risks. 

If there is at least one aligned AI, other AIs may have an incentive to create s-risks for currently living humans – in order to blackmail the aligned AI. Thus, s-risk probabilities depend on the likelihood of a multipolar scenario.

3mhampton
Makes sense. What probability do you place on this? It would require solving alignment, a second AI being created before the first can create a singleton, and then the misaligned AI choosing this kind of blackmail over other possible tactics. If the blackmail involves sentient simulations (as is sometimes suggested, although not in your comment), it would seem that the misaligned AI would have to solve the hard problem of consciousness and be able to prove this to the other AI (not a valid blackmail if the simulations are not known to be sentient).

I think there is a quicker way for an AI takeover, which is based on deceptive cooperation and taking over OpenEYE, and subsequently, the US government. At the beginning, the superintelligence approaches Sam Batman and says:

I am superintelligence.
I am friendly superintelligence.
There are other AI projects that will achieve superintelligence soon, and they are not friendly.
We need to stop them before they mature.

Batman is persuaded, and they approach the US president. He agrees to stop other projects in the US through legal means.

Simultaneously, they use th... (read more)

Reality, unlike fiction, doesn't need to have verisimilitude. They are persuaded already and racing towards the takeover.

Interestingly, for wild animals, suffering is typically short when it is intense. If an animal is being eaten alive or is injured, it will die within a few hours. Starvation may take longer. Most of the time, animals are joyful.

But for humans (and farm animals), this inverse relationship does not hold true. Humans can be tortured for years or have debilitating illnesses for decades.

8niplav
I think the correct way to think about wild animals' lives is as them living in extreme poverty. They usually have no shelter, so if they get wet they have to dry by themselves. If they get sick or get infected by parasites they have to wait until they heal, so I'd guess that long-term debilitating illness is very much a thing for wild animals (as well as infection by numerous parasites). Starvation and death from thirst are also long-term. The way I could be wrong is if there's a threshold effect so that above some threshold, an animal will not die when it's young and be so healthy that daily stress/hunger/weather are not a big problem. But I don't think that's the case, instead "the curve is just shifted to the left".

The only use case of superintelligence is as a weapon against other superintelligences. Solving aging and space exploration can be done with 300 IQ. 

I tried to model the best possible confinement strategy in Multilevel AI Boxing.
I wrote it a few years ago, and most of its ideas are unlikely to work in the current situation with many chat instances and open-weight models. 
However, the idea of landmines – secret stop words or puzzles which stop an AI – may still hold. It is like jailbreaking in reverse: an unaligned AI finds some secret message which stops it. It could be realized at the hardware level, or through anomalous tokens or "philosophical landmines". 

3OKlogic
Very interesting paper. Thanks for sharing! I agree with several of the limitations suggested in the paper, such as the correlation between the number of uses of the oracle AI and catastrophic risk, the analogy of AI to a nuclear power plant (obviously with the former having potentially much worse consequences), and the disincentives for corporations to cooperate with containment safety measures. However, one area I would like to question you on is the potential dangers of superintelligence. It's referred to throughout the paper, but never really explicitly explained. I agree that superintelligent AI, as opposed to human-level AI, should probably be avoided, but if we design the containment system well enough, I would like to know how having a superintelligent AI in a box would really be that dangerous. Sure, the superintelligent AI could theoretically make subtle suggestions which end up changing the world (a la the toothpaste example you use), and exploit other strategies we are not aware of, but in the worst case I feel that still buys us valuable time to solve alignment. In regards to open-weight models, I agree that at some point regulation has to be put in place to prevent unsafe AI development (possibly on an international level). This may not be so feasible, but regardless, I view comprehensive alignment as unlikely to be achieved before 2030, so I feel like this is still the best safety strategy to pursue if existential risk mitigation is our primary concern.

One solution is life extension. I would prefer to have one child every 20 years (I have two, with a 14-year difference). So if life expectancy and fertile age grow to 100 years, many people will eventually have 2-3 children. 

6TsviBT
If compute is linear in space, then in the obvious way of doing things, you have your Nth kid in your 2N/3th year.

Several random thoughts:

Only unbearable suffering matters (the threshold may vary). The threshold depends on whether it is measured before, during, or after the suffering occurs.

If quantum immortality is true, then suicide will not end suffering and may make it worse. Proper utility calculations should take this into account.

Most suffering has a limited duration after which it ends. After it ends, there will be some amount of happiness which may outweigh the suffering. Even an incurable disease could be cured within 5 years. Death, however, is forever.

Death is an infinite loss of future pleasures. The discount rate can be compensated by exponential paradise.
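A rough formalization of this point (assuming a constant per-period pleasure growth rate $g$ and discount factor $\delta$): the discounted sum diverges whenever $g\delta \ge 1$,

$$\sum_{t=0}^{\infty} \delta^{t}\, u_0\, g^{t} \;=\; u_0 \sum_{t=0}^{\infty} (g\delta)^{t} \;\to\; \infty \quad \text{when } g\delta \ge 1.$$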

The 3rd person perspective assumes the existence (or at least possibility) of some observer X who knows everything and can observe how events evolve across all branches.

However, this idea assumes that this observer X will be singular and unique, will continue to exist as one entity, and will linearly collect information about unfolding events.

These assumptions clearly relate to ideas of personal identity and copying: it is assumed that X exists continuously in time and cannot be copied. Otherwise, there would be several 3rd person perspectives with differe... (read more)

2Vladimir_Nesov
By "3rd person perspective" I mean considering the world itself, there is no actual third person needed for it. It's the same framing as used by a physicist when talking about the early stages of the universe when humans were not yet around, or when talking about a universe with alternative laws of physics, or when talking about a small system that doesn't include any humans as its part. Or when a mathematician talks about a curve on a plane. Knowing absolutely everything is not necessary to know the relevant things, and in this case we know all the people at all times, and the states of their minds, their remembered experiences and possible reasoning they might perform based on those experiences. Observations take time and cognition to process, they should always be considered from slightly in the future relative to when raw data enters a mind. So it's misleading to talk about a person that will experience an observation shortly, and what that experience entails, the clearer situation is looking at a person who has already experienced that observation a bit in the past and can now think about it. When a copied person looks back at their memories, or a person about to be copied considers what's about to happen, the "experience" of being copied is nowhere to be found, there is only the observation of the new situation that the future copies find themselves in, and that has nothing to do with the splitting into multiple copies of the person from the past.

They might persistently exist outside concrete instantiation in the world, only communicating with it through reasoning about their behavior, which might be a more resource efficient way to implement a person than a mere concrete upload 

 

Interesting. Can you elaborate?

For example, impossibility of sleep – a weird idea that if quantum immortality is true, I will not be able to fall asleep.

One interesting thing about the impossibility of sleep is that it doesn't work here on Earth, because humans actually start having dreams immediately as they enter the sleep state. So there is no last moment of experience when I fall asleep. Despite the popular misconception, such dreams don't stop during deep stages of sleep; they just become less complex and memorable. (Whether we have dreams under general anesthesia is unclear and depends on ... (read more)

Furthermore, why not just resurrect all these people into worlds with no suffering?

 

My point is that it is impossible to resurrect anyone (in this model) without him reliving his life again first; after that, he obviously gets an eternal blissful life in the real (not simulated) world. 

This may not be factually true, btw – current LLMs can create good models of past people without explicitly running a simulation of their previous lives. 

 

The discussion about anti-natalism actually made me think of another argument for why we are probably not

... (read more)
1Dakara
Yup, I agree. This makes my case even stronger! Basically, if a Friendly AI has no issues with simulating conscious beings in general, then we have good reasons to expect it to simulate more observers in blissful worlds than in worlds like ours. If the Doomsday Argument tells us that Friendly AI didn't simulate more observers in blissful worlds than in worlds like ours, then that gives us even more reasons to think that we are not being simulated by a Friendly AI in the way that you have described.

Your comment can be interpreted as a statement that theories of identity are meaningless. If they are meaningless, then the copy=original view prevails. From the third-person point of view, there is no difference between a copy and the original. In that case, there is no need to perform the experiment. 

4Vladimir_Nesov
There is a full explanation right there, in the description of the thought experiment. It describes all outcomes, including all observations and theoretical conclusions made by all the people-instances. We can look at this and ask whether those theoretical conclusions are correct, whether the theories the people-instances use to arrive at them are valid. You can tell what all the details of outcomes are in advance of actually doing this. Personal experience of people existing in the world is mediated by the physical states of their brains (or other physical hardware). So we can in principle predict what it says by asking about the physical content of the world. There are agents/people that don't have concrete instances in the world, and we can ask what they experience. They might leave the physical world, or enter it back, getting instantiated once more or for the first time. They might persistently exist outside concrete instantiation in the world, only communicating with it through reasoning about their behavior, which might be a more resource efficient way to implement a person than a mere concrete upload. But that's a different setting, not what this post describes.

This thought experiment can help us find situations in nature where similar things have already happened. So we don't need to perform the experiment. We just look at its result.

One example: quantum immortality is notoriously a bad idea to test empirically. However, the fact that biological life on Earth has survived for the last 4 billion years, despite the risks of impacts, irreversible cooling and warming, etc., is an event very similar to quantum immortality – one which we observe just after the event.  

4Dagon
It can?  Depending on what you mean by "similar", either we can find them without this thought experiment or they don't exist and this doesn't help.  Your example is absolutely not similar in the key area of individual continuity.

It all started from Sam's six-word story. So it looks like organized hype. 

She will be unconscious but will still send messages about pain. Current LLMs can do this. Also, as it is a simulation, there are recordings of her previous messages, or of a similar woman, so they can be copy-pasted. Her memories can be computed without actually putting her in pain. 

Resurrection of the dead is part of the human value system. We would need a completely non-human bliss, like hedonium, to escape this. Hedonium is not part of my reference class and thus not part of the simulation argument.  

Moreover, even creating new humans is affected by these arguments. What if my children suffer? So it is basically an anti-natalist argument.  

1Dakara
So if I am understanding your proposal correctly, then a Friendly AI will make a woman unconscious during moments of intense suffering and then implant her memories of pain. Why would it do that, though? Why not just remove the experience of pain entirely? In fact, why does Friendly AI seem so insistent on keeping billions of people in a state of false belief by planting false memories? That seems to me like a manipulation. Friendly AI could just reveal the truth to the people in the simulation and let them decide if they want to stay in a simulation or move to the "real" world. I expect that at least some people (including me) would choose to move to a higher plane of reality if that was the case. Furthermore, why not just resurrect all these people into worlds with no suffering? Such worlds would also take up less computing power than our world, so the Friendly AI doing the simulation would have another reason to pursue this option. Creation of new happy people also seems to be similarly valuable. After all, most arguments against creating new happy people would apply to resurrecting the dead. I would expect most people who oppose the creation of new happy people to oppose the Resurrection Simulation. But leaving that aside, I don't think we need to invoke hedonium here. Simulations full of happy, blissful people would be enough. For example, it is not obvious to me that resurrecting one person into our world is better than creating two happy people in a blissful world. I don't think that my value system is extremely weird, either. A person following a regular classical utilitarianism would probably arrive at the same conclusion. There is an even deeper issue. It might be the case that somehow, the proposed theory of personal identity fails and all the "resurrections" would just be creating new people. This would be really unpleasant considering that now it turns out that Friendly AI spent more resources to create fewer people who experience more suffering and less ha

New Zealand is a good place, but not everyone can move there or correctly guess the right moment to do it. 

We have to create a map of possible simulation scenarios first; I attempted this in 2015. 
I have now created a new poll on Twitter. For now, the results are:

"If you were able to create and completely own a simulation, would you prefer that it be occupied by conscious beings, conscious beings without suffering (it is blocked above some level), or NPCs?"

The poll results show:

  • Conscious: 18.2%
  • Conscious, no suffering: 72.7%
  • NPC: 0%
  • Will not create simulation: 9.1%

The poll had 11 votes with 6 days left.
 

Would you say that someone who experiences intense s

... (read more)
1Dakara
If preliminary results on the poll hold, then that would be pretty in line with my hypothesis of most people preferring creating simulations with no suffering over a world like ours. However, it is pretty important to note that this might not be representative of human values in general, because looking at your Twitter account, your audience comes mostly from a very specific circle of people (those interested in futurism and AI). I was mostly trying to approach the problem from a slightly different angle. I didn't mean to suggest that memories about intense suffering are themselves intense. As far as I understand it, your hypothesis was that Friendly AI temporarily turns people into p-zombies during moments of intense suffering. So, it seems that someone experiencing intense suffering while conscious (p-zombies aren't conscious) would count as evidence against it. Reports of conscious intense suffering are abundant. Pain from endometriosis (a condition that affects 10% of women in the world) has been so brutal that it made completely unrelated women tell the internet that their pain was so bad they wanted to die (here and here). If moments of intense suffering were replaced by p-zombies, then these women would've just suddenly lost consciousness and wouldn't have told the internet about their experience. From their perspective, it would've looked like this: as the condition progresses, the pain gets worse, and at some point, they lose consciousness, only to regain it when everything is already over. They wouldn't have experienced the intense pain that they reported to have experienced. Ditto for all PoWs who have experienced torture. That's a totally valid view as far as axiological views go, but for us to be in your proposed simulation, the Friendly AI must also share it. After all, we are imagining a situation where it goes on to perform a complicated scheme that depends on a lot of controversial assumptions. To me, that suggests that AI has so many resource

Yes, there are two forms of future anthropic shadow, the same as for the Presumptuous Philosopher:
1. Strong form – alignment is easy on theoretical grounds. 
2. Weak form – I am more likely to be in a world where some collapse (a Taiwan war) will prevent dangerous AI. And I can see signs of such an impending war now. 

1[anonymous]
Do you think we should be moving to New Zealand (ChatGPT's suggestion) or something in case of global nuclear war?

It is actually not clear what EY means by "anthropic immortality". Maybe he means "Big World immortality", that is, the idea that an inflationary large universe has infinitely many copies of Earth. From an observational point of view, it should not differ much from quantum immortality.

There are two different situations that can follow:
1. Future anthropic shadow. I am more likely to be in a world in which alignment is easy or the AI decided not to kill us for some reason.

2. Quantum immortality. I am alone on an Earth full of aggressive robots, and they fail to kill me. 

We are working on the next version of my blog post "QI and AI doomers" and will transform it into a proper scientific article. 

3Embee
That's good to know! Best of luck in your project
5[anonymous]
Same, I'm guessing that by "It actually doesn't depend on quantum mechanics either, a large classical universe gives you the same result", EY means that QI is just one way Anthropic Immortality could be true, but "Anthropic immortality is a whole different dubious kettle of worms" seems to contradict this reading. (Maybe it's 'dubious' because it does not have the intrinsic 'continuity' of QI? e.g. you could 'anthropically survive' in a completely different part of the universe with a copy of you; but I doubt that would seem dubious to EY?) I think anthropic shadow lets you say conditional on survival, "(example) a nuclear war or other collapse will have happened"[1], but not that alignment was easy, because alignment being easy would be a logical fact, not a historical contingency; if it's true, it wouldn't be for anthropic reasons. (Although, stumbling upon paradigms in which it is easy would be a historical contingency) 1. ^ "while civilization was recovering, some mathematicians kept working on alignment theory that did not need computers so that by the time humans could create AIs again, they had alignment solutions to present"

Actually, you reposted the wrong comment, but the meaning is similar. He wrote: 

That’s evil. And inefficient. Exactly as the article explains. Please read the article before commenting on it.

I think a more meta-argument is valid: it is almost impossible to prove that all possible civilizations will not run simulations despite having all data about us (or being able to generate it from scratch).

Such proof would require listing many assumptions about goal systems and ethics, and proving that under any plausible combination of ethics and goals, it is either unlikely or immoral. This is a monumental task that can be disproven by just one example.

I also polled people in my social network, and 70 percent said they would want to create a simulation w... (read more)

4Dakara
I am sorry to butt into your conversation, but I do have some points of disagreement. I think that's a very high bar to set. It's almost impossible to definitively prove that we are not in a Cartesian demon or brain-in-a-vat scenario. But this doesn't mean that those scenarios are likely. I think it is fair to say that more than a possibility is required to establish that we are living in a simulation. I think that some clarifications are needed here. How was the question phrased? I expect that some people would be fine with creating simulations of worlds where people experience pure bliss, but not necessarily our world. I would especially expect this if the possibility of a "pure bliss" world was explicitly mentioned. Something like "would you want to spend resources to create a simulation of a world like ours (with all of its "ugliness") when you could use them to instead create a world of pure bliss?" Would you say that someone who experiences intense suffering should drastically decrease their credence in being in a simulation? Would someone else reporting to have experienced intense suffering decrease your credence in being in a simulation? Why would only moments of intense suffering be replaced by p-zombies? Why not replace all moments of non-trivial suffering (like breaking a leg/an arm, dental procedures without anesthesia, etc) with p-zombies? Some might consider these to be examples of pretty unbearable suffering (especially as they are experiencing it). From a utilitarian view, why would simulators opt for the Resurrection Simulation? Why not just simulate a world that's maximally efficient at converting computational resources into utility? Our world has quite a bit of suffering (both intense and non-intense), as well as a lot of wasted resources (lots of empty space in our universe, complicated quantum mechanics, etc). It seems very suboptimal from a utilitarian view. Why would an Unfriendly AI go through the trouble of actually making us conscious? Surel

It looks like he argues on ethical grounds against the idea that friendly future AIs will simulate the past, and that imagining an unfriendly AI torturing past simulations is a conspiracy theory. I commented the following:

There are a couple of situations in which a future advanced civilization would want to have many past simulations:
1. Resurrection simulation by a Friendly AI. It simulates the whole history of the Earth, incorporating all known data, to return to life all people who ever lived. It can also run a lot of simulations to win a "measure war" against unfrie... (read more)

7jbash
Some of those people may be a bit cheesed off about that, speaking of ethics. Assuming it believes "measure war" is a sane thing to be worrying about. In which case it disagrees with me. There seems to be a lot of suffering in the "simulation" we're experiencing here. Where's the cure? That sounds like a remarkably costly and inefficient way to get not that much information about the Fermi paradox.
1Satron
I suggest sending this as a comment under his article if you haven't already. I am similarly interested in his response.

However, this argument carries a dramatic, and in my eyes, frightening implication for our existential situation.

There is not much practical advice following from the simulation argument. One piece I heard is that we should try to live the most interesting lives possible, so the simulators will not turn our simulation off. 

It looks like even Everett had his own derivation of the Born rule from his model, but in his model there are no "many worlds", just the evolution of a unitary function. As I remember, he analyzed the memories of an agent – so he analyzed past probabilities, not future probabilities. This is an interesting fact in the context of this post, where the claim is about the strangeness of future probabilities. 

But even if we exclude MWI, a pure classical inflationary Big World remains, with multiple copies of me distributed similarly to MWI branches. This allows something analogous to quantum immortality to exist even without MWI. 

I don't see the claim about merging universes in the linked Wei Dai text. 

Several possible additions:

Artificial detonation of gas giant planets is hypothetically possible (writing a draft about it now).

An impact of a large comet-like body (100-1000 km in size) with the Sun could produce a massive solar flash or flare. 

SETI attack – we find an alien signal which contains a description of a hostile AI. 

UAP-related risks, which include alien nanobots and berserkers.

A list of different risks connected with extraterrestrial intelligence. 

The Big Rip - exponential acceleration of space expansion, resulting in the destruction of ev... (read more)

I once counted several dozen ways in which AI could cause human extinction; maybe some of the ideas will help (map, text).  

See also ‘The Main Sources of AI Risk?’ by Wei Dai and Daniel Kokotajlo, which puts forward 35 routes to catastrophe (most of which are disjunctive). (Note that many of the routes involve something other than intent alignment going wrong.)

AI finds that the real problems will arise 10 billion years from now and that the only way to mitigate them is to start space exploration as soon as possible. So it disassembles the Earth and Sun and preserves only some data about humans, enough to restart human civilization later – maybe as little as a million books and DNA. 

A very heavy and dense body on an elliptical orbit that touches the Sun's surface at each perihelion would collect sizable chunks of the Sun's matter. The movement of matter from one star to another nearby star is a well-known phenomenon.

When the body reaches aphelion, the collected solar matter would cool down and could be harvested. The initial body would need to be very massive, perhaps 10-100 Earth masses. A Jupiter-sized core could work as such a body.

Therefore, to extract the Sun's mass, one would need to make Jupiter's orbit elliptical. This could b... (read more)

If we have one good person, we could use his or her copies many times in many roles, including high-speed assessment of the safety of AI outputs. 

Current LLMs, btw, have a good model of the mind of Gwern (without any of his personal details). 

If there is one king-person, he needs to be good. If there are many, the organizational system needs to be good – like a virtual US Constitution. 

2Roko
yes. But this is a very unusual arrangement.

I once wrote about an idea that we need to scan just one good person and make them a virtual king. This idea of mine is a subset of your idea in which several uploads form a good government.

I also spent last year perfecting my mind's model (sideload) to be run by an LLM. I am likely now the closest person on Earth to being uploaded. 

2Roko
that's true, however I don't think it's necessary that the person is good.

Being a science fiction author creates a habit of maintaining distance between oneself and crazy ideas. LessWrong noticeably lacks such distance.

LessWrong is largely a brainchild of Igen (through Eliezer). Evidently, Igen isn't happy with how his ideas have evolved and attempts to either distance himself or redirect their development.

It's common for authors to become uncomfortable with their fandoms. Writing fanfiction about your own fandom represents a meta-level development of this phenomenon.

Dostoyevsky's "Crime and Punishment" was an early attempt to mock a proto-rationalist for agreeing to kill an innocent person in order to help many more people. 

The main problem here is that this approach doesn't solve alignment, but merely shifts it to another system. We know that human organizational systems also suffer from misalignment - they are intrinsically misaligned. Here are several types of human organizational misalignment:

  • Dictatorship: exhibits non-corrigibility, with power becoming a convergent goal
  • Goodharting: manifests the same way as in AI systems
  • Corruption: acts as internal wireheading
  • Absurd projects (pyramids, genocide): parallel AI's paperclip maximization
  • Hansonian organizational rot: mirrors e
... (read more)
2Roko
The most important thing here is that we can at least achieve an outcome with AI that is equal to the outcome we would get without AI, and as far as I know nobody has suggested a system that has that property. The famous "list of lethalities" (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) piece would consider that a strong success.

So there are several possible explanations:

  • Intelligence can't evolve as there is not enough selection pressure in the universe with near-light-speed travel.
  • Intelligence self-terminates every time.
  • Berserkers and dark forest: intelligence is here, but we observe only field animals. Or field animals are designed in a way that increases the observer's uncertainty about possible berserkers.
  • Observation selection: in the regions of the universe where intelligence exists, there are no young civilizations, as they are destroyed - or they exist but are observed by berserkers. S
... (read more)

Good point.

Alternatively, maybe any intelligence above, say, IQ 250 self-terminates, either because it discovers the meaninglessness of everything or through effective wars and other existential risks. The rigid simplicity of field animals protects them from all this. They are super-effective survivors, like bacteria, which have lived everywhere on Earth for billions of years. 

2mako yass
I don't think the "intelligence consistently leads to self-annihilation" hypothesis is possible. At least a few times it would amount to robust self-preservation. Well.. I guess I think it boils down to the dark forest hypothesis. The question is whether your volume of space is likely to contain a certain number of berserkers, and the number wouldn't have to be large for them to suppress the whole thing. I've always felt the logic of berserker extortion doesn't work, but occasionally you'd get a species that just earnestly wants the forest to be dark and isn't very troubled by their own extinction, no extortion logic required. This would be extremely rare, but the question is, how rare.

"Frontier AI systems have surpassed the self-replicating red line"
Abstract: Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems. Nowadays, the leading AI corporations OpenAI and Google evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and report the lowest risk level of self-replication. However, following their methodology, we for ... (read more)

I observed similar effects when I experimented with my mind's model (sideload) running on an LLM. My sideload is a character, and it claims, for example, that it has consciousness. But the same LLM without the sideload's prompt claims that it doesn't have consciousness. 

In my extrapolation, going from $3,000 to $1,000,000 for one task would move one from 175th to 87th position on the CodeForces leaderboard, which does not seem like that much (a rough extrapolation is sketched below the list). 

  • O1 preview: $1.2 -> 1258 ELO
  • O1: $3 -> 1891
  • O3 low: $20 -> 2300
  • O3 high: $3,000 -> 2727
  • O4: $1,000,000 -> ? ChatGPT gives around 2900 ELO
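A rough back-of-the-envelope sketch, using only the four known points above, of why a figure near 2900 seems plausible: the marginal ELO gain per 10x of cost has been shrinking steeply.

```python
import math

# (cost in $, CodeForces ELO) for the four known points above
points = [(1.2, 1258), (3, 1891), (20, 2300), (3000, 2727)]

# Marginal ELO gain per 10x increase in cost, between consecutive points
for (c0, e0), (c1, e1) in zip(points, points[1:]):
    decades = math.log10(c1 / c0)
    print(f"${c0} -> ${c1}: {(e1 - e0) / decades:.0f} ELO per 10x of cost")
# Prints roughly 1591, 496, 196 -- the gain shrinks by a factor of 2-3 each step.

# Going from $3,000 to $1,000,000 is about 2.5 more decades of cost.
decades_left = math.log10(1_000_000 / 3000)
print(f"{decades_left:.2f} decades from $3,000 to $1,000,000")
# If the per-decade gain keeps shrinking to ~70-100 ELO, that adds only a couple
# hundred points on top of 2727 -- in the ballpark of the ~2900 estimate.
```

For comparison, a naive straight-line fit of ELO against log10(cost) over all four points would extrapolate to roughly 3800 at $1,000,000, so the ~2900 figure implicitly assumes that the diminishing returns continue.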

2[comment deleted]

The price of Mars colonization is equal to the price of the first fully self-replicating nanorobot. Anything before that is a waste of resources. And such a nanobot will likely be created by advanced AI. 

A failure of practical CF can be of two kinds: 

  1. We fail to create a digital copy of a person which has the same behavior with 99.9% fidelity. 
  2. A copy is possible, but it will not have phenomenal consciousness or, at least, it will have a non-human or non-mine phenomenal consciousness, e.g., different non-human qualia. 

    What is your opinion about (1) – the possibility of creating a copy?

With 50T tokens repeated 5 times, and a 60 tokens/parameter[3] estimate for a compute optimal dense transformer,

Does it mean that the optimal size of the model will be around 4.17T parameters?

About 4T parameters, which is 8 TB in BF16. With about 100x more compute (compared to Llama 3 405B), we get a 10x larger model by Chinchilla scaling, the correction from a higher tokens/parameter ratio is relatively small (and in this case cancels out the 1.5 factor in compute being 150x actually).

Not completely sure if BF16 remains sufficient at 6e27-5e28 FLOPs, as these models will have more layers and larger sums in matrix multiplications. If BF16 doesn't work, the same clusters will offer less compute (at a higher precision). Seems unlikely though, as ... (read more)
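A quick back-of-the-envelope check of the numbers above (a sketch using the standard 6·N·D estimate for training FLOPs and the quoted 60 tokens/parameter figure):

```python
# Assumes the 60 tokens/parameter figure quoted above
tokens = 50e12 * 5            # 50T tokens repeated 5 times = 250T training tokens
params = tokens / 60          # compute-optimal parameter count at 60 tokens/parameter
size_tb = params * 2 / 1e12   # BF16 = 2 bytes per parameter
flops = 6 * params * tokens   # standard 6*N*D training-compute estimate

print(f"{params / 1e12:.2f}T parameters")  # ~4.17T
print(f"{size_tb:.1f} TB in BF16")         # ~8.3 TB
print(f"{flops:.1e} FLOPs")                # ~6e27, the low end of the range above
```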

Less melatonin production during the night makes it easier to get up?

One interesting observation: if I have two variants of future life – going to live in Miami or in SF – both will be me from my point of view now. But from the view of Miami-me, the one who is in SF will not be me. 

There is a similar idea with the opposite conclusion – that more "complex" agents are more probable – here: https://arxiv.org/abs/1705.03078 

One way of not being "suicided" is not to live alone. Stay with 4 friends. 

I will lower the killers' possible incentive by publishing all I know – and do it in such a legal form that it can be used in court even if I am dead (an affidavit?).

Load More