I've been dedicating a fair amount of my time recently to investigating whole brain emulation (WBE).
As computational power continues to grow, the feasibility of emulating a human brain at a reasonable speed becomes increasingly plausible.
While the connectome data alone seems insufficient to fully capture and replicate human behavior, recent advancements in scanning technology have provided valuable insights into distinguishing different types of neural connections. I've heard suggestions that combining this neuron-scale data with higher-level information, such as fMRI or EEG, might hold the key to unlocking WBE. However, the evidence is not yet conclusive enough for me to make any definitive statements.
I've heard some talk about a new company aiming to achieve WBE within the next five years. While this timeline aligns suspiciously with the typical venture capital horizon for industries with weak patent protection, I believe there is a non-negligible chance of success within the next decade -- perhaps exceeding 10%. As a result, I'm actively exploring investment opportunities in this company.
There has also been speculation about the potential of WBE to aid in AI alignment efforts. However, I remain skeptical about this prospect. For WBE to make a significant impact on AI alignment, it would require not only faster WBE progress but also either a slowdown in AI capability advances as they approach human levels, or the assumption that the primary risks from AI emerge only once it substantially surpasses human intelligence.
My primary motivation for delving into WBE stems from a personal desire to upload my own mind. The benefits of WBE for those who choose not to upload are less clear, and I'm uncertain how to predict its broader societal implications.
Here are some videos that influenced my recent increased interest. Note that I'm relying heavily on the reputations of the speakers when deciding how much weight to give to their opinions.
Some relevant prediction markets:
Additionally, I've been working on some of the suggestions mentioned in the first video. I'm sharing my code and analysis on Colab. My aim is to evaluate the resilience of language models to the types of errors that might occur during the brain scanning process. While the results provide some reassurance, their value depends heavily on assumptions about how much the emulated mind's low-confidence guesses matter.
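The flavor of that experiment can be sketched as follows. This is not my actual Colab code; it is a minimal toy version that substitutes a small random network for a language model and treats scan errors as Gaussian noise on connection strengths plus random deletion of connections, then measures how often the damaged network's top predictions still agree with the original's. The network sizes, noise levels, and dropout fractions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small MLP, a toy stand-in for a language model."""
    return [rng.standard_normal((a, b)) / np.sqrt(a)
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Forward pass ending in a softmax over 'next-token' classes."""
    for W in weights[:-1]:
        x = np.tanh(x @ W)
    logits = x @ weights[-1]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def perturb(weights, noise_scale, dropout_frac):
    """Simulate scan errors: Gaussian noise on synaptic strengths
    plus random deletion of a fraction of connections."""
    out = []
    for W in weights:
        noisy = W + noise_scale * rng.standard_normal(W.shape)
        mask = rng.random(W.shape) >= dropout_frac
        out.append(noisy * mask)
    return out

weights = init_mlp([32, 64, 64, 10])
x = rng.standard_normal((100, 32))        # a batch of random "contexts"
base = forward(weights, x)                # undamaged model's predictions

for noise in (0.0, 0.01, 0.05, 0.2):
    damaged = perturb(weights, noise_scale=noise, dropout_frac=noise)
    probs = forward(damaged, x)
    agree = (probs.argmax(-1) == base.argmax(-1)).mean()
    print(f"noise={noise:.2f}  top-1 agreement={agree:.2f}")
```

The interesting question is not the top-1 agreement itself but what happens in the low-confidence cases, where small perturbations flip near-tied predictions; whether those flips matter for an emulated mind is exactly the assumption the real analysis leans on.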