avturchin

Comments

Time Machine as Existential Risk
avturchin7h61

Thanks, that was actually what EY said in the quote I put just below my model: that we should change the bit each time. I somehow missed it ("send back a '0' if a '1' is recorded as having been received, or vice versa—unless some goal state is achieved").
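
To make the quoted rule concrete, here is a minimal toy model (my own sketch, not from EY's post; the `world` and `bit_sent` functions are hypothetical stand-ins). Flipping the received bit unless the goal state is achieved means that the only self-consistent histories are the ones in which the goal is achieved:

```python
import random

def world(received_bit: int) -> bool:
    """Hypothetical world dynamics: is the goal state achieved,
    given the bit received from the future? (toy stand-in)"""
    random.seed(received_bit)        # the received bit 'steers' events
    return random.random() < 0.5

def bit_sent(received_bit: int, goal_achieved: bool) -> int:
    # The quoted rule: send back the opposite bit, unless the goal state is achieved.
    return received_bit if goal_achieved else 1 - received_bit

# Keep only Novikov-style self-consistent histories (bit sent == bit received).
consistent = [(r, world(r)) for r in (0, 1) if bit_sent(r, world(r)) == r]
print(consistent)  # any surviving history has goal_achieved == True
```

If no candidate history is self-consistent, the toy model just returns an empty list, which corresponds to the "no allowed timeline" case.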

As I stated in the epistemic status, this article is just a preliminary write-up. I hope more knowledgeable people will write much better models of x-risks from time machines and will be able to point out where avturchin was wrong and explain what the real situation is.

Nina Panickssery's Shortform
avturchin2d20

I am going to post about biouploading soon – where the uploading happens into (or via) a distributed net of my own biological neurons. This combines the good things about uploading – immortality, the ability to be copied, ease of repair – with the good things about being a biological human: preserving infinite complexity, exact sameness of the person, and a guarantee that the bioupload will have human qualia and any other important hidden properties we might otherwise miss.

Time Machine as Existential Risk
avturchin3d40

Thanks! Fantastic read. It occurred to me that sending code or AI back in time, rather than a person, is more likely since sending data to the past could be done serially and probably requires less energy than sending a physical body.

Some loops could be organized by sending a short list of instructions to the past to an appropriate actor – whether human or AI.

Additionally, some loops might not require sending any data at all: Roko's Basilisk is an example of such acausal data transmission to the past. Could there be an outer loop for Roko's Basilisk? For example, a precommitment not to be acausally blackmailed.

Also (though I'm not certain about this), loops like the ones you described require that the non-cancellation principle is false – meaning that events which have happened can be turned into non-existence. To prevent this, we would need to travel to the past and compensate for any undesirable changes, thus creating loops. This assumption motivated the character in Timecrimes to try to recreate all events exactly as they happened.

However, if the non-cancellation principle is false, we face a much more serious risk than nested loops (which are annoying, but most people would still live normal lives, especially those who aren't looped and would pass through the loops unaffected). The risk is that a one-time time machine could send a small probe into the remote past and prevent humanity from appearing at all.

We can also hypothesize that an explosion of nested loops and time machines might be initiated by aliens somewhere in the multiverse – perhaps in the remote future or another galaxy. Moreover, what we observe as UAPs might be absurd artifacts of this time machine explosion.

Time Machine as Existential Risk
avturchin3d53

The main claim of the article does not depend on the exact mechanism of time travel, which I have chosen not to discuss in detail. The claim is that we should devote some thought to possible existential risks related to time travel.

The argument about presentism is that the past does not ontologically exist, so "travel" into it is impossible. Even if one travels to what appears to be the past, it would not have any causal effects along the timeline.

I was referring to something like eternal return—where all of existence happens again and again, but without new memories being formed. The only effect of such a loop is anthropic—it has a higher measure than a non-looped timeline. This implies that we are more likely to exist in such a loop and in a universe where this is possible.
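
To make the anthropic claim concrete, here is one toy formalization (my own illustration under a simple self-sampling assumption; $N$ and $p$ are hypothetical parameters): if a looped timeline recurs $N$ times while a non-looped one runs once, and a prior fraction $p$ of timelines are looped, the probability that a randomly sampled observer-moment finds itself inside a loop is

$$P(\text{loop}) = \frac{Np}{Np + (1 - p)},$$

which tends to 1 as $N$ grows, so most of one's measure would be concentrated in the looped timeline.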

Interstellar travel will probably doom the long-term future
avturchin6d40

I would add that there are a series of planetary-system-wide risks that appear only for civilizations traveling within their own solar systems but do not affect other solar systems. These include artificial explosions of giant planets via initiating nuclear fusion in their helium and lithium deposits, destabilization of the Oort cloud, and the use of asteroids as weapons.

More generally speaking, any spacecraft is a potential weapon, and the higher its speed, the more dangerous it becomes. Near-light-speed starships are perfect weapons. Even a small piece of matter traveling at very high velocity (not necessarily light speed, but above 100 km per second, as I recall) will induce large nuclear reactions at the impact site. Such nuclear reactions may not produce much additional energy, but they will cause significant radioactive contamination. They could destroy planets and any large structures.
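
For a rough sense of scale, here is a back-of-the-envelope sketch (my own illustrative calculation, not from the post) of the kinetic energy of a fast impactor, using only the standard relativistic formula and making no claim about impact-induced nuclear reactions:

```python
import math

C = 299_792_458.0          # speed of light, m/s
MEGATON_TNT_J = 4.184e15   # joules per megaton of TNT

def relativistic_ke(mass_kg: float, beta: float) -> float:
    """Kinetic energy (gamma - 1) * m * c^2 for speed beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

for beta in (0.1, 0.5, 0.9):
    ke = relativistic_ke(1.0, beta)  # a 1 kg projectile
    print(f"1 kg at {beta}c: {ke:.2e} J ≈ {ke / MEGATON_TNT_J:.1f} Mt TNT")
```

Even a 1 kg mass at 0.9c carries on the order of tens of megatons of TNT equivalent, which is the sense in which near-light-speed craft are weapons regardless of payload.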

Additionally, space colonization will likely develop alongside weapon miniaturization, which could ultimately result in space-based grey goo with some level of intelligence. Stanisław Lem's last novel, "Fiasco," seems to address this concept.

CstineSublime's Shortform
avturchin6d00

"Explain as gwern ELI5"

Vladimir_Nesov's Shortform
avturchin19d10

This means that a straightforward comparison of FLOPS-per-USD between home-computer GPU cards and data-center hardware is incorrect. If someone already has a GPU card, they also already have a computer and a house where this computer sits "for free." But someone who needs to scale has to pay for housing and mainframes.

Such comparisons of old 2010s GPUs with more modern ones are used to show the slow rate of hardware advances, but they don't take into account the hidden costs of owning older GPUs.
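
A toy total-cost-of-ownership comparison illustrates the point (all numbers below are hypothetical placeholders, not real prices or specs): adding even a modest per-card overhead for housing and infrastructure hurts the cheap older card far more than the expensive newer one.

```python
def flops_per_usd(peak_flops: float, card_cost: float, overhead: float = 0.0) -> float:
    """Peak FLOPS divided by total cost of ownership (card + overhead)."""
    return peak_flops / (card_cost + overhead)

# Hypothetical placeholder numbers, for illustration only.
OLD_FLOPS, OLD_COST = 5e12, 200.0      # used 2010s consumer card
NEW_FLOPS, NEW_COST = 1e15, 30_000.0   # modern data-center card
OVERHEAD = 1_500.0                     # per-card share of housing, power, racks

print("naive:        ", flops_per_usd(OLD_FLOPS, OLD_COST), "vs", flops_per_usd(NEW_FLOPS, NEW_COST))
print("with overhead:", flops_per_usd(OLD_FLOPS, OLD_COST, OVERHEAD), "vs", flops_per_usd(NEW_FLOPS, NEW_COST, OVERHEAD))
```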

$500 bounty for engagement on asymmetric AI risk
avturchin21d52

In that case, AI risk becomes similar to aging risk – it will kill me and my friends and relatives. The only difference is the value of future generations. 

Extinction-level AI risk kills future generations, but mundane AI risk (e.g. ubiquitous drone clouds, with only some people surviving in bunkers) still assumes the existence of future generations. Mundane AI risk also does not require superintelligence.

I wrote on similar topics in https://philpapers.org/rec/TURCOG-2
and here https://philpapers.org/rec/TURCSW 

Do you even have a system prompt? (PSA / repo)
avturchin1mo20

The difference is as if the AI gets a 20-point IQ boost. It is not easy, however, to actually explain what I like.

Posts

Time Machine as Existential Risk (15 karma, 3d, 7 comments)
Our Reality: A Simulation Run by a Paperclip Maximizer (23 karma, 2mo, 65 comments)
Experimental testing: can I treat myself as a random sample? (9 karma, 2mo, 41 comments)
The Quantum Mars Teleporter: An Empirical Test Of Personal Identity Theories (10 karma, 5mo, 18 comments)
What would be the IQ and other benchmarks of o3 that uses $1 million worth of compute resources to answer one question? [Question] (16 karma, 6mo, 2 comments)
Sideloading: creating a model of a person via LLM with very large prompt (13 karma, 7mo, 4 comments)
If I care about measure, choices have additional burden (+AI generated LW-comments) (5 karma, 8mo, 11 comments)
Quantum Immortality: A Perspective if AI Doomers are Probably Right (12 karma, 8mo, 55 comments)
Bitter lessons about lucid dreaming (81 karma, 8mo, 62 comments)
Three main arguments that AI will save humans and one meta-argument (8 karma, 9mo, 8 comments)