All of pdf23ds's Comments + Replies

For a while I thought I had delayed sleep phase syndrome (which is more easily treated with light therapy), and that it's just so severe that the morning sunlight late in my day tends to make it go crazy. It's not quite regular enough for non-24. Or it could be completely irregular.

In any case, light therapy doesn't seem to help at all. I tried it for a month or two and saw no effect. Also, it's a /huge/ inconvenience.

What I'm wondering about a Markov process is whether it could be extended to include other potentially relevant variables. From five minutes reading Wikipedia, it seems like I'd have a combinatorial explosion of states, and the more states, the more data needed to train the model.

So I'd have something like 48 states, one for each half-hour of the day, times 3-4 more to cover 8-11 hours of sleep? Would it work to have ordered pairs where the first item is measured as time since my last awakening?
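For concreteness, here's a minimal sketch of the simplest version I can imagine, assuming the log has already been bucketed into (half-hour-of-day, asleep?) observations; the state encoding and the variable names are just placeholders:

```python
from collections import defaultdict

# Toy first-order Markov chain over (half-hour slot, asleep) states.
# `observations` is assumed to be a chronological list of (slot, asleep)
# pairs, with slot in 0..47 and asleep True/False -- 96 states before
# adding any extra variables, which is where the combinatorial explosion starts.

def train(observations):
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(observations, observations[1:]):
        counts[prev][nxt] += 1
    # Normalize transition counts into probabilities.
    return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
            for s, nxts in counts.items()}

def predict_next(model, state):
    """Most likely next state, or None if this state was never observed."""
    nxts = model.get(state)
    return max(nxts, key=nxts.get) if nxts else None
```

Every extra variable (caffeine, time since last awakening, etc.) multiplies the state count, so the transition table gets sparse fast unless the variables are coarsely bucketed.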

I have karma display turned off (Greasemonkey script). It stresses me out. I think your comment could certainly expand on point 3/4. Really, what I was looking for as a response to the post is a good pointer on what sort of algorithms or tools could potentially give me good results on this problem, to direct my studying, and perhaps what textbooks or introductions I should be reading.

But point 1 is good. I hadn't thought to do that. I was just going to go on common sense, and a kitchen sink approach.

1Cyan
Would it have improved the comment if I had stated explicitly at the start that my reply was not directly responsive to your request but rather addressed an oversight/implicit assumption?

Vyvanse has insomnia listed as a side effect.

Well, Vyvanse is a modified amphetamine, so yeah. I also have serious focus problems. I was only on it for a month or so, and found it ineffective for the same reasons as other stimulants. I think during the sleep log I had just taken an isolated pill I had left over.

But your advice is good. Going through the options very thoroughly might turn up something.

I have six months of past sleep data, though nothing current, with sleep and wake times. I could easily augment that with other potentially relevant variables, like daily caffeine intake or whatnot.

4Daniel_Burfoot
As a starter method, I would try AdaBoost. AdaBoost is nice because it is easy to implement, gives some protection against overfitting, and allows you a lot of liberty to define whatever context functions/predictors you want. Try to predict whether a given hour will be sleep or not. Use whatever information like caffeine intake you can as predictors, and use as many of them as you can dream up: AdaBoost will figure out which ones are the most important.
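A minimal sketch of that setup with scikit-learn, assuming one row of features per hour of the log (the particular features here are made up for illustration):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# One row per logged hour; columns are whatever predictors you can dream up.
# Columns here (illustrative): hour of day, hours since waking, caffeine (mg, last 6 h).
X = np.array([
    [23, 16,   0],
    [ 3, 20,   0],
    [14,  2, 150],
    # ... one row per hour in the log
])
y = np.array([1, 1, 0])  # 1 = asleep during that hour, 0 = awake

clf = AdaBoostClassifier(n_estimators=100)
clf.fit(X, y)

print(clf.predict([[2, 19, 0]]))    # will the 2 AM hour be sleep?
print(clf.feature_importances_)     # which predictors carry the weight
```

The feature_importances_ attribute is the part that does the "figure out which ones are the most important" work.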
4gwern
Wake and sleep times don't strike me as very good data, IMO. If you had 6 months of Zeo data, that'd be real data you could try to feed into a model of some sort.

I use SuperMemo daily, and have read everything Wozniak has written about sleep. I've talked to him a couple of times about other things (1-2 month response time). I may ask him about this.

1jwhendy
Wow -- I'm quite impressed! You've really tried a helluva lot of things. I'd see what he says. One never knows. Good luck finding assistance.

It was replaced shortly after, and my back problems promptly dissipated. I had only been sleeping on that mattress for a few weeks at the time, having just thrown away another.

8SarahNibs
Side note: I thought my mattress was giving me lots of back problems (and incidentally sleep problems), and replaced it, and everything was fixed. Some months later I resumed sleeping with a certain very comfortable, too-small blanket. I hadn't realized I had stopped when I switched mattresses. The back problems returned. I got rid of the blanket, they went away. The too-small blanket had caused me to curl up into too tight a ball while sleeping.

I have tried sedatives, melatonin, melatonin-inducing sleeping aids, traditional sleeping aids, and Ambien (whatever that is). Some have no effect, some put me to sleep but leave me unrested, and some put me to sleep and leave me unrested and incredibly groggy for the rest of the day. Generally speaking, trying to shift your sleep schedule by more than 1-2 hours using sleep aids doesn't work. If your circadian rhythm keeps advancing anyway, the results are just like a normal person trying to go to bed at noon using sleep aids.

a lot of different ways to use them

Can you expand on this?

1jimrandomh
You can vary the dosage, the timing, and the preconditions (i.e., use it only when you predict you'll fail to fall asleep otherwise). You can use the same compound with different release profiles. You can mix some compounds with each other (but not all combinations; X and Y being safe individually does not always mean coadministration is safe).
-1jwhendy
Edit: I humbly remove this. So as not to simply "run and hide": it was an off-color comment. I had already debated with myself about whether to even post it, but thought it might produce at least something humorous for the OP. It's been downvoted and I'm pulling it before many more have to see it. I apologize. Edit 2: were this to be left up, I could see it getting to -5 or -10 depending on traffic, so feel free to continue downvoting to that point.

I suppose I could shop around for a doctor willing to prescribe modafinil for my sort of sleep problems. I have thought of trying it in the past, but that's pretty far off-label.

"Everything" includes having read all current medical literature, which all says that severe circadian rhythm disorders are basically untreatable, and having one sleep doctor basically give up. I could also try more sleep doctors, I suppose.

4wedrifid
Do it. Even if your underlying condition is incurable some of the symptoms can be managed. And Modafinil is outright brilliant for managing fatigue and supplying wakefulness. Compared to the most prominent usages of modafinil (performance enhancement) your usage would be pretty damn close to the label all things considered. But forget the label. Tell the doctor whatever is convenient to make him sign stuff for you. (You do keep your doctors separate, right? The ones who give you actual useful advice and the ones you use as gatekeepers to the system. Lie to the latter.)
2jake987722
It doesn't sound unreasonable to me given the severity of your symptoms. But I'm not a sleep doctor. Consider also that there are other ways to procure drugs like this, i.e., shady online vendors from overseas. Just make sure you do your research on the vendors first. There are people who have ordered various drugs from these vendors, chemically verified that the drugs were in fact what they were advertised to be, and then posted their results in various places online for the benefit of others. Bottom line: some companies are more trustworthy than others--do your homework. And obviously you should exercise due caution when taking a new drug without a doctor's consent.

More like, "here's the times I went to sleep and woke up in the previous month. What can I expect today?" Hopefully including the effects of caffeine, delayed sleep, early awakening, etc. My sleep may sort of follow a cycle, but it's not regular enough that knowing the cycle would be that useful.

Here's the raw data for 6 months or so last year: Data.

EDIT: I was unemployed during this period, and not using an alarm regularly, so I was sleeping exactly when I felt like it. If I was working it would look much different.

1Larks
Upvoted for meticulous data collection.
1jimrandomh
Ok, here's what I just did. I looked through that log, collected all the names of pharmaceuticals in there (Klonopin, Vyvanse, Lunesta, Melatonin), and searched each one's Wikipedia page for sleep-related effects and side effects. And I found this: So that's probably a net negative, sleep-wise, although there might still be non-sleep-related reasons to take it. Wikipedia has this to say about Lunesta:

Vyvanse has insomnia listed as a side effect. It's not clear from the log whether you just used it once, in which case that's unimportant, or for a long time, in which case it is.

Melatonin is sometimes used as a sleeping aid, but in my experience it's pretty weak. It may be effective against a particular cause of insomnia, but it didn't do much for me, and if it didn't do much for you, well, that's not that surprising.

So, the good news is that the reason using drugs to help you sleep hasn't worked is that they're lousy drugs. The next step is to go through the Wikipedia page for insomnia and collect a list of candidate substances. Then read the page for each one, cross off the ones that sound bad (mainly based on side-effect risk), and get a list of candidates to try. Bring this list to your physician, let him veto any subset, get prescriptions for any remaining ones that require prescriptions, and then try each one in turn.
6jimrandomh
Now we're getting somewhere! One thing really jumps out at me in that log: the mattress. I see no mention in the log of it having been replaced. That is a big deal. It may not be the entire cause of your sleep problem, but it is at least a very major contributing factor. You need to replace that mattress immediately, or find somewhere else to sleep.

I wouldn't exactly call it a median. It trends forward every day, eventually wraps around, but it doesn't spend much time at all around 2-8 AM, due to sunlight keeping me awake when I'd otherwise go to sleep in late morning or afternoon.

2Spencer_Sleep
This sounds a lot like Non-24-hour sleep-wake syndrome. The defining symptom for Non-24 is (from Wikipedia) "a chronic steady pattern comprising one- to two-hour daily delays in sleep onset and wake times in an individual living in society". Your delays seem to be longer than 1-2 hours, but it may be a similar problem. I don't know how much you've looked into this, given the impressive extent of your other searches, but it may be something to look into. Have you tried light therapy? Wikipedia (and this study) recommend it, perhaps in combination with melatonin, as the most effective treatment of Non-24. Not sure how valid this is, but it might be worth looking into, if you haven't already.
5[anonymous]
You're Harry Potter! On a more serious note, how would you deal with a constantly forward-moving wake time even if you could predict it?

Besides, having a tool that could forecast my sleep patterns given different variables would allow me to understand the interactions of those variables and ultimately would allow me to take control of my sleep patterns.

These don't work for me. The details are boring.

1jimrandomh
The details may be boring, but they matter a great deal. I believe you when you say you tried using sedatives and it didn't work, but there are a lot of different ones to try and a lot of different ways to use them. Which ones have you tried, and in what way(s) were they unsuitable? Have you actually run out of compounds to try, or did you just get discouraged by a few bad results?

"I find it impossible to wake up at a consistent time every day (+/- 8 hours), despite years of trying"

In other words, I've tried everything else.

2jake987722
How about Modafinil or a similar drug? It is prescribed for narcolepsy. More generally, can I safely assume that "everything" includes having talked to your doctor about how serious these symptoms are?

What about the PocketPro II? It draws 240 mA, so a 1 Ah external battery gets you 4 extra hours.

0gwern
Not a bad suggestion; but will it run off the external battery seamlessly?

I've been doing audio-only with a $40 dictaphone from Wal-Mart that fits in my pocket. It averages 150-200 MB a day. I generate hashes of each file and timestamp them so they're more likely to be useful if I ever need them for proof of something.
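The hashing step needs nothing more elaborate than something like this (a sketch; the directory is a placeholder, and the actual timestamping just means submitting each digest to some trusted third-party service):

```python
import hashlib
from pathlib import Path

def digest_recordings(directory):
    """SHA-256 each recording; the hex digests are what get timestamped."""
    digests = {}
    for path in sorted(p for p in Path(directory).iterdir() if p.is_file()):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB at a time
                h.update(chunk)
        digests[path.name] = h.hexdigest()
    return digests
```

A digest that was timestamped on a given date proves the file existed, unaltered, by that date, without having to publish the audio itself.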

The thing that prompted me to start doing this was frequent arguments with close ones that often came down to "you said this", "no I didn't" type of stuff. It's oddly very reassuring to have these recordings. (FTR, I've used them for that purpose more or less once, although I find them useful for recording therapy sessions too.)

3andreas
Combine this with speech-to-text transcription software and you get a searchable archive of your recorded interactions! ETA: In theory. In practice, dictation software algorithms are probably not up to the task of turning noisy speech from different people into text with any reasonable accuracy.
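A rough sketch of the "in theory" pipeline, assuming the recordings are in WAV form and using the speech_recognition package's Google backend; the accuracy caveat above is exactly the weak link:

```python
import speech_recognition as sr
from pathlib import Path

def transcribe_archive(directory):
    """Best-effort transcript for each recording, keyed by filename."""
    recognizer = sr.Recognizer()
    archive = {}
    for path in sorted(Path(directory).glob("*.wav")):
        with sr.AudioFile(str(path)) as source:
            audio = recognizer.record(source)
        try:
            archive[path.name] = recognizer.recognize_google(audio)
        except sr.UnknownValueError:
            archive[path.name] = ""  # recognizer couldn't make sense of this file
    return archive

def search(archive, term):
    """Filenames whose transcript mentions the search term."""
    return [name for name, text in archive.items() if term.lower() in text.lower()]
```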

I remember finding this article really convincing and compelling when I first read it. I looked it up again because I wanted to be able to make the argument myself, and now I find that I don't understand how you can get from "if the staid conventional normal boring understanding of physics and the brain is correct" to "there's no way in principle that a human being can concretely envision, and derive testable experimental predictions about, an alternate universe in which things are irreducibly mental." That seems like too large a jump for me. Any help?

I thought a lot about creating such a system, and how it would look, a number of years ago, but never made any good progress on it. The point where I got stuck was taking a particular blog post with lots of debate in the comments, trying to dissect it in different ways, and seeing what ended up being the most useful. I found I didn't have the focus to do so.

Anyway, there's Truth Mapping, which I think sucks for quite a number of reasons.

I came across a few cites supporting the "quite a bit" answer in the "Cold War" article at Alcor (linked elsewhere on this thread).

It is interesting and more than a little ironic to note that fifteen years prior to the time that Persidsky wrote the words above, a large and growing body of evidence was already present in the scientific literature to discredit the "suicide-bag concept" of lysosomal rupture resulting in destruction of cells shortly after so-called death. I cite below papers debunking this notion:

  • Trump, B.F.,

...

I forget who brought this up--maybe zero_call? jhrandom?--but I think a good question is "How quickly does brain information decay (e.g. due to autolysis) after the heart stops and before preservative measures are taken?" If the answer is "very quickly" then cryonics in non-terminal-illness cases becomes much less effective.

4pdf23ds
I came across a few cites supporting the "quite a bit" answer in the "Cold War" article at Alcor (linked elsewhere on this thread). There's more at the link.

Here's another one. When reading Wikipedia on Chaitin's constant, I came across an article by Chaitin from 1956 (EDIT: oops, it's 2006) about the consequences of the constant (and its uncomputability) for the philosophy of math, which seems to me to be completely wrongheaded, but for reasons I can't put my finger on. It really strikes the same chords in me that a lot of inflated talk about Gödel's second incompleteness theorem strikes. (And indeed, as is obligatory, he mentions that too.) I searched on the title but didn't find any refutations. I wonder if anyone here has any comments on it.

I may be stretching the openness of the thread a little here, but I have an interesting mechanical engineering hobbyist project, and I have no mechanical aptitude. I figure some people around here might, and this might be interesting to them.

The Avacore CoreControl is a neat little device, based on very simple mechanical principles, that lets you exercise for longer and harder than you otherwise could, by cooling down your blood directly. It pulls a slight vacuum on your hand, and directly applies ice to the palm. The vacuum counteracts the vasoconstriction...

As it was mocking bgrah's assertion, and bgrah used "unrational", and in my estimation his meaning was closer to "irrational" than "arational", I used the former. Perhaps using "unrational" would have been better, though.

Ok, say you enter into a binding agreement forcing yourself to take a sleeping pill tomorrow.

I don't think any such agreement could be legally binding under current law, which is relevant since we're talking about rights.

Disliking Pollock is irrational. As is disliking Cage. Or Joyce. Or PEZ.

0wedrifid
Neutral vote. I like the PEZ juxtaposition but 'arational' would fit better. A simply false assertion doesn't fit well with the irony.
2[anonymous]
People can get the humor and still downvote you. I didn't vote one way or the other.
2AdeleneDawner
It was, yes - maybe we need to make emoticons more normal here, since this is a recurring problem. :P (Downvote removed)
0thomblake
Just consider it evidence of the level of culture you'll find hereabouts. Savages.
5Blueberry
I love 4'33". It helps me get to sleep.

Hyper operators. You can represent even bigger numbers with Conway chained arrow notation. Eliezer's 3^^^^3 is a form of hyper operator notation, where ^ is exponentiation, ^^ is tetration, ^^^ is pentation, etc.

If you've ever looked into really big numbers, you'll find info about Ackermann's function, which is trivially convertible to hyper notation. There are also the Busy Beaver numbers, which grow faster than any computable function.
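For concreteness, the hyper operator is just a short recursion; a sketch (anything beyond tiny arguments is hopelessly infeasible to evaluate, which is rather the point):

```python
def hyper(n, a, b):
    """Knuth up-arrow a ^(n) b: n=1 exponentiation, n=2 tetration, n=3 pentation, ..."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(2, 3, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# hyper(4, 3, 3) is Eliezer's 3^^^^3 -- don't try to evaluate it.
```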

0MrHen
Yes, this is exactly what I was looking for. Thank you.

Umm, that's not what I meant by "faithful reproductions", and I have a hard time understanding how you could have misunderstood me. Say you took a photograph using the exact visual input over some 70 square degrees of your visual field, and then compared the photograph to that same view, trying to control for all the relevant variables*. You seem to be saying that the photograph would show the shadows as darker, but I don't see how that's possible. I am familiar with the phenomenon, but I'm not sure where I go wrong in my thought experiment.

* photo correctly lit, held so that it subtends 70 square degrees of your visual field, with your head in the same place as the camera was, etc.

0SilasBarta
I thought you meant "faithful" in the sense of "seeing this is like seeing the real thing", not "seeing this is learning what your retinas actually get". If you show a photograph that shows exactly what hit the film (no filters or processing), then dark portions stay dark. When you see the scene in real life, you subtract off the average coloring, which can be deceiving. When you see the photo, you see it as a photo, and you use your current real-life background and lighting to determine the average color of your visual field. The darkness in the photo deviates significantly from this, while it does not so deviate when you're immersed in the actual scene and have enough information about the shadow for your brain to subtract off the excessive blackness. Been a long day, hope I'm making sense.

Along the same lines, this is why cameras often show objects in shadows as blacked out -- because that's the actual image they're getting, and the image your own retinas get! It's just that your brain has cleverly subtracted out the impact of the shadow before presenting it to you.

That doesn't explain why faithful reproductions of images with shadows don't prompt the same reinterpretation by your brain.

5mattnewport
Blacked out shadows are generally an indication of a failure to generate a 'faithful' reproduction due to dynamic range limitations of the camera and/or display medium. There is a fair amount of research into how to work around these limitations through tone mapping. High Dynamic Range cameras and displays are also an area of active research. There's not really anything to explain here beyond the fact that we currently lack the capture or display capability to faithfully reproduce such scenes.
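For a sense of what tone mapping does, here is a minimal sketch of one common global operator (Reinhard), assuming you already have the scene's linear HDR luminance as an array:

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18, eps=1e-6):
    """Compress linear HDR luminance into [0, 1) with the global Reinhard operator."""
    log_avg = np.exp(np.mean(np.log(luminance + eps)))  # log-average scene luminance
    scaled = key * luminance / log_avg                   # map the average to middle grey
    return scaled / (1.0 + scaled)                       # roll highlights off smoothly
```

The roll-off is what lets deep shadows and bright regions stay simultaneously visible on a low-dynamic-range display, at the cost of radiometric faithfulness.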
0SilasBarta
Sure it does -- faithful reproductions give the shadowed portion the appropriate colors for matching how your brain would perceive a real-life shadowed portion of a scene.

I am fairly sure, though I haven't been able to refind a link, that there's some solid evidence that autolysis isn't nearly that quick or severe.

8jhuffman
We can watch neural cells dying under a microscope. The destruction looks pretty complete: structure is dissolved in what are essentially digestive enzymes. If you read Alcor's FAQ for Scientists, you'll notice that they are the most careful to point out that there is considerable doubt about the possibility of ever reviving anyone who's gone several hours without vitrification. Maybe this is because they want more "stand-by" revenue. Maybe it's because they know there is no basis for speculation; by our current understanding of things it's a serious problem. There are those who hope it is not a fatal problem. There are those who hope there is a heaven, too.

Hmm. I can with the necker cube, but not at all with this one.

7AndyWood
I was never able to do it with this one before, either. What I'm doing now is concentrating hard on the two tiles of interest, until the rest of the picture fades into the background. The two tiles then seem to be floating on a separate top layer, and appear to be the same shade.

For people wanting different recordings of the garbled/non-garbled: it's right on the page right above the one Morendil linked to.

On the next sample, I only caught the last few words on the first play (of the garbled version only), and after five plays still got a word wrong. On the third, I only got two words the first time, and additional replays made no difference. On the fourth, I got half after one play, and most after two. On the fifth, I got the entire thing on the first play. (I'm not feeling as clear-headed today as I was the other day, but it did...

Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.

This isn't actually a case of pareidolia, as the squiggly noises (they call it "sine wave speech") are in fact derived from the middle recording, using an effect that sounds, to me, most like an extremely low bitrate mp3 encoding. Reading up on how they produce the effect, it is in fact a very similar process to mp3 encoding. (Perhaps inspired by it? I believe most general audio codecs work on very similar basic principles.)

1arundelo
True; I suppose it's a demonstration of the thing that makes pareidolia possible -- the should-be-obvious-but-isn't fact that pattern recognition takes place in the mind.
5Blueberry
So it's the opposite of pareidolia. It's actually meaningful sound, but it looks random at first. Maybe we should call it ailodierap.

My problem with CEV is that who you would be if you were smarter and better-informed is extremely path-dependent. Intelligence isn't a single number, so one can increase different parts of it in different orders. The order people learn things in, and how fully they integrate that knowledge, and what incidental declarative/affective associations they form with the knowledge, can all send the extrapolated person off in different directions. Assuming a CEV-executor would be taking all that into account, and summing over all possible orders (and assuming that ...

1Stuart_Armstrong
Good point, though I'm not too worried about the path dependency myself; I'm more preoccupied with getting somewhere "nice and tolerable" than somewhere "perfect".

Hmm. I got the meaning of the first section of the clip the first time I heard it. OTOH, that was probably because I looked at the URL first, and so I was primed to look at the content that way.

1Dustin
The first and last parts sounded exactly the same to me. However, what "meaning" are you talking about? I got no meaning from the sound effects.
0Morendil
How about the other vocoded samples? Thanks for the report anyway, that's interesting to know.

Here's an algorithm that I've heard is either really hard to derandomize or has been proven impossible to derandomize. (I couldn't find a reference for the latter claim.) Find an arbitrary prime between two large numbers, like 10^500 and 10^501. The problem with searching sequentially is that there are arbitrarily long stretches of composites among the naturals, and if you start somewhere in one of those you'll end up spending a lot more time before you get to the end of the stretch.
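The randomized version is just: sample candidates from the interval and run a probabilistic primality test on each. A sketch (Miller-Rabin, so "prime" here means "prime with overwhelming probability"):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime_between(lo, hi):
    """Sample odd candidates from [lo, hi) until one passes the test."""
    while True:
        candidate = random.randrange(lo | 1, hi, 2)
        if is_probable_prime(candidate):
            return candidate

# e.g. random_prime_between(10**500, 10**501)
```

By the prime number theorem the density of primes near 10^500 is about 1/1150, so sampling only odd candidates you expect to succeed after roughly 600 draws, regardless of where the long composite stretches happen to sit.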

6pengvado
See the Polymath project on that subject. The conjecture is that it is possible to derandomize, but it hasn't been proven either way. Note that finding an algorithm isn't the hard part: if a deterministic algorithm exists, then the universal dovetail algorithm also works.

I agree that this argument depends a lot on how you look at the idea of "evidence". But it's not just in the court-room evidence-set that the cryonics argument wouldn't pass.

Yes, that's very true. You persuasively argue that there is little scientific evidence that current cryonics will make revival possible.

But you are still conflating Bayesian evidence with scientific evidence. I wonder if you could provide a critique that says we shouldn't be using Bayesian evidence to make decisions (or at least decisions about cryonics), but rather scient...

Of course, every perfect-information deterministic game is "a somewhat more complex tic-tac-toe variant" from the perspective of sufficient computing power.

Yeah, sure. And I have a program that gives constant time random access to all primes less than 3^^^^3 from the perspective of sufficient computing power.

So you know how to divide the pie? There is no interpersonal "best way" to resolve directly conflicting values. (This is further than Eliezer went.) Sure, "divide equally" makes a big dent in the problem, but I find it much more likely any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. I mean, there are plenty of cases where there's more overlap and orthogonal values, but this kind of conflict is unavoidable between any reasonably complex utility functions.

1Vladimir_Nesov
I'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect emptiness). The possibilities open for the search for "off-line" resolution of conflict (with abstract transformation of preference) are wider than those for the "on-line" method (with AIs fighting/arguing it out), and so the "best" option, for any given criterion of "best", is going to be better in the "off-line" case.

I don't have a problem with that usage. 0% or 100% can be used as a figure of speech when the true probability is 0+x or 1-x with x small enough that x < 0.1^n for some suitably large n (say n > 4). If others are correct that probabilities that small or large don't really have much human meaning, getting x closer to 0 in casual conversation is pretty much pointless.

Of course, a "~0%" would be slightly better, if only to avoid the inevitable snarky rejoinder.

Third, re senile dementia, there is the possibility of committing suicide and undergoing cryonics.

http://lesswrong.com/lw/1mh/that_magical_click/1hp5

Eh. At least when you're alive, you can see nasty political things coming. At least from a couple meters off, if not kilometers. Things can change a lot more while you're vitrified in a canister for 75-300 years than they can while you're asleep. I prefer Technologos' reply, plus the point that economic considerations make it likely that reviving someone would be a pretty altruistic act.

Most of what you're worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.

If you're worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don't have the warning required to commit suicide - in fact, any UFAI that cares enough to preserve and torture you, has a motive to delibera...

I automatically think of 8-year-olds if it's not very clear who's being referred to.

Right. "Girl" really has at least two distinct senses, one for children and one for peers/juniors of many ages. "Guy" isn't used in the first sense, and the second sense of "boy" is more restricted. The first sense of "boy"/"girl" is the most salient one, and thus the default absent further context. I don't think the first sense needs to poison the second one. But its use in the parent comment this discussion wasn't all that innocent. (I've been attacked before, by a rather extreme feminist, for using it innocently.)

"Child" is probably never OK for people older than 12-13, but "girl", "guy", and occasionally "boy" are usually used by teens, and often by 20-somethings to describe themselves or each other. ("Boy" usually by females, used with a sexual connotation.)

3AdeleneDawner
I'm aware of it, and am actually still getting into the habit of referring to women about my age or younger as women rather than girls. I still trip over it when other people use the words that way, though - I automatically think of 8-year-olds if it's not very clear who's being referred to.

I would really like someone to expand upon this:

Understanding and complying with ownership and beneficiary requirements of cryonics vendors is often confusing to insurance companies, and most insurance companies will consequently not allow the protocols required by cryonics vendors. Understanding and complying with your cryonics organization requirements is confusing and often simply will not be done by most insurance companies.

I only call green numbers probabilities.

2Cyan
What orifice do green numbers come out of?

I find the likelihood of someone eventually doing this successfully to be very scary. And more generally, the likelihood of natural selection continuing post-AGI, leading to more Hansonian/Malthusian futures.

Well, yes, I assumed that was the motivation. On the other hand, Thomas Donaldson. They actually went to court with him against California to support his "suicide". (They ended up losing. The court said it was a matter for the legislature.) And what I'm asking only amounts to figuring out the best way to avoid autopsy.

EDIT: Actually, Alcor probably wasn't involved directly in the case. I forget where I read that they were; I probably didn't read it. But anyway, the overall publicity from the case was positive for Alcor.

I would be very surprised if uploading was easier than AI

Do you mean "easier than AGI"? Why? With enough computing power, the hardest thing to do would probably be to supply the sensory inputs and do something useful with the motor outputs. With destructive uploading you don't even need nanotech. It doesn't seem like it requires any incredible new insights into the brain or intelligence in general.

3MichaelGR
If you want to learn more about WBE and the challenges ahead, this is probably the best place to start: Whole Brain Emulation: A Roadmap by Nick Bostrom and Anders Sandberg

Uploading is likely to require a lot of basic science, though not the depth of insight required for AGI. That same science will also make AGI much easier, while most progress towards AGI contributes less, though not nothing, to uploading.

With all the science done, there is still a HUGE engineering project. Engineering is done in near mode but is very easy to talk about in far mode. People hand-wave the details and assume that it's a matter of throwing money at a problem, but large, technically demanding engineering projects fail or are greatly delayed all the...

2AngryParsley
I think that's why Vassar is betting on AGI: it requires insight, but the rest of the necessary technology is already here. Uploading requires an engineering project involving advances in cryobiology, ultramicrotomes, scanning electron microscopes, and computer processors. There's no need for new insight, but the required technology advances are significant.