Comment author: Lumifer 21 September 2016 08:47:26PM 0 points [-]

open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

Not to mention that you're assuming that "open-source" and "closed-source" concepts will still make sense in that high-tech future. As an example, let's say I give you a trained neural net. It's entirely open source, you can examine all the nodes, all the weights, all the code, everything. But I won't tell you how I trained that NN. Are you going to trust it?

Comment author: DataPacRat 21 September 2016 11:20:51PM 0 points [-]

On the basis of this "tends" you make a rather drastic request to NOT revive you if you'll be running on top of some closed-source layer.

That's true. But given the various reasonably-possible scenarios I can think of, making this extreme of a request seems to be the only way to express the strength of my concern. I'll admit it's not a common worry; of course, this isn't a common sort of document.

(If you want to know more about what leads me to this conclusion, you could do worse than to Google one of Cory Doctorow's talks or essays on 'the war on general-purpose computation'.)

As an example

You provide insufficient data about your scenario for me to make a decent reply. That is why I included the general reasoning process leading to my requests about open- and closed-source software - and, in the latest version of the doc, have mentioned that part of the reason for going into that detail is to let revivalists have some data from which to extrapolate what my choices would be in unknown scenarios. (In this particular case, the whole point of differentiating between open- and closed-source software is the factor of /trust/ - and in your scenario, you don't give any information on how trustworthy such NNs have been at performing their intended functions properly and at avoiding being subverted.)

Comment author: Lumifer 21 September 2016 04:38:04PM 1 point [-]

One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection.

That's a claim often made ("Given enough eyeballs, all bugs are shallow") but it's not so clear-cut in practice. In real life a lot of open-source projects are very buggy and remain very buggy (and open to 'sploits) for a very long time. At the same time there is closed-source software which is considerably more bug-free (but very expensive) -- e.g. the code in fly-by-wire airplanes.

Besides, physical control, generally speaking, trumps all. If your mind is running on top of, say, open-source Ubuntu 179.5 Zooming Zazzle but I have access to your computing substrate, that is, the physical machine which runs the code, the fact that the machine runs an open-source OS is quite irrelevant. You're looking for impossible guarantees.

And remember that you are not making choices, but requests. You can't "trust the motives" or not -- if someone revives you with malicious intent, he can ignore your requests easily enough.

Comment author: DataPacRat 21 September 2016 04:46:44PM 0 points [-]

a lot of open-source projects are very buggy and remain very buggy

Yep.

there is closed-source software which is considerably more bug-free

Yep.

You're looking for impossible guarantees.

I'm not looking for guarantees at all. (Put another way, I'm well aware that 0 and 1 are not probabilities.) What I am doing is trying to gauge the odds; and given my own real-world experience, open-source software /tends/ to have fewer, less severe, and shorter-lasting exploitable bugs than closed-source software, to the extent that I'm willing to make an important choice based on whether or not a piece of software is open-source.

And remember that you are not making choices, but requests.

True, as far as it goes. However, this document I'm writing is also something of a letter to anyone who is considering reviving me, and given how history goes, they are very likely going to have to take into account factors that I currently can't even conceive of. Thus, I'm writing this doc in a fashion that not only lists my specific requests in regards to particular items, but also describes the reasoning behind the requests, so that the prospective reviver has a better chance of being able to extrapolate what my preferences about the unknown factors would likely be.

if someone revives you with malicious intent

If someone revives me with malicious intent, then all bets are off, and this document will nigh-certainly do me no good at all. So I'm focusing my attention on scenarios involving at least some measure of non-malicious intent.

Comment author: DataPacRat 20 September 2016 05:28:36PM 3 points [-]
Comment author: DataPacRat 21 September 2016 04:10:03PM 1 point [-]

Today's version: https://www.datapacrat.com/temp/Cryo Revival Preferences - draft 0.1.3.txt

The change: Added new paragraph:

There is no such thing as being able to have 100% certainty that a piece of software is without flaws or errors. One of the few methods for detecting a large proportion of any program's bugs is to allow many people, with all their varied perspectives and skills, to examine it, by proclaiming that the program is free and open source and releasing both the source code and binaries for inspection. Without that strategy, not only are bugs much more likely to remain, but when someone does manage to find a bug, it is likely to remain secret and uncorrected. Such uncorrected bugs can be used by unscrupulous people to do just about anything to any data stored on a computer. This is bad enough when that data is merely personal email, or even a bank's financial records; when the data is a sapient mind, the possibilities are horrifying. Given the possible downsides, I find it difficult to trust the motives of anyone who wishes to run an uploaded mind on a computer that uses closed-source software. Therefore, if there is a choice between uploading my mind using uninspectable, closed-source software, and not being revived, I would choose not to be uploaded in that fashion, even if doing so increases the risk of never being revived at all. If the choice is instead between not being revived and uploading my mind using closed-source software that the uploaded mind can inspect, and that software includes all the documentation necessary for the uploaded mind to learn how to understand it, then I would reluctantly agree to the uploading procedure as being preferable to risking never being revived at all.

Comment author: DataPacRat 19 September 2016 06:35:24PM 10 points [-]

As a cryonicist, I'm drafting out a text describing my revival preferences and requests, to be stored along with my other paperwork. (Oddly enough, this isn't a standard practice.) The current draft is here. I'm currently seeking suggestions for improvement, and a lot of the people around here seem to have good heads on their shoulders, so I thought I'd ask for comments here. Any thoughts?

Comment author: scarcegreengrass 19 September 2016 09:22:51PM 2 points [-]

Interesting idea! I guess you could add a 'when in doubt' for whether you'd rather be revived in an early period (e.g., if resurrection is possible with an 80% success rate) or be downprioritized until resurrection is very mature and safe.

Comment author: DataPacRat 20 September 2016 12:04:17AM 1 point [-]

It shouldn't be too hard to add some quantitative numbers, or at least which numbers I'd like potential revivers to consider.

Comment author: turchin 19 September 2016 11:28:26PM 2 points [-]

I would add lines about whether you would prefer to be revived together with your friends and family members, before them, or after.

Maybe I would add a secret question to check whether you were restored properly.

I would also add all my digital immortality back-up information, which could be used to fill gaps in case some information is lost.

I also expect that revival may happen maybe 20-30 years after my death, so I should add some kind of will about how to manage my property during my absence.

Comment author: DataPacRat 20 September 2016 12:02:45AM 0 points [-]

I'm afraid that none of my friends or family are interested in cryo.

I already created one recognition protocol, but it's more for multiple copies of myself meeting. I suppose it would be easy enough to include an MD5 hash of a keyphrase in this doc.
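
For reference, generating such a hash is a one-liner. A minimal sketch in Python - the keyphrase here is obviously a placeholder, not my real one:

```python
import hashlib

# Placeholder keyphrase -- the real phrase would be kept private,
# with only the resulting hash published in the document.
keyphrase = "correct horse battery staple"

digest = hashlib.md5(keyphrase.encode("utf-8")).hexdigest()
print(digest)  # 32 hex characters serving as a fingerprint of the phrase
```

MD5 is long broken for collision resistance, but for a simple "do you know the phrase" check between copies of the same person it's probably adequate; swapping in hashlib.sha256 costs nothing if stronger assurance is wanted.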

I already have provisions in place for my other data, which will end up in that "perpetual storage drawer" I mentioned.

Preserving assets while I'm dead is an entirely different kettle of fish, and assumes that I will have any worth preserving, which, given my financial situation, I don't expect to be the case.

Comment author: turchin 19 September 2016 11:23:14PM 5 points [-]

These two lines seem to me contradictory. It is not clear to me whether I should upload you or preserve your brain.

  • I don't understand how the cells of the brain produce qualia and consciousness, and have a certain concern that an attempt at uploading my mind into digital form may lose important parts of my self. If you haven't solved those fundamental problems of how brains produce minds, I would prefer to be revived as a biological, living being, rather than have my mind uploaded into software form.

  • I understand that all choices contain risk. However, I believe that the "information" theory of identity is a more useful guide than theories of identity which tie selfhood to a physical brain. I also suspect that there will be certain advantages to being one of the first minds turned into software, and certain disadvantages. In order to try to gain those advantages, and minimize those disadvantages, I am willing to volunteer to let my cryonically-preserved brain be used for experimental mind-uploading procedures, provided that certain preconditions are met, including:

Comment author: DataPacRat 19 September 2016 11:55:42PM 1 point [-]

The intended meaning, which it seems I will need to rephrase to clarify: "If you are experimenting with uploading, and can meet these minimal common-sense standards, then I'm willing to volunteer ahead of time to be your guinea pig. If you can't meet them, then I'd rather stay frozen a little longer. Just FYI."

Comment author: WhySpace 16 September 2016 12:36:12AM *  0 points [-]

Depends how much storage space you are willing to buy.

One of my fantasies is a Raspberry Pi that automatically downloads all Wikipedia updates each month or so, to keep a local copy. The ultimate version of this would do the same for every new academic article available on Sci-Hub.
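
As a sketch of what the automatic-download half might look like - the dump URL pattern is Wikimedia's, but the username, paths, and schedule here are assumptions:

```shell
# /etc/cron.d/wikipedia-mirror -- hypothetical monthly refresh on the Pi.
# Runs at 03:00 on the 5th of each month, resuming partial downloads (-c).
0 3 5 * * pi wget -c -P /mnt/archive/wikipedia \
    https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2
```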

Sci-Hub is the largest collection of scientific papers on the planet, with over 58 million academic papers. If they average 100 kB apiece, that's only 5.8 TB. If they average 1 MB each, then you would need to shell out some decent cash, but you could in theory download all available academic papers.
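
The arithmetic checks out either way; a quick sanity check:

```python
papers = 58_000_000

low = papers * 100 * 1000      # at 100 kB apiece
high = papers * 1_000_000      # at 1 MB apiece

print(low / 1e12, "TB")        # 5.8 TB
print(high / 1e12, "TB")       # 58.0 TB
```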

Someone may even have already done something like this, and put the script on GitHub or somewhere. (I haven't looked.)

(Also, nice username. :) )

EDIT: It turns out there's a custom built app for downloading and viewing Wikipedia in various languages. It's available on PCs, Android phones, and there's already a version made specially for the Pi: http://xowa.org/home/wiki/Help/Download_XOWA.html

I wonder how difficult it would be to translate all of Sci-Hub into a wiki format that the app could add and read. You'd probably have to modify the app slightly, in order to divide up all the Sci-Hub articles among multiple hard drives. It might make the in-app search feature take forever, for instance. And obviously it wouldn't work for the Android app, since there's not enough space on a MicroSD card. (Although, maybe a smaller version could be made, containing only the top 32GB of journal articles with the most citations, plus all review articles.)

Even just converting science into a Wikipedia-like format would be useful for the sake of open access. Imagine if all citations in a paper were a hyperlink away, and the abstract would display if you hovered your mouse over the link. (The XOWA app does this for Wikipedia links.)

Comment author: DataPacRat 16 September 2016 04:52:31PM 0 points [-]

For Wikipedia, I've been reasonably satisfied with Kiwix for software, and their updated-every-month-or-three copies of Wikipedia, and the related Wikimedia foundation sites, at http://wiki.kiwix.org/wiki/Content_in_all_languages .

If they average 1MB each, then you would need to shell out some decent cash

Unfortunately, I don't have "decent cash" to shell out. I've seen some setups at /r/DataHoarder that I would be extremely happy to ever own, but don't expect to until typical HDs are an order of magnitude or two bigger than today's. By which time I expect people will have come up with brand-new forms of data to fill the things with. :)

(Also, nice username. :) )

It's not just a nom-de-net, it's a way of life. :)

Comment author: Houshalter 14 September 2016 03:04:16PM 0 points [-]

Ah, data hoarding. This is a subject that interests me for multiple reasons. I think preserving humanity's knowledge is important to start with. But I also like to have local copies of things in case of emergency or just a regular internet outage.

You mentioned Wikipedia. I found it takes a long time to download, and viewing it is difficult.

I am working on a scraper for LessWrong. I already downloaded the HTML of every post, but I need to parse it into a machine-readable format, and then I will publish it as a torrent.
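
For the parsing step, something like Python's standard-library HTMLParser may suffice. A minimal sketch - the "post-body" class name and the output shape are placeholders, since LessWrong's actual markup will differ:

```python
import json
from html.parser import HTMLParser

class PostExtractor(HTMLParser):
    """Collect the text inside <div class="post-body"> (hypothetical class name)."""
    def __init__(self):
        super().__init__()
        self.in_post = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "post-body") in attrs:
            self.in_post = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_post = False

    def handle_data(self, data):
        if self.in_post:
            self.chunks.append(data)

# Toy input standing in for one scraped page.
html = '<div class="post-body">Hello <b>world</b></div>'
parser = PostExtractor()
parser.feed(html)
record = json.dumps({"body": "".join(parser.chunks)})
print(record)  # {"body": "Hello world"}
```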

All reddit comments ever are available. I don't really know what the utility of this is, I'm mostly interested in this stuff for machine learning. But I have found that reddit comments are fantastic for answering questions that wikipedia might not be able to answer, not to mention multiple lifetimes of reading material. I once had an IRC bot that would answer questions by searching askreddit, and it was fairly effective for many types of questions. Similarly it might be worth scraping other social media sites such as hacker news.

I found a torrent for "reddit's favorite books" which contains hundreds of books people recommended on reddit. It may be worth downloading, say, all books that have ever appeared on a best-sellers list. But one would need to have such a list, and to know how to scrape libgen, which I haven't looked into yet.

Various textbooks are available through torrent sites or Library Genesis. These contain knowledge in a format better than Wikipedia's, I think. Also scientific papers.

The problem with this is that many books, and especially papers and textbooks, are distributed in weird formats like PDF or even PostScript. These formats are awful and don't compress well.

The fantastic thing about text data is that it's so small, compared to images or video. And it compresses super well. You can store multiple libraries worth of text in a cheapish hard drive.
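
That compressibility is easy to demonstrate with the standard library; the sample text here just stands in for any natural-language corpus:

```python
import zlib

# Any large body of prose compresses well; repetitive text like this
# sample compresses dramatically.
text = ("It is a truth universally acknowledged, that a single man "
        "in possession of a good fortune, must be in want of a wife. ") * 100

raw = text.encode("utf-8")
packed = zlib.compress(raw, 9)
print(len(raw), "->", len(packed), "bytes")
```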

But PDFs store tons of data as overhead. Just converting them to text might be possible, but that fails terribly on math or anything that isn't English text, especially graphs, which I think are important. OCR has tons of errors. I'd love to someday have a local archive of all of humanity's knowledge, with almost every book and paper ever published, but it would require solving this problem.

Then perhaps it would be possible to store the data on nickel plates that will last up to 10,000 years. One website is doing that to all of their data. Which is crazy because it's mostly images too. There is no information on the total storage space, but they do say "Ten thousand standard letter-sized sheets of text or more could fit onto a 2.2-inch diameter nickel plate", which seems like a lot.
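
A rough back-of-the-envelope on that figure - the words-per-sheet and bytes-per-word numbers below are my own assumptions, not from the site:

```python
# "Ten thousand standard letter-sized sheets of text" per 2.2-inch plate.
sheets = 10_000
words_per_sheet = 500   # assumed: typical single-spaced page
bytes_per_word = 6      # assumed: ~5 letters plus a space, in ASCII

total_bytes = sheets * words_per_sheet * bytes_per_word
print(total_bytes / 1e6, "MB")  # 30.0 MB of raw text per plate
```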

Comment author: DataPacRat 14 September 2016 05:58:30PM *  0 points [-]

I am working on a scraper for lesswrong. I already downloaded all the html of every post, but I need to parse it into a machine readable format, and then I will publish it as a torrent.

I think that'll be worth at least a Discussion post when you publish it, for those of us who don't keep track of every comment. :)

(Will you be including OvercomingBias?)

But I also like to have local copies of things in case of emergency or just a regular internet outage.

I've found a torrent of public-domain "survival books" of which at least some may interest you; unfortunately, LW doesn't seem to want to let me embed the magnet URL, so I'll try just pasting it: magnet:?xt=urn:btih:57963b66246379aa3c10d84a5de92c0ab5173faf&dn=SurvivalLibrary&tr=http%3a%2f%2ftracker.tfile.me%3a80%2fannounce&tr=http%3a%2f%2fpow7.com%3a80%2fannounce&tr=http%3a%2f%2ftracker.pow7.com%2fannounce&tr=http%3a%2f%2ftorrent.gresille.org%3a80%2fannounce&tr=http%3a%2f%2fp4p.arenabg.ch%3a1337%2fannounce&tr=http%3a%2f%2fretracker.krs-ix.ru%2fannounce&tr=http%3a%2f%2fmgtracker.org%3a2710%2fannounce&tr=http%3a%2f%2ftracker.dutchtracking.nl%3a80%2fannounce&tr=http%3a%2f%2fshare.camoe.cn%3a8080%2fannounce&tr=http%3a%2f%2ftracker.dutchtracking.com%3a80%2fannounce&tr=http%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=http%3a%2f%2ftorrent.gresille.org%2fannounce&tr=http%3a%2f%2fretracker.krs-ix.ru%3a80%2fannounce&tr=http%3a%2f%2ft1.pow7.com%2fannounce&tr=http%3a%2f%2fpow7.com%2fannounce&tr=http%3a%2f%2fsecure.pow7.com%2fannounce&tr=http%3a%2f%2ftracker.tfile.me%2fannounce&tr=http%3a%2f%2fatrack.pow7.com%3a80%2fannounce&tr=http%3a%2f%2fextremlymtorrents.me%2fannounce.php&tr=http%3a%2f%2finferno.demonoid.me%3a3414%2fannounce&tr=http%3a%2f%2ftorrentsmd.com%3a8080%2fannounce&tr=udp%3a%2f%2fopen.facedatabg.net%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337&tr=udp%3a%2f%2fthetracker.org%3a80&tr=udp%3a%2f%2f9.rarbg.to%3a2710&tr=udp%3a%2f%2f9.rarbg.me%3a2710%2fannounce&tr=udp%3a%2f%2f9.rarbg.to%3a2710%2fannounce&tr=udp%3a%2f%2f9.rarbg.me%3a2710&tr=udp%3a%2f%2fopen.facedatabg.net%3a6969&tr=udp%3a%2f%2ftracker.ex.ua%3a80%2fannounce&tr=udp%3a%2f%2finferno.demonoid.com%3a3411%2fannounce&tr=udp%3a%2f%2finferno.demonoid.ph%3a3389%2fannounce&tr=udp%3a%2f%2f9.rarbg.com%3a2710%2fannounce&tr=udp%3a%2f%2ftracker.leechers-paradise.org%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.coppersurfer.tk%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.ilibr.org%3a6969%2fannounce&tr=udp%3a%2f%2fzer0day.ch%3a1337%2fannounce&tr=udp%3a%2f%2fwww.eddie4.nl%3a6969%2fannounce&tr=udp%3a%2f%2ftorrent.gresille.org%3a80%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.ch%3a1337%2fannounce&tr=udp%3a%2f%2fp4p.arenabg.com%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.leechers-paradise.org%3a6969&tr=udp%3a%2f%2ftracker.kicks-ass.net%3a80%2fannounce&tr=udp%3a%2f%2ftracker.tiny-vps.com%3a6969%2fannounce&tr=udp%3a%2f%2f91.218.230.81%3a6969%2fannounce&tr=udp%3a%2f%2f168.235.67.63%3a6969%2fannounce&tr=udp%3a%2f%2fexplodie.org%3a6969%2fannounce&tr=udp%3a%2f%2feddie4.nl%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.coppersurfer.tk%3a6969&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337%2fannounce&tr=udp%3a%2f%2ftracker.aletorrenty.pl%3a2710%2fannounce&tr=http%3a%2f%2ftracker.dler.org%3a6969%2fannounce
