Mind uploading from the outside in

6 Alexandros 29 November 2015 02:05AM

Most discussion of uploading talks of uploading from the inside out: a biological person undergoes a disruptive procedure which digitises their mind. The digital mind then continues the person’s timeline as a digital existence, with all that entails.


What stands out here is the disruptive nature of the transition from biological to digital being. Not only is such a transformation a huge step to undergo, but few things in reality operate in such binary terms. More commonly, change happens gradually.


Being an entrepreneur with a keen interest in the future, I both respect audacious visions and study how they come to be realised. Very rarely does progress come from someone investing a pile of resources in a black-box process that ends in a world-changing breakthrough. Much more commonly, massive innovations are realised through a process of iteration and exploration, fuelled by a need that motivates people to solve thousands of problems, big and small. Massive trends interact with other innovations to open up opportunities that, when exploited, cause a further acceleration of innovation. Every successful startup and technology, from Facebook to Tesla and from mobile phones to modern medicine, can be understood in these terms.


With this lens in mind, how might uploading be realised? Here is one potential timeline, barring an AI explosion or existential catastrophe.


It is perhaps useful to explore the notion of being “above/below the API”. A slew of companies, often called “Uber for X” or “AirBnB for Y”, have formed to meet our needs through a computer system, such as a laptop or a mobile phone app. The app issues a call to a server via an API, and that server may delegate the task to some other system, often powered by other humans. The original issuer of the command then gets their need covered while minimising direct contact with other humans, the traditional way of having our needs met. It is crucial to understand that API-mediated interactions win because they are superior to their traditional alternatives. Once they were possible, it was only natural for them to proliferate. As an example, compare the experience of hailing a taxi with using Uber.


And so computer systems are inserted into human-to-human interactions. This post is composed on a computer, through which I will publish it in a digital location, where it might be seen by others. If I am to hear the responses to it, they too will be mediated by APIs. Whenever a successful new API is launched, fortunes are made and lost. An entire industry, venture capital, exists to fund efforts to bring new APIs into existence, each one making life easier for its users than what came before, and adding yet another layer of mediation.


As APIs flood interpersonal space, humans gain superpowers. Presence matters less and less, and a person anywhere in the connected world can communicate and effect change anywhere else. And with APIs comes control of personal space and time. Personal safety increases, both by decreasing random physical contact and by always being connected to others who can send help if something goes wrong. The demand for connectivity and computation is driving networking everywhere and pushing the cost of hardware through the floor.


Given the trends in motion, what’s next? Well, if computer-mediated experience is increasing, it might grow to the point where every interaction a human has with the world around them is mediated by computers. If this sounds absurd, think of noise-cancelling headphones. Many of us now use them not to listen to music, but to block out the sound of our environment. Or consider augmented reality. If the visual field, the data pipeline of the brain, can be used to provide critical, or entertaining, context about the physical environment, who would want to forgo it? Consider biofeedback: if it’s easy to know at all times what is happening within our bodies and prevent things from going wrong, who wouldn’t want to? It’s not a question of whether these needs exist, but of when technology will be able to cover them.


Once most interaction is API-mediated, the digital world switches from opt-in to opt-out. It’s no longer a matter of turning the laptop on, but of turning it off for a while, perhaps to enjoy a walk in nature, or for a repair. But wouldn’t you want to bring the augmented reality goggles that can tell you the story of each tree and ensure you’re not exposed to any pathogens as you wander in the biological jungle? As new generations grow up in a computer-mediated world, fewer and fewer excursions into the offline will happen. Technology, after all, is what was invented after you were born. Few of us consider hunting and gathering our food, or living in caves, a romantic return to the past. When we do take a step backward, perhaps to signal virtue, like forgoing vaccination or buying locally grown food, we make sure our move will not deprive us of the benefits of the modern world.


Somewhere around the time when APIs close the loop around us, or even before then, the human body will begin to be modified. Artificial limbs that are plainly superior to their biological counterparts, or better adapted to that world, will make sense, and brain-computer interfaces (whether direct or via the existing senses) will become ever more permanent. Once our bodies are replaced with mechanical parts, the brain will come next. Perhaps certain simple parts will be easy to replace with more durable, better-performing ones. Intelligence enhancement will finally be possible by adding processing power natural selection alone could never have evolved. Gradually, step by small step, the last critical biological components will be removed, a final cutting of the cord with the physical world.


Humans will have digitised themselves, not by inventing a machine that takes flesh as input and outputs ones and zeroes, and not by cyberpunk pioneers jumping into an empty digital world to populate it. We will have done it by making incremental choices, each one a sound rational decision that was in hindsight inevitable, incorporating inventions that made sense, and in the end it will be unclear when the critical step was taken. We will have uploaded ourselves simply in the course of everyday life.

All discussion post titles, points, and dates as an excel sheet

15 Alexandros 03 June 2014 02:38PM

You can find it here.

Earlier today I wanted to quantify whether LessWrong has stopped being a well-kept garden, so I wrote a scraper to produce the above dataset, so that anyone who wants to do the analysis can.

All data is as of a few minutes ago.

For programmers: You can see the source here. It's made to run on ScraperWiki, but it will time out after about 3,000 articles. At that point you need to set the initial value of the uri variable to the last URI printed. Repeating this process once more will let you reach the end. Have fun.
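The resume-and-continue workflow above can be sketched in miniature. This is an illustration only, not the actual ScraperWiki code: `fetch_page`, `scrape`, and the toy page data are hypothetical stand-ins for the real HTTP fetching and the real LessWrong pagination.

```python
def fetch_page(uri, pages):
    """Stand-in for an HTTP fetch: returns (items, next_uri) for one page."""
    return pages[uri]

def scrape(start_uri, pages, budget):
    """Collect items page by page until the time budget runs out.

    Returns (items, last_uri). A non-None last_uri means the run timed
    out; restart with start_uri=last_uri to continue where it stopped.
    """
    items, uri = [], start_uri
    while uri is not None:
        if budget <= 0:            # simulates the ScraperWiki timeout
            return items, uri      # report where to resume from
        page_items, next_uri = fetch_page(uri, pages)
        items.extend(page_items)
        budget -= 1
        uri = next_uri
    return items, None             # reached the end of the listing

# Three toy pages; a budget of 2 forces one resume, as described in the post.
pages = {
    "/p1": (["a", "b"], "/p2"),
    "/p2": (["c"], "/p3"),
    "/p3": (["d"], None),
}
first, resume_at = scrape("/p1", pages, budget=2)   # times out at /p3
rest, done = scrape(resume_at, pages, budget=2)     # finishes the job
print(first + rest)  # all items across both runs
```

The key design point is that the scraper's only state is the last URI printed, so a fresh run can pick up exactly where the previous one stopped.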


RapGenius + Sequences = ?

15 Alexandros 01 August 2013 06:04AM

I recently saw Ashton Kutcher's annotation of a speech by Steve Jobs on RapGenius. For those who haven't heard of it, RapGenius is a content annotation platform, where the "Rap" part is purely incidental.

The format seems quite interesting, so I wondered what the LessWrong community could do if allowed to annotate popular articles (or other texts like MIRI publications) in the same way.

To experiment, I created a RapGenius page for Tsuyoku Naritai and started with an annotation.

Feel free to add other annotations etc. and let's see if we can do something interesting with the medium.

Note: If Eliezer/LW/MIRI have an issue with the wholesale copying of the text, let me know and I will remove as much of it as I can. (RG doesn't allow removal of text that has been annotated unless the annotation is removed as well.)

Rationality & Startups - The Workshop

7 Alexandros 28 February 2012 11:25AM

I have been given the opportunity to prepare a workshop for the General Assembly team in London. General Assembly is geared towards educating entrepreneurs and aspiring entrepreneurs; it has been very successful in New York and is now expanding to London. The workshops are 90 minutes long, and usually gather anywhere from 15 to 35 people who have paid to attend.

While I considered doing something on concrete coding skills, I think by far the superior alternative (for myself and the audience) is a crash course on cognitive bias as it relates to startups, maybe with some other rationality topics thrown in in a similar context. I am fairly confident that startups are an excellent testing ground for extreme rationality, as they require exceptionally quick assimilation of new skills and knowledge, and demand rapid decisions with incomplete information.


So, as part of the brainstorming for this, here are my questions for you:

1. Do you think educating startup founders on cognitive bias/rationality will help them improve their outcomes?

2. Which biases would especially affect startups? Which of these can be mitigated (either by knowing about them or by utilising explicit strategies)?

3. What is a good way to use 90 minutes to get this information across?

4. What prior material exists to introduce rationality in a fast-paced manner? What prior material exists that relates startups to rationality?

5. Other relevant thoughts are welcome.


Should I go ahead with this, I will of course make the deck available for any others who may want to do similar presentations elsewhere.

Meetup : London This Sunday

2 Alexandros 14 October 2011 08:48AM

Discussion article for the meetup : London This Sunday

WHEN: 16 October 2011 02:00:00PM (+0100)

WHERE: Africa House/64-68 Kingsway, London, WC2B 6BG

We're meeting up in London this weekend: Sunday 16th October, at 2pm, in the Shakespeare's Head on Kingsway near Holborn Tube station. We're usually easy to spot, and we occasionally have a large paperclip drawing/printout somewhere on the table.


[LINK] Robin Hanson on Carl Shulman's recent paper on Whole Brain Emulation

10 Alexandros 05 October 2011 07:51AM

Shulman on Superorgs

Best to read the link first and my comments later.

I have very little to comment on the topic itself, but I do find it odd that Robin takes such a confrontational stance, beginning with the first sentence, "It has come to my attention that some think that by now I should have commented on Carl Shulman's em paper", and culminating in a harsh analysis not only of Carl's conclusions, but of what (Robin believes) made him want to reach those conclusions, as well as of SIAI's mission statement in general. There is negative framing, such as "obsession with making a god to rule us all (well)", that I wouldn't expect from someone trying to honestly represent the other side. It's not that I don't share some of those concerns, but to psychoanalyse the person you seem to have identified as your opponent, in an obvious effort to discredit them, is at the very least unfair. I was generally aware that there was some kind of tension between the former dynamic duo of Hanson and Yudkowsky, but it seems to have become full-blown hostility.

Robin does seem to find the courage to say he's glad others are looking into emulations, but the overall vibe I get is of someone protective of a research field they believe they uniquely 'get', someone who feels others should just get in line or get out of the ring. It's a vibe not uncommon in academia.

Decision Fatigue, Rationality, and Akrasia.

17 Alexandros 19 September 2011 03:37PM

I was reading the NY Times article on Decision Fatigue, when I came upon a hypothesis I would like everyone's feedback on.

I take as a premise that there seems to be a high prevalence of akrasia in the LessWrong community.

I also take as a premise that the sequences give us a more detailed model of the world than usual, one that presents us with more possible trade-offs we could be making in everyday life.

So the conjecture is that, by trying to reduce bias and perform a lot of cognitive calculation, we effectively spend large parts of our days in a decision-fatigued state, leading to akrasia problems.

Does this sound (un)reasonable? Why? How would you go about turning this into a testable proposition?

UPDATE: Anna Salamon has put up a detailed poll here that may shed some light on the situation. Please take some time to fill it in.

Meetup : London Science Museum, Aug. 31

1 Alexandros 26 August 2011 12:54PM

Discussion article for the meetup : London Science Museum, Aug. 31

WHEN: 31 August 2011 06:30:00PM (+0100)

WHERE: Exhibition Road, London SW7 2DD

We have decided to try different meetup formats, and the next 'Science Museum Lates' event for adults, on Wednesday 31st of August, looks like just the thing.

More information about the event: http://www.sciencemuseum.org.uk/visitmuseum/events/events_for_adults/Lates.aspx

UPDATE: The meeting time & place are under discussion; this post will be updated as soon as they are decided.


London meetup, Sunday 2011-08-21 14:00, near Holborn

0 Alexandros 20 August 2011 06:00PM

We're meeting up in London tomorrow: Sunday 21st August, at 2pm, in the Shakespeare's Head (official page) on Kingsway near Holborn Tube station. See you there!

Meetup : Two-monthly London Meetup

1 Alexandros 29 June 2011 07:16AM

Discussion article for the meetup : Two-monthly London Meetup

WHEN: 03 July 2011 02:00:00PM (+0100)

WHERE: 64-68 Kingsway, Holborn, London WC2B 6BG

After two months of smaller fortnightly meetups, the time has come for another two-monthly London meetup! It will be on Sunday July 3 at 14:00, at the usual location: the Shakespeare's Head on Kingsway near Holborn Tube station. (Note that there's more than one pub in London with that name, so make sure you get the right one.) As always, we'll have a big picture of a paperclip on the table so you can find us. Hope to see lots of you there!

