Just another day in utopia

78 Stuart_Armstrong 25 December 2011 09:37AM

(Reposted from Discussion at a commenter's suggestion)

Thinking of Eliezer's Fun Theory and the challenge of creating actual utopias where people would like to live, I tried to write a light utopian story for my friends around Christmas, and thought it might be worth sharing. It's a techno-utopia, but (considering my audience) it sits only a short inferential distance from normality.

Just another day in Utopia

Ishtar went to sleep in the arms of her lover Ted, and awoke locked in a safe, in a cargo hold of a triplane spiralling towards a collision with the reconstructed temple of Solomon.

Again! Sometimes she wished that a whole week would go by without something like that happening. But then, she had chosen a high excitement existence (not maximal excitement, of course – that was for complete masochists), so she couldn’t complain. She closed her eyes for a moment and let the thrill and the adrenaline warp her limbs and mind, until she felt transformed, yet again, into a demi-goddess of adventure. Drugs couldn’t have that effect on her, she knew; only real danger and challenge could do that.


Hacking on LessWrong Just Got Easier

57 atucker 04 September 2011 06:59AM

TrikeApps has done a great job running LessWrong and adding new features, but they could use a little help. Have you thought about improving the LessWrong website but haven't done it because you weren't sure how? Or had installation issues? Well, now is a great time to start, because hacking on LessWrong just got much easier!

On behalf of the LessWrong Public Goods Team, I have built a virtual machine image that hosts its own instance of the LessWrong website. This eliminates the need to figure out how to host LessWrong yourself. To hack on LessWrong, you simply:

  1. Install VirtualBox
  2. Download and use the VM image
  3. Edit LessWrong's code 
  4. Test
  5. Submit pull request

Detailed instructions and download link here.

Interested, but not sure what to work on? The LessWrong issue tracker is here. Run into trouble with the code? Ask questions on the dev list.

Many thanks to Matt, Jon and David at TrikeApps for helping me do this, and John Salvatier for initiating this project.

Cognitive Load from simple computations

12 NancyLebovitz 14 July 2011 02:35PM

From Ken Burnside, a game designer (a toy sketch illustrating two of his points follows the quote):

Counting is easier than addition.

Addition is easier than subtraction.

Subtraction can be done, but combining addition and subtraction in the same game mechanic (“I have +2 for flanking, and -2 for lighting…”) is surprisingly fiddly and will slow down play considerably.


BIG GAP HERE


Subtraction and addition in the same operation is easier than most forms of multiplication. For example, it’s better to express a critical hit as “+100% damage” rather than “x2” damage. Not only does it avoid edge cases (where you have two critical hit multipliers that both apply), it’s also faster at the gaming table, because it’s addition.

Multiplication by 10, 5, 4, and 2 is lightweight enough to be usable. If you’re using a multiplicative operator other than one of those four numbers, you need to change something earlier in the process. When in doubt, make it addition.


BIGGER GAP HERE


Division is only tolerable when A) you’re dividing an integer and expect an integer outcome, or are rounding to an integer outcome, and B) the divisor is 2 or 10. I specifically built some parts of my games around modulus division to avoid these problems.


GINORMOUS GAP HERE


Square roots, cube roots, and exponentiation are probably not game-friendly for most people. When in doubt, transform your equations to use one of the first two items on this list.
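
A toy illustration of two of Ken's points, in Python. All the numbers here are hypothetical, not taken from his games: additive crits keep the table doing pure addition, while a "x2" crit forces a ruling on stacking; and division stays tolerable when it is integer division by 2 or 10, with divmod() as one way a "modulus division" mechanic might work.

    base = 10    # hypothetical base damage

    # Additive crits: "+100% damage" just means adding the base again.
    two_crits_additive = base + base + base   # 30 -- pure addition, no ruling needed

    # Multiplicative crits: "x2" hits the edge case -- do two crits stack?
    two_crits_stacked = base * 2 * 2          # 40 under one ruling
    two_crits_overlap = base * 2              # 20 under the other

    damage = 17  # hypothetical raw damage roll

    half_down = damage // 2                   # 8 -- integer division by 2
    half_up = (damage + 1) // 2               # 9 -- the same, rounding up
    tens = damage // 10                       # 1 -- dividing by 10 drops a digit

    # divmod() yields quotient and remainder together -- one way a "modulus
    # division" mechanic might split damage into full boxes of 10 plus spillover.
    boxes, spillover = divmod(damage, 10)     # (1, 7), no fractions anywhere

    print(two_crits_additive, two_crits_stacked, two_crits_overlap)  # 30 40 20
    print(half_down, half_up, tens, boxes, spillover)                # 8 9 1 1 7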

My impression is that Ken's games are geeky, so it's reasonable to assume his players are more adept with arithmetic than most people. I'm posting this because it might be worth knowing for anyone who's trying to explain something that involves numbers.

Is it possible to prevent the torture of ems?

12 NancyLebovitz 29 June 2011 07:42AM

When I was reading The Seven Biggest Dick Moves in the History of Gaming, I was struck by the number of people who are strongly motivated to cause misery to others [1], apparently for its own sake. I think the default assumption here is that the primary risk to ems is from errors in programming an AI, but cruelty from other ems, from silicon minds closely based on humans but not ems (is there a convenient term for this?), and from just plain organic humans strikes me as extremely likely.

We're talking about a species where a significant number of people feel better when they torture Sims. I don't think torturing Sims is of any moral importance, but it serves as an indicator of what people like to do. I also wonder how good a simulation has to be before torturing it does matter.

I find it hard to imagine a system in which uploading people is easy but security is so good that torturing copies isn't feasible, but maybe I'm missing something.

[1] The article was also very funny. I point this out only because I feel a possibly excessive need to reassure readers that I have normal reactions.

What we're losing

52 PhilGoetz 15 May 2011 03:34AM

More and more, LessWrong's posts are meta-rationality posts: how to be rational, how to avoid akrasia, all in general terms, without any specific application. This is probably the intended purpose of the site. But they're starting to bore me.

What drew me to LessWrong is that it's a place where I can put rationality into practice, discussing specific questions of philosophy, value, and possible futures, with the goal of finding a good path through the Singularity.  Many of these topics have no other place where rational discussion of them is possible, online or off.  Such applied topics have almost all moved to Discussion now, and may be declining in frequency.

This isn't entirely new.  Applied discussions have always suffered bad karma on LW (statistically; please do not respond with anecdotal data).  I thought this was because people downvote a post if they find anything in it that they disagree with.  But perhaps a lot of people would rather talk about rationality than use it.

Does anyone else have this perception?  Or am I just becoming a LW old geezer?

At the same time, LW is taking off in terms of meetups and number of posts. Is it finding its true self? Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty (I'm even beginning to miss those!)? Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?

(ADDED: Some rationality posts are good.  I am also a lukeprog fan.)

Functioning Synapse Created Using Carbon Nanotubes [link]

2 Dreaded_Anomaly 23 April 2011 11:10PM

Functioning Synapse Created Using Carbon Nanotubes: Devices Might Be Used in Brain Prostheses or Synthetic Brains (article @ ScienceDaily)

Engineering researchers at the University of Southern California have made a significant breakthrough in the use of nanotechnologies for the construction of a synthetic brain. They have built a carbon nanotube synapse circuit whose behavior in tests reproduces the function of a neuron, the building block of the brain.

A very promising development for both human and artificial intelligence research.

Ray Kurzweil on The Colbert Report [video embed]

5 Kevin 17 April 2011 10:32AM

Two questions about CEV that worry me

29 cousin_it 23 December 2010 03:58PM

Taken from some old comments of mine that never did get a satisfactory answer.

1) One of the justifications for CEV was that extrapolating from an American in the 21st century and from Archimedes of Syracuse should give similar results. This seems to assume that change in human values over time is mostly "progress" rather than drift. Do we have any evidence for that, except saying that our modern values are "good" according to themselves, so whatever historical process led to them must have been "progress"?

2) How can anyone sincerely want to build an AI that fulfills anything except their own current, personal volition? If Eliezer wants the AI to look at humanity and infer its best wishes for the future, why can't he task it with looking at himself and inferring his best idea of how to fulfill humanity's wishes? Why must this particular thing be spelled out in a document like CEV and not left to the mysterious magic of "intelligence", and what other such things are there?

Hacking the CEV for Fun and Profit

52 Wei_Dai 03 June 2010 08:30PM

It’s the year 2045, and Dr. Evil and the Singularity Institute have been in a long and grueling race to be the first to achieve machine intelligence, thereby controlling the course of the Singularity and the fate of the universe. Unfortunately for Dr. Evil, SIAI is ahead in the game. Its Friendly AI is undergoing final testing, and Coherent Extrapolated Volition is scheduled to begin in a week. Dr. Evil learns of this news, but there’s not much he can do, or so it seems.  He has succeeded in developing brain scanning and emulation technology, but the emulation speed is still way too slow to be competitive.

There is no way to catch up with SIAI's superior technology in time, but Dr. Evil suddenly realizes that maybe he doesn’t have to. CEV is supposed to give equal weighting to all of humanity, and surely uploads count as human. If he had enough storage space, he could simply upload himself, and then make a trillion copies of the upload. The rest of humanity would end up with less than 1% weight in CEV. Not perfect, but he could live with that. Unfortunately he only has enough storage for a few hundred uploads. What to do…
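
As a quick check of that "less than 1%" figure (the 8 billion population number below is my assumption; the story doesn't give one):

    copies = 10**12          # Dr. Evil's trillion uploads
    humanity = 8 * 10**9     # assumed biological population in 2045

    weight = humanity / (copies + humanity)
    print(f"{weight:.2%}")   # ~0.79% -- under 1%, as claimed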

Ah ha, compression! A trillion identical copies of an object would compress down to only a little more than one copy. But would CEV count compressed identical copies as separate individuals? Maybe, maybe not. To be sure, Dr. Evil gives each copy a unique experience before adding it to the giant compressed archive. Since they still share almost all of the same information, a trillion copies, after compression, just manage to fit inside the available space.
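
The trick is easy to demonstrate at toy scale. A minimal sketch (the lzma choice and the sizes are mine; real uploads would be vastly larger, but the principle is the same): copies that share a large random core and differ only in a small unique "experience" compress to barely more than a single copy.

    import lzma
    import os

    core = os.urandom(10_000)    # one "upload": random bytes, incompressible alone
    n_copies = 1_000

    # Give each copy a unique 8-byte "experience" so no two are identical.
    archive = b"".join(core + os.urandom(8) for _ in range(n_copies))

    one = len(lzma.compress(core))
    many = len(lzma.compress(archive))
    print(one, many)   # ~10 KB vs. a few tens of KB -- nowhere near 1,000x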

Now Dr. Evil sits back and relaxes. Come next week, the Singularity Institute and the rest of humanity are in for a rather rude surprise!

Suspended Animation Inc. accused of incompetence

38 CronoDAS 18 November 2010 12:20AM

I recently found something that may be of concern to some of the readers here.

On her blog, Melody Maxim, a former employee of Suspended Animation (a provider of "standby services" for Cryonics Institute customers), describes several examples of gross incompetence in providing those services: spending large amounts of money on designing and manufacturing novel perfusion equipment when cheaper, more effective devices that could be adapted to the purpose already existed; hiring laymen to perform difficult medical procedures, which they then botched; and even finding themselves unable to get their equipment loaded onto a plane because it exceeded the weight limit.

An excerpt from one of her posts, "Why I Believe Cryonics Should Be Regulated":

It is no longer possible for me to believe what I witnessed was an isolated bit of corruption, and the picture gets bigger, by the year...

For forty years, cryonics "research" has primarily consisted of laymen attempting to build equipment that already exists, and laymen trying to train other laymen how to perform the tasks of paramedics, perfusionists, and vascular surgeons...much of this time with the benefactors having ample funding to provide the real thing, in regard to both equipment and personnel. Organizations such as Alcor and Suspended Animation, which want to charge $60,000 to $150,000, (not to mention other extra charges, or years worth of membership dues), are not capable of preserving brains and/or bodies in a condition likely to be viable in the future. People associated with these companies, have been known to encourage people, not only to leave hefty life insurance policies with their organizations listed as the beneficiaries, to pay for these amateur surgical procedures, but to leave their estates and irrevocable trusts to cryonics organizations.

...

Again, I have no problem with people receiving their last wishes. If people want to be cryopreserved, I think they should have that right. BUT...companies should not be allowed to deceive people who wish to be cryopreserved. They should not be allowed to publish photos of what looks like medical professionals performing surgery, but in actuality, is a group of laymen playing doctor with a dead body...people whose incompetency will result in their clients being left warm (and decaying), for many hours while they struggle to perform a vascular cannulation, or people whose brains will be underperfused or turned to mush, by laymen who have no idea how to properly and safely operate a perfusion circuit. Cryonics companies should not be allowed to refer to laymen as "Chief Surgeon," "Surgeon," "Perfusionist," when these people hold no medical credentials.
