Self-serving meta: Whoever keeps block-downvoting me, is there some way to negotiate peace?

16 ialdabaoth 16 November 2013 04:35AM

I'm just tired of the signal pollution, and would like to be able to use karma to honestly appraise the worth of my articles and posts, without seeing 80% of my downvotes come in chunks that correspond precisely to how many posts I've made since the last massive downvote spree.

 

EDIT to add data points:

Spurious downvoting stopped soon after I named a particular individual (not ALL downvoting stopped, but the downvotes I got all seemed on-the-level).

One block of potentially spurious downvoting occurred approximately one week ago, but then karma patterns returned to expected levels. I consider this block dubious because it reasonably matches what I'd expect to see if someone had noticed several of my posts together and disagreed with all of them. It did not match the usual pattern of starting with my earliest or latest post and downvoting everything (it downvoted all my posts in a few threads, but none in other threads), so I'm adding it only for completeness.

Spurious, indiscriminate downvoting started up again approximately half an hour ago on Sunday (12/1/2013), around noon MDT.

Edit: And now on Tuesday, 12/3/2013, at 10 AM, I'm watching my karma go down again... about 30 points so far.

Edit: And now on Saturday, 12/14/2013, at 2 PM, I'm watching my karma go down again... about 15 points so far, at a rate of about 1-2 points per second.

Help the Brain Preservation Foundation

24 aurellem 13 November 2013 09:18PM

(First time poster, long time reader)

I'm currently volunteering for the Brain Preservation Foundation (http://www.brainpreservation.org/), and I'd like to ask for your help.

The purpose of the BPF is to incentivize and evaluate the development of technology that can preserve a human brain in such intricate detail that all of its cells and connections remain intact. It's the only prize of its kind for a relatively neglected yet essential type of research.

We run a cash prize ($100,000 USD) called the "Brain Preservation Technology Prize" for the first team that can preserve a large mammal's brain to our high standards. The first $25,000 of that prize goes to the first team that can preserve the ultrastructure of a mouse brain.

Steve Aoki (http://steveaoki.com/), a musician that you might have heard of, is currently planning to give around $50,000 to one of four brain-related charities. One of these charities is the Brain Preservation Foundation! Whichever charity gets the most votes will win all the money.

This money is critically important to us for getting the necessary supplies and lab time to administer the Brain Preservation Technology Prize. Evaluating the brains that people send us involves electron microscopy, which is quite expensive (around $8,000 per brain!). We are currently receiving submissions, and this extra money will give us the funds we need to run the prize.

To vote, just visit http://on.fb.me/15XFdTG, and click the "like" button by the "Brain Preservation Foundation" comment. You can see a graph of the votes at http://aurellem.org/bpf/votes.png (updates every 15 minutes). Thanks for taking the time to read and vote!

More about the Brain Preservation Foundation:
http://www.brainpreservation.org/

More about the charity:
https://www.facebook.com/photo.php?fbid=10151608608587461

Votes graph:
http://aurellem.org/bpf/votes.png


I'd also love to hear your own opinions on the BPF and your assessment of its effectiveness, as well as your thoughts on chemopreservation vs. cryopreservation.

A diagram for a simple two-player game

22 ciphergoth 10 November 2013 08:59AM

(Copied from my blog)

I always have a hard time making sense of preference matrices in two-player games. Here are some diagrams I drew to make it easier. This is a two-player game:

[Diagram 1]

North wants to end up on the northernmost point, and East on the easternmost. North goes first, and chooses which of the two bars will be used; East then goes second and chooses which point on the bar will be used.

North knows that East will always choose the easternmost point on the bar picked, so one of these two:

[Diagram 2]

North checks which of the two points is further north, and so chooses the leftmost bar, and they both end up on this point:

[Diagram 3]

Which is sad, because there’s a point north-east of this that they’d both prefer. Unfortunately, North knows that if they choose the rightmost bar, they’ll end up on the easternmost, southernmost point.

Unless East can somehow precommit to not choosing this point:

[Diagram 4]

Now East is going to end up choosing one of these two points:

[Diagram 5]

So North can choose the rightmost bar, and the two players end up here, a result both prefer:

[Diagram 6]

I won’t be surprised if this has been invented before, and it may even be superseded – please do comment if so :)

Here’s a game where East has to both promise and threaten to get a better outcome:

[Diagram: two bars with points 0,1-1,3 and 2,2-3,0]

[Diagram: the same game, with East's commitment marked]
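
For readers who want to check the reasoning, here is a minimal backward-induction sketch in Python. It assumes the filenames above encode each bar as a pair of (east, north) points - my reading of the lost diagrams, not necessarily the author's - and shows how East's combined threat and promise changes the outcome of the second game:

    # Backward induction for the sequential game described above.
    # Each point is an (east, north) pair; North picks a bar first, then East
    # picks a point on that bar, each maximizing their own coordinate.

    def outcome(bars, east_commitment=None):
        """Return the point reached by backward induction.

        east_commitment, if given, is the set of points East has precommitted
        to choosing from (it must leave at least one point on each bar).
        """
        def east_pick(bar):
            options = [p for p in bar
                       if east_commitment is None or p in east_commitment]
            return max(options, key=lambda p: p[0])  # East maximizes east-ness

        return max((east_pick(bar) for bar in bars),
                   key=lambda p: p[1])  # North maximizes north-ness

    # The second game, reading 0,1-1,3_2,2-3,0 as two bars of two points each:
    bars = [[(0, 1), (1, 3)], [(2, 2), (3, 0)]]

    print(outcome(bars))  # (1, 3): East would grab (3, 0), so North plays safe
    # East threatens (0, 1) on the first bar and promises (2, 2) on the second:
    print(outcome(bars, east_commitment={(0, 1), (2, 2)}))  # (2, 2)

On this reading, the commitment raises East's payoff from 1 to 2: the threat makes the first bar unattractive to North, and the promise makes the second bar safe for North to choose.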

Megameetup on December 13-15th, NYC

12 Raemon 05 October 2013 09:56PM

This winter, we'll be hosting a megameetup on December 13th-15th. This is the weekend of the Winter Solstice, a big event we're putting together for the rationality, humanist, and transhumanist communities of the area. (The Solstice celebration is on Saturday evening - if you'd like to attend, you should check out the Kickstarter and back it. Seating is limited, and advance tickets are $25.)

Eight members of the New York Rationality community recently moved into a gorgeous house in Brooklyn. It's got 5500 square feet. The first floor, approximately 1800 square feet, has four areas with sliding doors that can either be treated as a single, huge meetup space or broken into smaller areas.

Also it has secret doors.

We have named it "Highgarden."


We're really looking forward to turning this into a genuine rationality community center. We have self-improvement meetups every other Sunday (the next one is on October 13th), and have other one-off events in the works.

Friday night and Saturday afternoon will primarily be casual hangouts, before most of us head over to the Solstice event. On Sunday there will be a presentation on the current state of Effective Altruism. We're aiming to have other presentations as well, but details are not finalized yet.

We have a large (but not unlimited) amount of crash space, so if you'd like to spend Friday and/or Saturday night at Highgarden, please let us know in advance.

Looking forward to seeing many of you there!

When + Where

Highgarden House - 851 Park Place, Brooklyn, NY 11261

Friday, December 13th, 7:00 PM - Sunday, December 15th, 7:00 PM

[Link] Low-Hanging Poop

36 GLaDOS 16 October 2013 08:51PM

Related: Son of Low Hanging Fruit

Another post on finding low hanging fruit from Gregory Cochran's and Henry Harpending's blog West Hunter.

Clostridium difficile causes a potentially serious kind of diarrhea triggered by antibiotic treatments. When the normal bacterial flora of the colon are hammered by a broad-spectrum antibiotic, C. difficile often takes over and causes real trouble.  Mild cases are treated by discontinuing antibiotic therapy, which often works: if not, the doctors try oral metronidazole (Flagyl), then vancomycin, then intravenous metronidazole.  This doesn’t always work, and C. difficile infections kill about 14,000 people a year in the US.

One recent trial shows that fecal bacteriotherapy, more commonly called a stool transplant, works like gangbusters, curing ~94% of patients. The trial was halted because the treatment worked so well that refusing to poopify the control group was clearly unethical.  I read about this, but thought I’d heard about such stool transplants some time ago.  I had.  It was mentioned in The Making of a Surgeon, by William Nolen, published in 1970. Some crazy intern – let us call him Hogan – tried a stool transplant on a woman with a C. difficile infection. He mixed some normal stool with chocolate milk and fed it to the lady.  It made his boss so mad that he was dropped from the program at the end of the year.  It also worked. It was inspired by an article in Annals of Surgery, so this certainly wasn’t the first try.  According to Wiki, there are more than 150 published reports on stool transplant, going back to 1958.

So what took so damn long?  Here we have a simple, cheap, highly effective treatment for C. difficile infection that has only been officially validated this year. Judging from the H. pylori story, it may still take years before it is in general use.

Obviously, sheer disgust made it hard for doctors to embrace this treatment.  There’s a lesson here: in the search for low-hanging fruit, reconsider approaches that are embarrassing, or offensive, or downright disgusting.

Investigate methods that were abandoned because people hated them, rather than because of solid evidence showing that they didn’t work.

Along those lines, no modern educational reformer utters a single syllable about corporal punishment: doesn’t that make you suspect it’s effective?  I mean, why aren’t we caning kids anymore?  The Egyptians said that a boy’s ears are in his back: if you do not beat him he will not listen. Maybe they knew a thing or three.

Sometimes, we hate the idea’s authors: the more we hate them, the more likely we are to miss out on their correct insights. Even famous assholes had to be competent in some areas, or they wouldn’t have been able to cause serious trouble.

Open Thread, July 1-15, 2013

4 Vaniver 01 July 2013 05:10PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Public Service Announcement Collection

37 Eliezer_Yudkowsky 27 June 2013 05:20PM

P/S/A:  There are single sentences which can create life-changing amounts of difference.

  • P/S/A:  If you're not sure whether or not you've ever had an orgasm, it means you haven't had one, a condition known as primary anorgasmia, which is 90% treatable by cognitive-behavioral therapy.
  • P/S/A:  The people telling you to expect above-trend inflation when the Federal Reserve started printing money a few years back disagreed with the market forecasts, disagreed with standard economics, turned out to be actually wrong in reality, and were wrong for reasonably fundamental reasons, so don't buy gold when they tell you to.
  • P/S/A:  There are many many more submissive/masochistic men in the world than there are dominant/sadistic women, so if you are a woman who feels a strong temptation to command men and inflict pain on them, and you want a large harem of men serving your every need, it will suffice to state this fact anywhere on the Internet and you will have fifty applications by the next morning.
  • P/S/A:  Most of the personal-finance-advice industry is parasitic and/or self-deluded, and it's generally agreed on by economic theory and experimental measurement that an index fund will deliver the best returns you can get without huge amounts of effort.
  • P/S/A:  If you are smart and underemployed, you can very quickly check to see if you are a natural computer programmer by pulling up a page of Python source code and seeing whether it looks like it makes natural sense, and if this is the case you can teach yourself to program very quickly and get a much higher-paying job even without formal credentials.
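
For concreteness, here is the kind of page you might glance at - a small, ordinary Python program (the filename is just a stand-in). If code like this reads almost like prose to you, that's the signal being described:

    # Count the words in a text file and print the ten most common.
    def word_counts(filename):
        counts = {}
        with open(filename) as f:
            for line in f:
                for word in line.lower().split():
                    counts[word] = counts.get(word, 0) + 1
        return counts

    top_ten = sorted(word_counts("sample.txt").items(),
                     key=lambda pair: pair[1], reverse=True)[:10]
    for word, count in top_ten:
        print(count, word)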

 

For FAI: Is "Molecular Nanotechnology" putting our best foot forward?

48 leplen 22 June 2013 04:44AM

Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It's not really clear to me why. In many of the examples of "How could AIs help us" or "How could AIs rise to power", phrases like "cracks protein folding" or "making a block of diamond is just as easy as making a block of coal" are thrown about in ways that make me very, very uncomfortable. Maybe it's all true, maybe I'm just late to the transhumanist party and the explanation of why this is all obvious was with my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.

I must post the disclaimer that I have done a little bit of materials science, so maybe I'm just annoyed that you're making me obsolete, but I don't see why this particular possible future gets so much attention. Let us assume that a smarter than human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it's still not clear to me that MNT is a likely element of the future. It isn't clear to me that MNT is physically practical. I don't doubt that it can be designed on paper. I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up. Indeed, that's my day job, but I have a hard time believing the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated energy landscape, and the Schrödinger equation scales very, very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me roughly equivalent to making arbitrary lego structures by shaking a large bin of lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it's not at all clear to me that it's even possible.
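
To put a rough number on that scaling complaint: the dimension of a joint quantum state space grows exponentially with the number of interacting degrees of freedom, so directly solving the Schrödinger equation becomes hopeless long before chemically interesting sizes. A toy illustration (my arithmetic, not the author's, assuming two-level systems and 16 bytes per complex amplitude):

    # State-space size for n interacting two-level systems: dim = 2**n.
    for n in (10, 30, 50, 100):
        amplitudes = 2 ** n
        print(f"n = {n:3d}: {amplitudes:.3e} amplitudes, "
              f"{amplitudes * 16 / 1e9:.3e} GB for the state vector")

This is one reason practical simulations lean on approximations (density functional theory, tight binding, force fields) rather than exact solutions.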

I assume the reason that MNT is added to a discussion on AI is that we're trying to make the future sound more plausible by adding burdensome details.  I understand that AI-plus-MNT is less probable than AI or MNT alone, but the conjunction is supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human or superhuman level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI, especially including it without addressing any of the fundamental difficulties of MNT, I would argue, harms the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.

I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter than human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter than human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don't think convincing people that smarter than human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter than human AIs are possible. I do think that waving your hands and saying super-intelligence at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer->nanobots before I had built up a store of goodwill from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.

Put in LW parlance, suggesting things not known to be possible by modern physics without detailed explanations puts you in the reference class "people on the internet who have their own ideas about physics". It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.

And maybe it's just me. Maybe this did not bother anyone else, and it's an incredible shortcut for getting people to realize just how different a future a greater than human intelligence makes possible and there is no better example. It does alarm me though, because I think that physicists and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations may be the kind of people FAI is trying to attract.

Elites and AI: Stated Opinions

10 lukeprog 15 June 2013 07:52PM

Previously, I asked "Will the world's elites navigate the creation of AI just fine?" My current answer is "probably not," but I think it's a question worth additional investigation.

As a preliminary step, and with the help of MIRI interns Jeremy Miller and Oriane Gaillard, I've collected a few stated opinions on the issue. This survey of stated opinions is not representative of any particular group, and is not meant to provide strong evidence about what is true on the matter. It's merely a collection of quotes we happened to find on the subject. Hopefully others can point us to other stated opinions — or state their own opinions.


Do Earths with slower economic growth have a better chance at FAI?

30 Eliezer_Yudkowsky 12 June 2013 07:54PM

I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.

And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress:  Economic growth = good.

But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI.  So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.  Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces.  This means that UFAI parallelizes better than FAI.  UFAI also probably benefits from brute-force computing power more than FAI.  Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.  I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.

Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing.  I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.

I have various cute ideas for things which could improve a country's economic growth.  The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people.  I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore.  However I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny.  And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first).  Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly.  (An extremely understandable position which would typically be taken by good and virtuous people).

Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah") but in this case I feel impelled to ask because my mainline visualization has the Great Stagnation being good news.  I certainly wish that economic growth would align with FAI because then my virtues would align and my optimal policies have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.

To head off some obvious types of bad reasoning in advance:  Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.

Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier.  But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI.  Similarly to the more mundane idea that increased economic growth will produce more geniuses some of whom can work on FAI; there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research.  If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.

So I pose the question:  "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"?  So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research".

This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.

EDIT:  To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky.  The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

EDIT 2:  Carl Shulman's opinion can be found on the Facebook discussion here.
