
[Link] Persistence of Long-Term Memory in Vitrified and Revived C. elegans worms

8 Rangi 24 May 2015 03:43AM

http://online.liebertpub.com/doi/pdf/10.1089/rej.2014.1636

This is a paper published in 2014 by Natasha Vita-More and Daniel Barranco, both associated with the Alcor Research Center (ARC).

The abstract:

Can memory be retained after cryopreservation? Our research has attempted to answer this long-standing question by using the nematode worm Caenorhabditis elegans (C. elegans), a well-known model organism for biological research that has generated revolutionary findings but has not been tested for memory retention after cryopreservation. Our study’s goal was to test C. elegans’ memory recall after vitrification and reviving. Using a method of sensory imprinting in the young C. elegans we establish that learning acquired through olfactory cues shapes the animal’s behavior and the learning is retained at the adult stage after vitrification. Our research method included olfactory imprinting with the chemical benzaldehyde (C₆H₅CHO) for phase-sense olfactory imprinting at the L1 stage, the fast cooling SafeSpeed method for vitrification at the L2 stage, reviving, and a chemotaxis assay for testing memory retention of learning at the adult stage. Our results in testing memory retention after cryopreservation show that the mechanisms that regulate the odorant imprinting (a form of long-term memory) in C. elegans have not been modified by the process of vitrification or by slow freezing.

[Link] Mainstream media writing about rationality-informed approaches

3 Gleb_Tsipursky 24 May 2015 01:18AM

Wanted to share two articles published in mainstream media, namely Ohio newspapers, about how rationality-informed strategies help people improve their lives.

This one is about improving one's thinking, feeling, and behavior patterns overall, and especially one's highest-order goals, presented as "meaning and purpose."

This one is about using rationality to deal with mental illness, and specifically highlights the strategy of "in what world do I want to live?"

I know about these two articles because I was personally involved in their publication as part of my broader project of spreading rationality widely. What other articles are there that others know about?

[Link] Throwback Thursday: Are asteroids dangerous?

1 Gunnar_Zarncke 23 May 2015 08:00AM

Throwback Thursday: Are asteroids dangerous? by StartsWithABang:

When it comes to risk assessment, there's one type that humans are notoriously bad at: the very low-frequency but high-consequence risks and rewards. It's why so many of us are so eager to play the lottery, and simultaneously why we're catastrophically afraid of ebola and plane crashes, when we're far more likely to die from something mundane, like getting hit by a truck. One of the examples where science and this type of fear-based fallacy intersect is the science of asteroid strikes. With all we know about asteroids today, here's the actual risk to humanity, and it's much lower than anyone cares to admit. -- summary from slashdot.

Weekly LW Meetups

2 FrankAdamek 22 May 2015 03:18PM

This summary was posted to LW Main on May 15th. The following week's summary is here.

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Boston, Brussels, Buffalo, Cambridge UK, Canberra, Columbus, London, Madison WI, Melbourne, Moscow, Mountain View, New York, Philadelphia, Research Triangle NC, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.



Leaving LessWrong for a more rational life

29 Mark_Friedenbach 21 May 2015 07:24PM

You are unlikely to see me posting here again, after today. There is a saying here that politics is the mind-killer. My heretical realization lately is that philosophy, as generally practiced, can also be mind-killing.

As many of you know, I am, or was, running a twice-monthly Rationality: AI to Zombies reading group. One of the things I wanted to include in each reading group post was a collection of contrasting views. To research such views I've found myself listening during my commute to talks given by other thinkers in the field, e.g. Nick Bostrom, Anders Sandberg, and Ray Kurzweil, and by people I feel are doing “ideologically aligned” work, like Aubrey de Grey, Christine Peterson, and Robert Freitas. Some of these were talks I had seen before, or views I had generally been exposed to in the past. But looking through the lens of learning and applying rationality, I came to a surprising (to me) conclusion: it was the philosophical thinkers who demonstrated the largest and most costly mistakes. On the other hand, de Grey and the others who are primarily working on the scientific and/or engineering challenges of singularity and transhumanist technologies were far less likely to commit epistemic mistakes of significant consequence.

Philosophy as the anti-science...

What sort of mistakes? Most often, reasoning by analogy. To cite a specific example, one of the core underlying assumptions of the singularity interpretation of super-intelligence is that just as a chimpanzee would be unable to predict what a human intelligence would do or how we would make decisions (aside: how would we know? Were any chimps consulted?), we would be equally inept in the face of a super-intelligence. This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory, may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe; that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just as string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This post is not about the singularity interpretation of super-intelligence; that was merely my choice of an illustrative example of a category of mistakes too often made by those with a background in philosophy rather than the empirical sciences: reasoning by analogy instead of building and analyzing predictive models. The fundamental mistake is that reasoning by analogy is not in itself a sufficient explanation for a natural phenomenon, because it says nothing about the context sensitivity or insensitivity of the original example and under what conditions it may or may not hold true in a different situation.

A successful physicist or biologist or computer engineer would have approached the problem differently. A core part of being successful in these areas is knowing when you have insufficient information to draw conclusions. If you don't know what you don't know, then you can't know when you might be wrong. To be an effective rationalist, it is often not important to answer “what is the calculated probability of that outcome?” The better first question is “what is the uncertainty in my calculated probability of that outcome?” If the uncertainty is too high, then the data supports no conclusions. And the way you reduce uncertainty is to build models of the domain in question and empirically test them.
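To make that first question concrete, here is a minimal sketch (mine, not the author's; the toy model and all numbers are invented for illustration) of how uncertainty in a model's inputs propagates into uncertainty in the probability it outputs:

```python
# Minimal sketch (not from the original post): propagating uncertainty in
# poorly known inputs into uncertainty in a calculated probability.
# The toy model and all numbers are invented for illustration.
import random

def risk_model(base_rate, mitigation_factor):
    # Toy model: probability of a bad outcome after some mitigation.
    return base_rate * (1.0 - mitigation_factor)

random.seed(0)
estimates = []
for _ in range(100_000):
    base_rate = random.uniform(0.001, 0.10)       # poorly known input
    mitigation_factor = random.uniform(0.0, 0.9)  # even less well known
    estimates.append(risk_model(base_rate, mitigation_factor))

estimates.sort()
low, mid, high = (estimates[int(len(estimates) * q)] for q in (0.05, 0.5, 0.95))
print(f"5th pct: {low:.4f}  median: {mid:.4f}  95th pct: {high:.4f}")
# When the 5th and 95th percentiles differ by an order of magnitude or more,
# the point estimate alone supports no strong conclusion.
```

The particular distributions are made up; the point is only that the spread of the output is itself worth computing before acting on any single calculated probability.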

The lens that sees its own flaws...

Coming back to LessWrong and the sequences. In the preface to Rationality, Eliezer Yudkowsky says his biggest regret is that he did not make the material in the sequences more practical. The problem is in fact deeper than that. The art of rationality is the art of truth seeking, and empiricism is part and parcel of truth seeking. There's lip service paid to empiricism throughout, but in all the “applied” sequences relating to quantum physics and artificial intelligence it appears to be forgotten. Instead we get definitive conclusions drawn from thought experiments alone. It is perhaps not surprising that these sequences seem the most controversial.

I have for a long time been concerned that those sequences in particular promote some ungrounded conclusions. I had thought that, while annoying, this was perhaps a one-off mistake that was fixable. Recently I have realized that the underlying cause runs much deeper: what is taught by the sequences is a form of flawed truth-seeking (thought experiments favored over real-world experiments) which inevitably results in errors, and the errors I take issue with in the sequences are merely examples of this phenomenon.

And these errors have consequences. Every single day, 100,000 people die of preventable causes, and every day we continue to risk extinction of the human race at unacceptably high odds. There is work that could be done now to alleviate both of these issues. But within the LessWrong community there is actually outright hostility to work that has a reasonable chance of alleviating suffering (e.g. artificial general intelligence applied to molecular manufacturing and life-science research) due to concerns arrived at by flawed reasoning.

I now regard the sequences as a memetic hazard, one which may at the end of the day be doing more harm than good. One should work to develop one's own rationality, but I now fear that the approach taken by the LessWrong community, as a continuation of the sequences, may result in more harm than good. The anti-humanitarian behaviors I observe in this community are not the result of initial conditions but of the process itself.

What next?

How do we fix this? I don't know. On a personal level, I am no longer sure engagement with such a community is a net benefit. I expect this to be my last post to LessWrong. It may happen that I check back in from time to time, but for the most part I intend to try not to. I wish you all the best.

A note about effective altruism…

One shining light of goodness in this community is the focus on effective altruism: doing the most good for the most people as measured by some objective means. This is a noble goal, and the correct goal for a rationalist who wants to contribute to charity. Unfortunately it too has been poisoned by incorrect modes of thought.

Existential risk reduction, the argument goes, trumps all forms of charitable work because reducing the chance of extinction by even a small amount has far more expected utility than accomplishing all other charitable works combined. The problem lies in estimating the likelihood of extinction, and in the actions selected to reduce existential risk. There is so much uncertainty regarding what we know, and so much uncertainty regarding what we don't know, that it is impossible to determine with any accuracy the expected risk of, say, unfriendly artificial intelligence creating perpetual suboptimal outcomes, or what effect charitable work in the area (e.g. MIRI) has in reducing that risk, if any.
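To illustrate the arithmetic behind this objection (my own sketch, not the author's; every number below is an arbitrary placeholder, not a claim about any real organization), the expected value of an x-risk donation is a product of quantities that are each only order-of-magnitude guesses, so the product swings over many orders of magnitude:

```python
# Sketch (not from the original post): expected-value estimates for
# existential-risk work are products of order-of-magnitude guesses,
# so the result swings over many orders of magnitude. All numbers are
# arbitrary placeholders, not claims about any real organization.

VALUE_IF_EXTINCTION_AVERTED = 1e15   # stand-in units of "value"

def expected_value(p_extinction, p_reduction_from_donation):
    """Expected value of a donation that reduces extinction probability."""
    return VALUE_IF_EXTINCTION_AVERTED * p_extinction * p_reduction_from_donation

for p_extinction in (1e-1, 1e-3, 1e-6):
    for p_reduction in (1e-4, 1e-7, 1e-10):
        ev = expected_value(p_extinction, p_reduction)
        print(f"p_extinction={p_extinction:.0e}  p_reduction={p_reduction:.0e}  EV={ev:.1e}")
# The rows span roughly eleven orders of magnitude; which row you believe
# is exactly the uncertainty the post says has not been resolved.
```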

This is best explored by an example of existential risk done right. Asteroid and cometary impacts are perhaps the category of external (non-human-caused) existential risk which we know the most about, and have done the most to mitigate. When it was recognized that impactors were a risk to be taken seriously, we identified what we did not know about the phenomenon: what were the orbits and masses of Earth-crossing asteroids? We built telescopes to find out. What is the material composition of these objects? We built space probes and collected meteorite samples to find out. How damaging would an impact be for various material properties, speeds, and incidence angles? We built high-speed projectile test ranges to find out. What could be done to change the course of an asteroid found to be on a collision course? We have executed at least one impactor probe and monitored the effect it had on the comet's orbit, and we have on the drawing board probes that would use gravitational mechanisms to move their targets. In short, we identified what it is that we don't know and sought to resolve those uncertainties.

How then might one approach an existential risk like unfriendly artificial intelligence? By identifying what it is we don't know about the phenomenon, and seeking to experimentally resolve that uncertainty. What relevant facts do we not know about (unfriendly) artificial intelligence? Well, much of our uncertainty about the actions of an unfriendly AI could be resolved if we knew more about how such agents construct their thought models, and, relatedly, what languages are used to construct their goal systems. We could also stand to benefit from more practical information (experimental data) about the ways in which AI boxing works and the ways in which it does not, and how much that depends on the structure of the AI itself. Thankfully there is an institution that is doing that kind of work: the Future of Life Institute (not MIRI).

Where should I send my charitable donations?

Aubrey de Grey's SENS Research Foundation.

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

If you feel you want to spread your money around, here are some non-profits which I have vetted for doing reliable, evidence-based work on singularity technologies and existential risk:

  • Robert Freitas and Ralph Merkle's Institute for Molecular Manufacturing does research on molecular nanotechnology. They are the only group working on the long-term Drexlerian vision of molecular machines, and they publish their research online.
  • Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.
  • B612 Foundation is a non-profit seeking to launch a spacecraft with the capability to detect, to the extent possible, ALL Earth-crossing asteroids.

I wish I could recommend a skepticism, empiricism, and rationality promoting institute. Unfortunately I am not aware of an organization which does not suffer from the flaws I identified above.

Addendum regarding unfinished business

I will no longer be running the Rationality: From AI to Zombies reading group, as I am no longer able or willing, in good conscience, to host it or to participate in this site, even from my typically contrarian point of view. Nevertheless, I am enough of a libertarian that I feel it is not my role to put up roadblocks for others who wish to delve into the material as it is presented. So if someone wants to take over the role of organizing these reading groups, I would be happy to hand over the reins to that person. If you think that person should be you, please leave a reply in another thread, not here.

EDIT: Obviously I'll stick around long enough to answer questions below :)

Communicating via writing vs. in person

4 adamzerner 22 May 2015 04:58AM

There's a lot that I really like about communicating via writing. Communicating in person is sometimes frustrating for me, and communicating via writing addresses a lot of those frustrations:

1) I often want to make a point that depends on the other person knowing X. In person, if I always paused and did the following, it'd add a lot of friction to conversations: "Wait, do you know X? If yes, good, I'll continue. If no, let me think about how to explain it briefly. Or do you want me to explain it in more depth? Or do you want to try to proceed without knowing X and see how it goes?". But if I don't do so, then it risks miscommunication (because the other person may not have the dependency X).

In writing, I could just link to an article. If the other person doesn't have the dependency, they have options. They could try to proceed without knowing X and see how it goes. If it doesn't work out, they could come back and read the link. Or they could read the link right away. And in reading the link, they have their choice of how deeply they want to read. Ie. they could just skim if they want to.

Alternatively, if you don't have something to link to, you could add a footnote. I think that a UI like Medium's side comments is far preferable to putting footnotes at the bottom of the page. I hope to see this adopted across the internet some time in the next 5 years or so.

2) I think that in general, being precise about what you're saying is actually quite difficult/time consuming*. For example, I don't really mean what I just said. I'm actually not sure how often it's difficult/time consuming to be precise about what you're saying. And I'm not sure how often it's useful to be precise about what you're saying (or really, more precise...whatever that means...). I guess what I really mean is that it happens often enough that it's a problem. Or maybe just that for me, it happens enough that I find it to be a problem.

Anyway, I find that putting quotes around what I say is a nice way to mitigate this problem.

Ex. It's "in my nature" to be strategic.

The quotes show that the wording inside them isn't precisely what I mean, but that it's close enough to what I mean that it should communicate the gist of it. I sense that this communication often happens through empathetic inference.

*I also find that I feel internal and external pressure to be consistent with what I say, even if I know I'm oversimplifying. This is a problem and has negatively affected me. I recently realized what a big problem it is, and will try very hard to address it (or really, I plan on trying very hard but I'm not sure blah blah blah blah blah...).

Note 1: I find internal conversation/thinking as well as interpersonal conversation to be "chaotic". (What follows is rant-y and not precisely what I believe. But being precise would take too long, and I sense that the rant-y tone helps to communicate without detracting from the conversation by being uncivil.) It seems that a lot of other people (much less so on LW) have more "organized" thinking patterns. I can't help but think that that's BS. Well, maybe they do, but I sense that they shouldn't. Reality is complicated. People seem to oversimplify things a lot, and to think in terms of black and white. When you do that, I could see how one's thoughts could be "organized". But when you really try to deal with the complexities of reality... I don't understand how you could simultaneously just go through life with organized thoughts.

Note 2: I sense that this post somewhat successfully communicates my internal thought process and how chaotic it can be. I'm curious how this compares to other people. I should note that I was diagnosed with a mild-moderate case of ADHD when I was younger. But that was largely based on iffy reporting from my teachers. They didn't realize how much conscious thought motivated my actions. Ie. I often chose to do things that seemed impulsive because I judged them to be worth it. But given that my mind is always racing so fast, and that I have a good amount of trouble deciding to pay attention to anything other than the most interesting thing to me, I'd guess that I do have ADHD to some extent. I'm hesitant to make that claim without ever having been inside someone else's mind though (how incredibly incredibly cool would that be!!!) - appearances could be deceiving.

3) It's easier to model and traverse the structure of a conversation/argument when it's in writing. You could break things into nested sections (which isn't always a perfect way to model the structure, but is often satisfactory). In person, I find that it's often quite difficult for two people (let alone multiple people) to stay in sync with the structure of the conversation. The outcome of this is that people rarely veer away from extremely superficial conversations. Granted, I haven't had the chance to talk to many smart people in real life, and so I don't have much data on how deep a conversation between two smart people could get. My guess is that it could get a lot deeper than what I'm used to, but that it'd be pretty hard to make real progress on a difficult topic without outlining and diagramming things out. (Note: I don't mean "deep as in emotional", I mean "deep as in nodes in a graph")
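As a toy illustration of that "nodes in a graph" framing (my own sketch, not the author's), a written exchange makes the tree of points explicit, and "depth" is just the longest path from the root:

```python
# Toy sketch (my illustration, not the author's): a conversation modeled as
# a tree of nested points, where "depth" counts nodes along a path.
from dataclasses import dataclass, field

@dataclass
class Point:
    claim: str
    replies: list = field(default_factory=list)

def depth(node: Point) -> int:
    """Longest chain of points from this node down to a leaf."""
    return 1 + max((depth(r) for r in node.replies), default=0)

thread = Point("Writing beats speech for complex topics", [
    Point("You can link to prerequisites", [
        Point("Readers choose how deeply to follow each link"),
    ]),
    Point("The structure is explicit on the page", [
        Point("Nested sections keep branches from getting lost", [
            Point("In person, both speakers must hold this tree in memory"),
        ]),
    ]),
])

print(depth(thread))  # 4 levels: easy to track on a page, hard to track aloud
```

Nothing about the structure requires code, of course; the point is just that the tree is explicit on the page, while in speech it lives only in working memory.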


There are also a lot of other things to say about communicating in writing vs. in person, including:

  • The value of the subtle things like nonverbal communication and pauses.
  • The value of a conversation being continuous. When it isn't, you have to download the task over and over again.
  • How much time you have to think things through before responding.
  • I sense that people are way more careful in writing, especially when there's a record of it (rather than, say, PM).

This is a discussion post, so feel free to comment on these things too (or anything else in the ballpark).

Reminder: Oslo Lesswrong meetup...

2 FourFire 22 May 2015 06:54AM

... is happening at 17:00 local time today at the UiO Science library.

 

There will be cookies and popcorn, and those other reasons for attending a meetup.

If enough of the people who turn up are interested, this may become an annual, monthly, or even weekly event.

Current topics we will be discussing (subject to change of course):

  • Introduction
  • Raising the sanity waterline
  • Effective Altruism
  • Transhumanism / Futurism

(and sub-topics thereof)

 

There will be a sign at the entrance with explicit directions to where in the building the meetup is located.

Further planned meetups will be fleshed out then.

I look forward to seeing you there ;)

 

"Risk" means surprise

2 PhilGoetz 22 May 2015 04:47AM

I lost about $20,000 in 2013 because I didn't notice that a company managing some of my retirement funds had helpfully reallocated them from 100% stocks into bonds and real estate, to "avoid risk". My parents are retired, and everyone advising them tells them to put most of their money in "safe" investments like bonds.


Brainstorming new senses

21 lululu 20 May 2015 07:53PM

What new senses would you like to have available to you?

Often when new technology first becomes widely available, the initial limits are in the collective imagination, not in the technology itself (case in point: the internet). New sensory channels have huge potential because the brain can process senses much faster and more intuitively than most conscious thought processes.

There are a lot of recent "proof of concept" inventions that show that it is possible to create new sensory channels for humans, with and without surgery. The most well-known and simple example is an implanted magnet, which would alert you to magnetic fields (the trade-off being that you could never have an MRI). Cochlear implants are the most widely used human-created sensory channels (they send electrical signals directly to the nervous system, bypassing the ear entirely), but CIs are designed to emulate a sensory channel most people already have brain space allocated to. VEST is another example. Similar to CIs, VEST (versatile extra-sensory transducer) has 24 information channels and uses audio compression to encode sound. Unlike CIs, it is not implanted in the skull; instead, information is relayed through vibrating motors on the torso. After a few hours of training, deaf volunteers are capable of word recognition using the vibrations alone, and they do so without conscious processing. Much as with hearing, the users are unable to describe exactly which components make a spoken word intelligible; they just understand the sensory information intuitively. Another recent invention being tested (with success) is BrainPort glasses, which send electrical signals through the tongue (one of the most sensitive organs on the body). Blind people can begin processing visual information with this device within 15 minutes, and it is unique in that it is not implanted but worn. The sensory information feels like pop rocks at first, before the brain is able to resolve it into sight. Neil Harbisson (who is colorblind) has custom glasses which use sound tones to relay color information. Belts that vibrate when facing north give people a sense of north. Bottlenose can be built at home and gives a very primitive sense of echolocation. As expected, these all work better if people start young, as children.
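For a sense of how a VEST-style encoding can work, here is a rough sketch (my simplification, not the device's actual algorithm): split each audio frame's spectrum into as many bands as there are motors and drive each motor with its band's energy.

```python
# Rough sketch (my simplification, not VEST's actual algorithm): compress an
# audio frame into N vibration-motor intensities by binning its spectrum.
import numpy as np

N_MOTORS = 24
SAMPLE_RATE = 16_000

def frame_to_motor_intensities(frame):
    """Map one audio frame to N_MOTORS intensities in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame))
    bands = np.array_split(spectrum, N_MOTORS)        # one band per motor
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    return energy / peak if peak > 0 else energy

# Example: a 100 ms frame containing a 440 Hz tone drives mostly one motor.
t = np.arange(int(0.1 * SAMPLE_RATE)) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 440 * t)
print(np.round(frame_to_motor_intensities(frame), 2))
```

The real device's compression is surely more sophisticated; the sketch only shows the general shape of the mapping from a rich signal down to a small fixed set of channels, which is what the brain then learns to interpret.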

What are the craziest and coolest new senses you would like to see made available using this new technology? I think VEST, at least, is available through Kickstarter, and one of the inventors suggested that it could be programmed to transmit any kind of data. My initial ideas when I heard about this possibility were just senses that some unusual people already have, or expansions on current senses. I think the real game changers are going to be totally new senses unrelated to our current sensory processing. Translating data into sensory information gives us access to intuition and processing speed otherwise unavailable.

My initial weak ideas:

  • mass spectrometer (uses reflected lasers to determine the exact atomic makeup of anything and everything)
  • proximity meter (but I think you would begin to feel like you had a physical aura or field of influence)
  • WIFI or cell signal
  • perfect pitch and perfect north, both super easy and only needing one channel of information (a smartwatch app?)
  • infrared or echolocation
  • GPS (this would involve some serious problem solving to figure out what data we should encode given limited channels; I think it could be done with 4 or 8 channels, each associated with a cardinal direction; see the sketch after this list)
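Here is one guess (mine, not a description of any existing device) at how the heading part of that GPS idea could be packed into four cardinal-direction channels:

```python
# Sketch of one way the compass/GPS idea above could be encoded (my own
# guess, not an existing device): map a heading to four channel intensities,
# one per cardinal direction, using cosine similarity.
import math

CARDINALS = {"N": 0.0, "E": 90.0, "S": 180.0, "W": 270.0}

def heading_to_channels(heading_deg):
    """Return an intensity in [0, 1] for each cardinal-direction channel."""
    channels = {}
    for name, ref_deg in CARDINALS.items():
        similarity = math.cos(math.radians(heading_deg - ref_deg))
        channels[name] = max(0.0, similarity)  # only nearby directions fire
    return channels

print(heading_to_channels(30.0))
# roughly {'N': 0.87, 'E': 0.5, 'S': 0.0, 'W': 0.0}: "mostly north, a bit east"
```

Letting intensity fade smoothly across adjacent channels is what would allow the brain to interpolate headings finer than the four (or eight) channels themselves.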

Someone working with VEST suggested:

  • compress global Twitter sentiment into 24 channels. Will you begin to have an intuitive sense of global events?
  • encode stock market data. Will you become an intuitive super-investor?
  • encode local weather data (a much more advanced version of "I can feel it's going to rain in my bad knee")

Some resources for more information:

 

 

More?
