Mind Hacks

9 RyanCarey 03 February 2014 07:32PM

I've been in the Bay Area for a week, and already I've heard so many tips and tricks for becoming smarter that I can barely keep track. I think this is a very good thing. If effective altruists and rationalists can become smarter, then it should improve the probability of favourable far-future outcomes. Note that:

1) cognitive enhancement fits with the ideas that you will achieve most of your impact in your middle age, and that increasing your career capital is integral to achieving impact.

2) it might be possible to reinvest returns from cognitive enhancement by doing further research into cognitive enhancement. This is not to say that an intelligence explosion could occur within a human substrate - our capacity to alter our neural structure and thinking speed is likely to run up against hard evolutionary constraints in a way that machine intelligence will not. Nonetheless, cognitively enhanced humans could have an advantage in the creation of a friendly AI team.

I suggest that we collate our mind hacks here. Then we can vote them all up and down. This will generate a list of 'top rated posts of all time', which could give hints for curriculum design to organisations like CFAR. Here are a few suggestions:

  • The reversal test for status quo bias
  • Thinking for five minutes of plans that can be executed in five minutes
  • Mind maps
  • Asana
  • Anki
  • Speed reading
  • Goal factoring
  • Caffeine
  • Modafinil
  • Nicotine
  • Creatine
  • Transcranial magnetic stimulation
  • and so on

Of course, if something is even less plausibly obtainable than transcranial magnetic stimulation, then it won't get any votes and doesn't meaningfully belong on this list (e.g. deep brain stimulation, brain-computer interfaces).

 

Inferential credit history

33 RyanCarey 24 July 2013 02:12PM

Here’s an interview with Seth Baum. Seth is an expert in risk analysis and a founder of the Global Catastrophic Risk Institute. As expected, Bill O’Reilly caricatured Seth as extreme, and intercut his interview with dramatic and extreme scenes from alien films. As a professional provocateur, it is O’Reilly’s job to throw down the gauntlet to his guests. Also as expected, Seth put on a calm and confident performance. Was the interview net-positive or negative? It’s hard to say, even in retrospect. Getting any publicity for catastrophic risk reduction is good, and difficult. Still, I’m not sure just how bad publicity has to be before it really is bad publicity…

Explaining catastrophic risks to the audience of Fox News is perhaps as difficult as explaining the risk of artificial intelligence to anyone. This is a task that frustrated Eliezer Yudkowsky so deeply that he was driven to write the epic LessWrong sequences. In his view, the inferential distance was too large to be bridged in a single conversation. There were too many things he knew that were prerequisites to understanding his current plan. So he wrote a sequence of online posts that set out everything he knew about cognitive science and probability theory, applied to help readers think more clearly and live out their scientific values. He had to write a thousand words per day for about two years before talking about AI explicitly. Perhaps surprisingly, and as an enormous credit to Eliezer’s brain, these sequences formed the founding manifesto of the quickly growing rationality movement, many of whose members now share his concerns about AI. Since he wrote them, his Machine Intelligence Research Institute (formerly the Singularity Institute) has grown rapidly and spun off the Center for Applied Rationality, a teaching facility and monument to the promotion of public rationality.

Why have Seth and Eliezer had such a hard time? Inferential distance explains a lot, but I have a second explanation: Seth and Eliezer had to build an inferential credit history. By the time you get to the end of the sequences, you have seen Eliezer bridge many an inferential distance, and you trust him to span another! If, each time I have loaned Eliezer some attention and suspended my disbelief, he has paid me back (in the currency of interesting and useful insight), then I will listen to him say things that I don’t yet believe for a long time.

When I watch Seth on The Factor, his interview is coloured by his Triple-A credit rating. We have talked before, and I have read his papers. For the rest of the audience, he had no time to build intellectual rapport. It’s not just that the inferential distance was large; it’s more that he didn’t have a credit rating of sufficient quality to take out a loan of that magnitude!

I contend that if you want to explain something abstract and unfamiliar, first you have to give a bunch of small and challenging chunks of insight, some of which must be practically applicable, and ideally you will lead your audience on a trek across a series of inferential distances, each slightly bigger than the last. It helps if these fill in some of the steps toward understanding the bigger picture, but that isn’t necessary.

This proposal could explain why historical explanations are often effective. Explanations that go like:

Initially I wanted to help people. And then I read The Life You Can Save. And then I realised I had been neglecting to think about large numbers of people. And then I read about scope insensitivity, which made me think this, and then I read Bostrom’s Fable of the Dragon Tyrant, which made me think that, and so on…

This kind of explanation is often disorganised, with frequent detours and false turns – steps in your ideological history that turned out to be wrong or unhelpful. The good thing about historical explanations is that they are stories, and that they have a main character – you – and this all makes them more compelling. I would argue that a further advantage is that they give you the opportunity to borrow lots of small amounts of your audience’s attention, and to accrue the good credit rating that you will need to make your boldest claims.

Lastly, let me present an alternative philosophy to overcoming inferential distances. It will seem to contradict what I have said so far, although I find it also useful.

If you say that X idea is crazy, then this can often become a self-fulfilling prophecy.

On this view, those who publicise AI risk should never complain about, and rarely talk about, the large inferential distance before them, least of all publicly. They should normalise their proposal by treating it as normal. I still think it’s important for them to acknowledge any intuitive reluctance on the part of their audience to entertain an idea. It’s like how, if you don’t appear embarrassed after committing a faux pas, you’re seen as untrustworthy. But after acknowledging this challenge, they had best get back to their subject material, as any normal person would!

So if you believe in inferential distance, inferential credit history (building trust), and acting normal, then explain hard things by beginning with lots of easy things, build larger and larger bridges, and acknowledge, but beware of overemphasising, any difficulties.

[also posted on my blog]

Meetup : Melbourne, social meetup: 31 May 2013 07:00PM

1 RyanCarey 27 May 2013 11:10AM


WHEN: 31 May 2013 09:09:16PM (+1000)

WHERE: Malvern East

You probably all think we're about due for a practical rationality meetup, but not quite - it's the fifth Friday of the month, so come to my place instead!

Brayden will share stories from his San Francisco trip. Also, Chris will come, and you can meet a couple of my non-LW rationalist friends; we'll likely get Thai. Good ways to get there by public transport are walking from Darling station (5 mins) or getting a lift from Caulfield by calling me on 0413275523. The address is available via the Google Group. As usual, those who commit to coming are rewarded with an upvote. You're welcome to come any time from 6:30.


Collating widely available time/money trades

17 RyanCarey 19 November 2012 10:57PM

In the xkcd comic Working, a man is seen filling up his gas tank. "Why are you going here?" asks the observer. "Gas is ten cents a gallon cheaper at the station five minutes that way." He responds, "Because a penny saved is a penny earned." Randall's pragmatically spirited caption says, "If you spend nine minutes of your time to save a dollar, you're working for less than the minimum wage."
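
To spell out the arithmetic behind that caption (a minimal sketch; the $7.25/hr figure is an assumption, the US federal minimum wage around the time the comic was published):

```python
# Implied hourly wage of the detour described in xkcd's "Working"
dollars_saved = 1.00              # e.g. ten cents/gallon cheaper on a ten-gallon fill-up
extra_hours = 9 / 60              # the nine-minute detour
implied_wage = dollars_saved / extra_hours

print(f"${implied_wage:.2f}/hr")  # $6.67/hr, below the assumed $7.25/hr minimum wage
```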

Our opportunities to convert time into money and vice versa, though not unlimited, are numerous.

We work (sell our free time) when we…

  • Seek overtime shifts
  • Subscribe to a mailing list for local discounts, e.g. Groupon
  • Bargain with one more car dealer or travel agent, in search of a better deal

We buy free time when we…

  • Employ cleaners, chefs, babysitters, secretaries and others.
  • Buy productivity software
  • Buy a medication that improves our sleep

How can we evaluate these trades? It seems like we ought to only purchase free time when it comes cheaper than a certain figure, $x/hr, and ought to only work if we can sell our free time for more than $x/hr. Indeed, comparing trades to this time/money exchange rate is the only unexploitable way to behave.

Most of the time, when we share our estimates of the value of these trades, our comments are too vague to be helpful. If my father, a doctor, tells me, a student, that "subscribing to discount mailing lists is a waste of time", what does he mean? He might mean that these mailing lists are poor value for me; he might mean the much stronger statement that they are poor value for everyone; or he might mean the much weaker statement that they are poor value just for him (his time is obviously worth the most). I have to try to get him to disentangle his estimate from his judgement. I have to ask him, "How low a value would a person have to place on their time for discount mailing lists to be worthwhile?"

The easiest way for individuals with different time/money exchange rates to share their estimates is to quantify them, e.g. "being on a discount mailing list saves only $x per hour spent on it". Depending on x, this might represent good value to neither, one, or both of my father and me.
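
As a minimal sketch of how a quantified estimate like that combines with a personal exchange rate (all dollar figures below are made up for illustration, not estimates from this post):

```python
def worth_working(trade_pays_per_hour: float, my_rate: float) -> bool:
    """Selling free time (overtime, discount lists, haggling) is worthwhile
    only if it pays more per hour than the rate at which I value my time."""
    return trade_pays_per_hour > my_rate

def worth_buying(cost_per_hour_freed: float, my_rate: float) -> bool:
    """Buying free time (a cleaner, a cab) is worthwhile only if it costs
    less per hour freed than the rate at which I value my time."""
    return cost_per_hour_freed < my_rate

mailing_list_saves = 15.0   # hypothetical: the list saves ~$15 per hour spent reading it
student_rate = 10.0         # hypothetical exchange rate for a student's time
doctor_rate = 80.0          # hypothetical exchange rate for a doctor's time

print(worth_working(mailing_list_saves, student_rate))  # True: good value for the student
print(worth_working(mailing_list_saves, doctor_rate))   # False: poor value for the doctor

cab_cost_per_hour_freed = 25.0  # hypothetical: a cab frees an hour of travel time for $25
print(worth_buying(cab_cost_per_hour_freed, student_rate))  # False: too expensive for the student
print(worth_buying(cab_cost_per_hour_freed, doctor_rate))   # True: cheap relative to the doctor's rate
```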

When we share these quantitative estimates, it would be silly to discuss deals that are only available privately, like job offers, which depend heavily on our particular skills and qualifications. Instead, we will gain the most by listing time/money trades that are likely to apply across domains, such as repairing a car on the one hand, or catching a cab on the other.

By doing so, we stand to learn that many of the trades we have been carrying out represent poor value, and we should learn of new trades that we had not previously considered. Of course, there are associated costs, like the time spent gathering this information and the risk of becoming unduly preoccupied with these decisions, but it still seems worth doing.

A last point of order is that it will be best to indicate how far we can expect each estimate to generalise. For example, the cost of something like melatonin will differ between states or between countries, and that is worth mentioning.

So in this thread, please share your estimates in $/hr for potential ways to work or buy free time.

Online Meetup: The High Impact Network

2 RyanCarey 19 November 2012 02:55AM

Update: The High Impact Network will meet at 7pm on Saturday the 24th of November, Eastern US time. Please email me to be invited to these hangouts:

https://plus.google.com/u/1/events/cj4831btptb0ngde1efb3tfsjtc

https://plus.google.com/u/1/events/cbnr8dqqbgra7a5391msu1e1dc4

 

Effective altruists, not all of whom live near one another, benefit from being connected and kept up to date with effective ideas and plans in the areas that interest them.

Mark Lee and I want to meet aspiring effective altruists and talk about how their talents and ideas might fit into the greater scheme of organised altruistic effort.

Due to the popularity of the previous meetup, the new discussion will be divided into two smaller groups that will host simultaneous discussions on:

1. Addressing Global Poverty - how can we best alleviate global poverty?

2. Beyond Global Poverty - what are other highly important causes and how can we address them?

Participants are welcome to suggest up to 3 ways that they are interested in addressing these problems, and then we'll discuss the strengths and weaknesses of these approaches. The agenda is broad so as not to preempt or undermine new suggestions likely to be effective. More targeted follow-up meetings can be later arranged if required.

Mark and I will chair one conversation each. Both will take place through Google Hangouts, at the democratically determined time of 7pm on Saturday the 24th of November, Eastern US time.

Please RSVP if you want to be added to the Google Hangout - you are welcome to specify which discussion you would prefer to be involved in, and any topics that you would like added to the agenda.
