Comment author: ancientcampus 26 August 2014 03:48:05AM 6 points [-]

Personally, I think a big way to help ensure success is to not worry too much about drawing self-identifying "effective altruists", but primarily focus on simply drawing active altruists. Obviously, if it only recruits from LW and its handful of bosom-buddies, there'll hardly be a large-enough population. The mere title of the forum should do 70% of the work to keep everything on topic, and friendly reminders from mods should handle the rest.

I think that's important enough I'm going to stress it again: if the EA community is 80% LW-ians, then I think it will lose most of its potential. LW already discusses effective altruism. Drawing people such as full-time workers in existing charities (public health, economic support, missionary work, etc.) seems a far higher priority to me. LW-like people already think along those lines, and generally have less energy/money dedicated to altruism than full-time charity workers. Attracting the latter group would have a greater impact on the individuals, and target more important individuals to boot.

In that vein, I've already scoped out the blog, because I have several friends in mind who would take to this well. Currently, the front article has math. Lots of math. Here on LW that's almost the norm, but if the whole EA site is like that, it'll scare off a lot of good people. That's not to say "don't use math because people don't like it" - math is very important - but rather "decide what audience to target on your front page."

I'm excited! Thanks for putting it together.

Comment author: ancientcampus 26 August 2014 02:48:41AM 1 point [-]

Thanks to Stuart_Armstrong for getting me thinking about narrow intelligence.

The immediate real-world uses of Friendly AI research

6 ancientcampus 26 August 2014 02:47AM

Much of the glamor and attention paid toward Friendly AI is focused on the misty-future event of a super-intelligent general AI, and how we can prevent it from repurposing our atoms to better run Quake 2. Until very recently, that was the full breadth of the field in my mind. I recently realized that dumber, narrow AI is a real thing today, helpfully choosing advertisements for me and running my 401K. As such, making automated programs safe to let loose on the real world is not just a problem to solve as a favor for the people of tomorrow, but something with immediate real-world advantages that has indeed already been going on for quite some time. Veterans in the field surely already understand this, so this post is directed at people like me, with only a passing understanding of the point of Friendly AI research. It outlines an argument that the field may be useful right now, even if you believe that an evil AI overlord is not on the list of things to worry about in the next 40 years.

 

Let's look at the stock market. High-Frequency Trading is the practice of using computer programs to make fast trades constantly throughout the day, and accounts for more than half of all equity trades in the US. So, the economy today is already in the hands of a bunch of very narrow AIs buying and selling to each other. And as you may or may not already know, this has already caused problems. In the “2010 Flash Crash”, the Dow Jones suddenly and mysteriously hit a massive plummet only to mostly recover within a few minutes. The reasons for this were of course complicated, but it boiled down to a couple red flags triggering in numerous programs, setting off a cascade of wacky trades.
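To make the cascade mechanism concrete, here is a toy sketch (hypothetical thresholds and price impact, not a model of the actual 2010 event) of how identical sell triggers in many narrow programs can amplify a small dip:

```python
# Toy illustration of a feedback cascade: each trading bot sells when the
# price drops past its threshold, and every sale pushes the price lower,
# tripping the next bot. Thresholds and price impact are made up.

def simulate_cascade(price, thresholds, impact_per_sale=2.0):
    """Return the price history after one initial shock."""
    history = [price]
    sold = [False] * len(thresholds)
    changed = True
    while changed:
        changed = False
        for i, t in enumerate(thresholds):
            if not sold[i] and history[-1] <= t:
                sold[i] = True  # bot i dumps its position
                history.append(history[-1] - impact_per_sale)
                changed = True
    return history

# A price of 97 trips the first bot; each sale drops the price into the
# next bot's trigger zone, so one small shock empties the whole book.
prices = simulate_cascade(price=97.0, thresholds=[97, 95, 93, 91])
print(prices)  # [97.0, 95.0, 93.0, 91.0, 89.0]
```

No single bot here is doing anything unreasonable in isolation; the wacky behavior lives in the interaction between them.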

 

The long-term damage was not catastrophic to society at large (though I'm sure a couple fortunes were made and lost that day), but it illustrates the need for safety measures as we hand over more and more responsibility and power to processes that require little human input. It might be a blue moon before anyone makes true general AI, but adaptive city traffic-light systems are entirely plausible in upcoming years.

 

To me, Friendly AI isn't solely about making a human-like intelligence that doesn't hurt us – we need techniques for testing automated programs, predicting how they will act when let loose on the world, and how they'll act when faced with unpredictable situations. Indeed, when framed like that, it looks less like a field for “the singularitarian cultists at LW”, and more like a narrow-but-important specialty in which quite a bit of money might be made.

 

After all, I want my self-driving car.

 

(To the actual researchers in FAI – I'm sorry if I'm stretching the field's definition to include more than it does or should. If so, please correct me.)

Comment author: ancientcampus 23 August 2014 06:18:12PM 1 point [-]

Regarding the "reducing mortality" example, in biostats, mortality is "deaths due to X, divided by population". So "reducing cardiovascular mortality" would be dangerous, because it might kill its patients with a nerve poison. Reducing general mortality, though, shouldn't cause it to kill people, as long as it agrees with your definition of "death." (Presumably you would also have it list all side effects, which SHOULD catch the nerve poison etc.)
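A toy numeric sketch (made-up figures) of why the two objectives diverge: a drug that cures heart disease but kills some patients outright can lower cause-specific mortality while raising all-cause mortality, and only the second objective catches the harm:

```python
# Hypothetical numbers: a drug prevents 250 cardiovascular deaths in a
# population of 100,000, but its toxicity adds 400 deaths from other causes.
population = 100_000
cardio_deaths_before, other_deaths_before = 300, 700

cardio_deaths_after = cardio_deaths_before - 250   # 50
other_deaths_after = other_deaths_before + 400     # 1100

# Cause-specific mortality improves, so an optimizer targeting it is happy:
cardio_mortality = cardio_deaths_after / population
print(cardio_mortality)  # 0.0005, down from 0.003

# All-cause mortality gets worse, exposing the nerve-poison-style failure:
all_cause_mortality = (cardio_deaths_after + other_deaths_after) / population
print(all_cause_mortality)  # 0.0115, up from 0.010
```

The arithmetic is trivial, but it shows why the choice of denominator and numerator in the coded objective does real safety work.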

Comment author: Baughn 22 August 2014 12:24:39PM *  1 point [-]

Reducing, through reducing the population beforehand.

I am sorely tempted to adjust its goal. Since this is a narrow AI, it shouldn't be smart enough to be friendly; we can't encode the real utility function into it, even if we knew what it was. I wonder if that means it can't be made safe, or just that we need to be careful?

Comment author: ancientcampus 23 August 2014 06:13:13PM 1 point [-]

Best I can tell, the lesson is to be very careful with how you code the objective.

Comment author: HungryHobo 21 August 2014 02:24:01PM *  9 points [-]

Thing is: a narrow AI that doesn't model human minds, or their attempts to disrupt its strategies, isn't going to hide how it plans to achieve its goal.

So you build your narrow super-medicine-bot and ask it to plan out how it will achieve the goal you've given it and to provide a full walkthrough and description.

It's not a general AI; it doesn't have any programming for understanding lying or misleading anyone, so it lays out the plan in full for the human operator. (Why would it not?)

who promptly changes the criteria for success and tries again.

Comment author: ancientcampus 23 August 2014 06:11:52PM 1 point [-]

I think this sums it up well. To my understanding, it would only require someone "looking over its shoulder", asking for its specific objective for each drug and the expected results of the drug. I doubt a "limited intelligence" would be able to lie. That is, unless it somehow mutated/accidentally became a more general AI, but then we've jumped rails into a different problem.

It's possible that I'm paying too much attention to your example, and not enough attention to your general point. I guess the moral of the story is, though, "limited AI can still be dangerous if you don't take proper precautions", or "incautiously coded objectives can be just as dangerous in limited AI as in general AI". Which I agree with, and is a good point.

Comment author: ancientcampus 23 August 2014 06:02:30PM 1 point [-]

So, there's a fair amount of interest here in post-singularity life-preserving things like cryogenics, uploading one's mind to a computer system, etc. There's a videogame on sale at the moment called "Master Reboot", where you wake up after having uploaded your mind, and something inevitably goes wrong (because otherwise there would be no story). The general impression I've gathered from others is "mediocre low-budget game, interesting concept". I figured someone here may find it their cup of tea.

If you're interested, it's on sale in the Humble Weekly Bundle until 2 PM on 8/28 - bundled with 4 other games for ~$8. You can watch the trailer here, or find an honest video review here.

Comment author: Metus 20 August 2014 09:19:09PM *  2 points [-]
  • Installed RescueTime to track where I spend time. I hardly ever check the dashboard so I don't think it's very effective.

Did the same with the same result. It falls under the category of information that is easy to gather but I don't base actions on, so it is useless in the literal sense.

  • On Reddit, my default settings only show posts for the latest months, so in the few subreddits I follow regularly, there'll rarely be new things (and I avoid looking at other feeds like new or the front page), and I don't worry about missing things. This doesn't make visiting reddit very rewarding, but that's a feature :)

I could block Reddit completely and send the top posts from the week to my kindle on a weekly basis. Though blocking websites usually doesn't help me.

  • I do regularly cull low quality stuff from my RSS feeds, so I rarely have that much

The problem here is that I don't have low quality feeds, but that they are not high quality in regular fashion, meaning that I sometimes get good content. Though I imagine I could look for substitute streams that are more consistent in their quality and/or figure out a way to filter the noise.

  • I occasionally do pomodoros (not a fully ingrained habit yet), which works on getting myself to stay focused.

That I will have to try. But it does not seem like they solve a problem I have, namely wasting my time on consuming information I actually don't care about.

  • I have no fear of "missing some information", that's just silly, in ten years I don't think my life will be changed because I didn't read a blog post or some news. Most journalism is a waste of time anyway, reading wikipedia or textbooks is more effective.

I regularly get great information from lesswrong, reddit, hacker news and my RSS feeds, which seems to be the exact problem. Cutting it all out completely and replacing it with textbooks and wikipedia seems too extreme.

Comment author: ancientcampus 23 August 2014 05:59:29PM *  1 point [-]

Installed RescueTime to track where I spend time. I hardly ever check the dashboard so I don't think it's very effective.

Did the same with the same result. It falls under the category of information that is easy to gather but I don't base actions on, so it is useless in the literal sense.

I also had the same experience. I couldn't have phrased the above better.

Comment author: SoerenMind 11 March 2014 04:11:36PM *  2 points [-]

There's an Anki deck on "The 20 rules of formulating knowledge [in SRS]". It's highly recommended for frequent Anki users. Here are some examples:

  • Start with the big picture
  • Refer to other memories
  • Use mnemonic techniques
  • Use imagery
  • Use graphic deletion (e.g. for diagrams, anatomy etc.)
  • Avoid sets ("contraindications of Metronidazole" would probably be a set) ...

So it seems that many of the points you mention are addressed if you use Anki effectively. Your post makes sense though: in my impression, 1) most people are not using it as effectively as they could, 2) it's not obvious how to use it effectively, and 3) effective use of SRS takes time and practice. There are certainly also cases where it's just not the best technique.

Article: http://www.supermemo.com/articles/20rules.htm Deck: http://alexvermeer.com/download/How-to-Formulate-Knowledge.anki

PS: I second the post on memory palaces, sounds really interesting!

Comment author: ancientcampus 18 March 2014 04:22:42PM *  0 points [-]

For what it's worth: Though I do not claim to be a perfect user of SRS flashcards, I used them intensively for 3 years of medical school, constantly refining my technique. Many people here have suggested ways to improve my strategies. I have not yet seen an idea that I have not already tried extensively. Though I'm far from perfect, I think it's safe to say I have a better understanding than most beginners. There certainly is room for me to improve, but not much. If someone is considering using SRS long term for high volumes in medical school, here is my advice: it is possible a Perfect SRS User could use it more effectively than I did, but if you haven't already used SRS for years, you aren't such a person.

I never read that article, but I figured out many of those on my own. I agree with many of them, disagree with some. My input, for those that use it:

-Cloze deletion is simple, but to me, it is far too easy to "guess the teacher's password" using that technique, and is of limited use. It's great for high-school level fact regurgitation, but less useful for post-graduate stuff. You will quickly become good at the deck, but it does not strongly help your understanding of the material. That's an important point: your skill at answering questions in the deck does not necessarily translate to your skill at answering questions in real life.

-Graphic deletion - I used to do this all the time, but it is really time consuming to set up. I consider myself fast with an image-editor, but it's still a big drain. (Again, this is more of an issue in high volume) It also runs into the Cloze deletion problem.

-Use imagery: heck yes. I highly agree, in any situation (flashcards or no)

-Any technique splitting a larger whole into many smaller flashcards (the article lists several): This is possibly the WORST suggestion for high volume. While it can certainly be useful, in high volume I have found mental fatigue to become an issue. If you don't include the entire whole, you miss out on the big picture in a situation where the big picture truly is important. If you DO include the whole, you run into the cloze deletion "guessing the teacher's password" problem. That said, it has its uses in smaller volume, but I will never again use it in a high-volume deck.

To give an example: As the article suggests, I used to take a diagram, set up graphic deletion (make a series of images where a single element was blotted out), and run through the cards.

1) this takes a lot of startup time

2) Even ignoring the time to make the cards, I found reviewing the cards to be more time consuming than simply looking at the diagram, covering up the labels, and attempting to recall.

3) You get no practice recalling the diagram from memory

4) This technique is most effective if you will later see that exact diagram in real life or on the test; otherwise, I argue it is a pitfall for guessing the teacher's password and provides less intuitive understanding of the diagram.

The strength of SRS comes from not wasting time on the easy parts and only spending time on the hard parts of the diagram. The theory is, after the first two cycles, you're only reviewing the "hard" parts of the diagram. On the other hand, you've spent more time making the cards and more time on the first and second card cycles, you're taking a big hit to the "big picture" style, and you get no practice conjuring the diagram itself from memory. Ignoring the big-picture and general-understanding elements: if SRS provides any time benefit for the rote memorization versus going without SRS cards, I would only expect the benefit to "catch up" after three weeks at BARE minimum; for me (for fatigue reasons in high volume) I pin the crossing point at 3 to 6 months, assuming it's an unintuitive diagram I use infrequently enough that I would forget it without review. I also argue that it provides a weaker general understanding of the diagram as a whole.

Comment author: Metus 10 March 2014 07:44:28PM 41 points [-]

If there's enough demand on LW I can write up a summary.

Please do.

Comment author: ancientcampus 18 March 2014 03:28:35PM 3 points [-]

Now that this topic is buried on page 2, I don't know if anyone will see this post. However, I've begun work on my tutorial. I intend to do a "demo", constructing a memory palace. Is there a particular list (of about 5-9 items) that people might find universally useful? Memory palaces really need to be constructed by the individual, but for the demo, I'd prefer to do something at least mildly relevant.
