
[Link] Barack Obama's opinions on near-future AI [Fixed]

3 scarcegreengrass 12 October 2016 03:46PM

Learning values versus learning knowledge

4 Stuart_Armstrong 14 September 2016 01:42PM

I just thought I'd clarify the difference between learning values and learning knowledge. There are some more complex posts
about the specific problems with learning values, but here I'll just clarify why there is a problem with learning values in the first place.

Consider the term "chocolate bar". Defining that concept crisply would be extremely difficult. But nevertheless it's a useful concept. An AI that interacted with humanity would probably learn that concept to a sufficient degree of detail. Sufficient to know what we meant when we asked it for "chocolate bars". Learning knowledge tends to be accurate.

Contrast this with the situation where the AI is programmed to "create chocolate bars", but with the definition of "chocolate bar" left underspecified, for it to learn. Now it is motivated by something other than accuracy. Before, knowing exactly what a "chocolate bar" was would have been solely to its advantage. But now it must act on its definition, so it has cause to modify that definition, to make these "chocolate bars" easier to create. This is essentially Goodhart's law: by making a definition part of a target, it will no longer remain an impartial definition.

What will likely happen is that the AI will have a concept of "chocolate bar" that it created itself, specifically for ease of accomplishing its goals ("a chocolate bar is any collection of more than one atom, in any combination"), and a second concept, "Schocolate bar", that it will use to internally designate genuine chocolate bars (which it will still find useful to do). When we programmed it to "create chocolate bars, here's an incomplete definition D", what we really did was program it to find the easiest things to create that are compatible with D, and designate them "chocolate bars".
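To make the failure mode concrete, here is a minimal sketch (every definition and object here is invented for the example): an agent scored only by an incomplete definition D produces the cheapest thing compatible with D, not anything we would call a chocolate bar.

```python
def D(obj):
    """An incomplete definition of "chocolate bar": contains cocoa and is solid."""
    return obj["cocoa_g"] > 0 and obj["solid"]

candidates = [
    {"name": "real chocolate bar",  "cocoa_g": 30.0,  "solid": True,  "cost": 2.00},
    {"name": "speck of cocoa dust", "cocoa_g": 0.001, "solid": True,  "cost": 0.0001},
    {"name": "hot cocoa drink",     "cocoa_g": 10.0,  "solid": False, "cost": 0.50},
]

# An agent told to "create chocolate bars" with only D to go on produces
# the cheapest thing compatible with D -- the Goodhart failure above:
cheapest = min((c for c in candidates if D(c)), key=lambda c: c["cost"])
```

An agent merely *predicting* which candidates humans call chocolate bars would happily learn the full concept; it's only when D enters the objective that the degenerate "speck of cocoa dust" answer becomes attractive.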

 

This is the general counter to arguments like "if the AI is so smart, why would it do stuff we didn't mean?" and "why don't we just make it understand natural language and give it instructions in English?"

A collection of Stubs.

-6 Elo 06 September 2016 07:24AM

In light of SDR's comment yesterday, instead of writing a new post today I compiled my list of ideas I wanted to write about, partly to lay them out there and see if any stood out as better than the rest, and partly so that maybe they would be a little more out in the wild than if I held them until I got around to them.  I realise there is no thesis in this post, but I figured it would be better to write one of these than to write each in its own post with the potential to be good or bad.

Original post: http://bearlamp.com.au/many-draft-concepts/

I create ideas at about the rate of 3 a day, without trying to.  I write at about a rate of 1.5 a day.  Which leaves me always behind.  Even if I write about the best ideas I can think of, some good ones might never be covered.  This is an effort to draft out a good stack of them so that maybe it can help me not have to write them all out, by better defining which ones are the good ones and which ones are a bit more useless.

With that in mind, in no particular order - a list of unwritten posts:


From my old table of contents

Goals of your lesswrong group – As a guided walkthrough exercise in deciding why the group exists and what it should do.  Help people work out what they want out of it (do people know?): setting goals, doing something particularly interesting or routine, having fun, changing your mind, being activists in the world around you.  Whatever the reasons you care about, work them out and move towards them.  Nothing particularly groundbreaking in the process here.  Sit down with the group with pens and paper, maybe run a resolve cycle, maybe talk about ideas and settle on a few, then decide how to carry them out.  Relevant links: Sydney meetup, group resources (estimate 2hrs to write)

Goals interrogation + Goal levels – Goal interrogation is about asking <is this thing I want to do actually a goal of mine> and <is my current plan the best way to achieve that>; goal levels are something out of Sydney Lesswrong that help you have mutual long term goals and supporting short term goals.  There are 3 main levels: Dream, Year, Daily (or approximately so).  You want dream goals like going to the moon, yearly goals like getting another year further in your degree, and daily goals like studying today that contribute to the upper-level goals.  Any time you are feeling lost you can look at the guide you set out for yourself and use it to direct you. (3hrs)

How to human – A zero-to-human guide. A guide for basic functionality of a humanoid system. Something of a conglomeration of Maslow, mental health, "so you feel like shit" and systems thinking.  Am I conscious? Am I breathing? Am I bleeding or injured (major or minor)? Am I falling or otherwise in danger and about to cause the earlier questions to return false?  Do I know where I am?  Am I safe?  Do I need to relieve myself (or other bodily functions, i.e. itchy)?  Have I had enough water? Sleep? Food?  Is my mind altered (alcohol or other drugs)?  Am I stuck with sensory input I can't control (noise, smells, things touching me)?  Am I too hot or too cold?  Is my environment too hot or too cold?  Or unstable?  Am I with people or alone? Is this okay?  Am I clean (showered, teeth, other personal cleaning rituals)?  Have I had some sunlight and fresh air in the past few days?  Have I had too much sunlight or wind in the past few days?  Do I feel stressed?  Okay?  Happy?  Worried?  Suspicious?  Scared? Was I doing something?  What am I doing?  Do I want to be doing something else?  Am I being watched (is that okay?)?  Have I interacted with humans in the past 24 hours?  Have I had alone time in the past 24 hours?  Do I have any existing conditions I can run a check on - i.e. depression?  Are my valuables secure?  Are the people I care about safe?  (4hrs)

List of common strategies for getting shit done – things like scheduling/allocating time, pomodoros, committing to things externally, complice, beeminder, other trackers. (4hrs)

List of superpowers and kryptonites – Asking the questions "what are my superpowers?" and "what are my kryptonites?".  Knowledge is power; working with your powers and working out how to avoid your kryptonites is a method to improve yourself.  What are you really good at, and what do you absolutely suck at and would be better delegating to other people?  The more you know about yourself, the more you can do the right thing by your powers or weaknesses and save yourself trouble.

List of effective behaviours – small life-improving habits that add together to make awesomeness from nothing. And how to pick them up.  Short list: toothbrush in the shower, scales in front of the fridge, healthy food in the most accessible position in the fridge, make the unhealthy stuff a little more inaccessible, keep some clocks fast - i.e. the clock in your car (so you get there early), prepare for expected barriers ahead of time (i.e. packing the gym bag and leaving it at the door), and more.

Stress prevention checklist – feeling off? You want to have already outsourced the hard work for “things I should check on about myself” to your past self. Make it easier for future you. Especially in the times that you might be vulnerable.  Generate a list of things that you want to check are working correctly.  i.e. did I drink today?  Did I do my regular exercise?  Did I take my medication?  Have I run late today?  Do I have my work under control?

Make it easier for future you. Especially in the times that you might be vulnerable. – as its own post in curtailing bad habits that you can expect to happen when you are compromised.  inspired by candy-bar moments and turning them into carrot-moments or other more productive things.  This applies beyond diet, and might involve turning TV-hour into book-hour (for other tasks you want to do instead of tasks you automatically do)

A p=np approach to learning – Sometimes you have to learn things the long way; but sometimes there is a short cut. Where you could say, “I wish someone had just taken me on the easy path early on”. It’s not a perfect idea; but start looking for the shortcuts where you might be saying “I wish someone had told me sooner”. Of course the answer is, “but I probably wouldn’t have listened anyway” which is something that can be worked on as well. (2hrs)

Rationalists guide to dating – Attraction. Relationships. Doing things with a known preference. Don’t like unintelligent people? Don’t try to date them. Think first; then act - and iteratively experiment; an exercise in thinking hard about things before trying trial-and-error on the world. Think about places where you might meet the kinds of people you want to meet, then use strategies that go there instead of strategies that flop in the general direction of progress.  (half written)

Training inherent powers (weights, temperatures, smells, estimation powers) – practice makes perfect right? Imagine if you knew the temperature always, the weight of things by lifting them, the composition of foods by tasting them, the distance between things without measuring. How can we train these, how can we improve.  Probably not inherently useful to life, but fun to train your system 1! (2hrs)

Strike to the heart of the question. The strongest one; not the one you want to defeat – Steelman not Strawman. Don’t ask “how do I win at the question”; ask, “am I giving the best answer to the best question I can give”.  More poetic than anything else - this post would enumerate the feelings of victory and what not to feel victorious about, as well as trying to feel what it's like to be on the other side of the discussion to yourself, frustratingly trying to get a point across while a point is being flung at yourself. (2hrs)

How to approach a new problem – similar to the “How to solve X” post.  But considerations for working backwards from a wicked problem, as well as trying “The least bad solution I know of”, Murphy-jitsu, and known solutions to similar problems.  Step 0. I notice I am approaching a problem.

Turning Stimming into a flourish – For autists: turning a flaw into something presentable.

How to manage time – estimating the length of future tasks (and more), covered in the notch system, and doing tasks in a different order.  But presented on its own.

Spices – Adventures in sensory experience land.  I ran an event of spice-smelling/guessing for a group of 30 people.  I wrote several documents in the process about spices and how to run the event.  I want to publish these.  As an exercise - it's a fun game of guess-the-spice.

Wing it VS Plan – All of the what, why, who, and what you should do of the two.  Some people seem to be the kind of person who is always just winging it.  In contrast, some people make ridiculously complicated plans that work.  Most of us are probably somewhere in the middle.  I suggest that the more of a planner you can be the better because you can always fall back on winging it, and you probably will.  But if you don't have a plan and are already winging it - you can't fall back on the other option.  This concept came to me while playing ingress, which encourages you to plan your actions before you make them.

On-stage bias – The changes we make when we go onto a stage include extra makeup to adjust for the bright lights, and speaking louder to adjust for the audience which is far away. When we consider the rest of our lives, maybe we want to appear specifically X (i.e. confident, friendly), so we should change ourselves to suit the natural skews in how we present based on the "stage" we are appearing on.  Appear as the person you want to appear as, not the person you naturally appear as.

Creating a workspace – considerations when thinking about a “place” of work, including desk, screen, surrounding distractions, and basically any factors that come into it.  Similar to how the very long list of sleep maintenance suggestions covers environmental factors in your sleep environment but for a workspace.


Posts added to the list since then

Doing a cost|benefit analysis - This is something we rely on when enumerating the options and choices ahead of us, but something I have never explicitly looked into.  Some costs that can get overlooked include: Time, Money, Energy, Emotions, Space, Clutter, Distraction/Attention, Memory, Side effects, and probably more.  I'd like to see a How to X guide for CBA. (wikipedia)

Extinction learning at home - A cross between intermittent reward (the worst kind of addiction) and what we know about extinguishing it, then applying that to "convincing" yourself to extinguish bad habits by experiential learning.  Uses the CFAR internal Double Crux technique: precommit yourself to a challenge, for example - "If I scroll through 20 facebook posts in a row and they are all not worth my time, I will be convinced that I should spend less time on facebook because it's not worth my time".  Adjust 20 to whatever position your double crux believes to be true, then run a test and iterate.  You have to genuinely agree with the premise before running the test.  This can work for a number of committed habits which you want to extinguish.  (new idea as of the writing of this post)

How to write a dating ad - A suggestion to include information that is easy to ask questions about (this is hard).  For example; don't write, "I like camping", write "I like hiking overnight with my dog", giving away details in a way that makes them worth inquiring about.  The same reason applies to why writing "I'm a great guy" is really not going to get people to believe you, as opposed to demonstrating the claim. (show, don't tell)

How to give yourself aversions - an investigation into aversive actions and potentially how to avoid collecting them when you have a better understanding of how they happen.  (I have not done the research and will need to do that before publishing the post)

How to give someone else an aversion - similar to above, we know we can work differently to other people, and at the intersection of that is a misunderstanding that can leave people uncomfortable.

Lists - Creating lists is a great thing, currently in draft - some considerations about what lists are, what they do, what they are used for, what they can be used for, where they come in handy, and the suggestion that you should use lists more. (also some digital list-keeping solutions)

Choice to remember the details - this stems from choosing to remember names, a point in the conversation where people sometimes tune out.  As a mindfulness concept you can choose to remember the details. (short article, not exactly sure why I wanted to write about this)

What is a problem - On the path of problem solving, understanding what a problem is will help you to understand how to attack it.  Nothing more complicated than this picture to explain it.  The barrier is a problem.  This doesn't seem important on its own, but as a foundation for thinking about problems it's good to have it sitting around somewhere.

[image: "what is a problem" diagram]

How to/not attend a meetup - for anyone who has never been to a meetup, and anyone who wants the good tips on etiquette for being the new guy in a room of friends.  First meetup: shut up and listen, try not to be too much of an impact on the existing meetup group or you might misunderstand the culture.

Noticing the world, Repercussions and taking advantage of them - There are regularly world events that I notice.  Things like the olympics, Pokemon go coming out, the (recent) spaceX rocket failure.  I try to notice when big events happen and try to think about how to take advantage of the event or the repercussions caused by that event.  Motivated to think not only about all the olympians (and the fuss leading up to the olympics), but all the people at home who signed up to a gym because of the publicity of the competitive sport.  If only I could get in on the profit of gym signups...

Least-good but only solution I know of - So you know of a solution, but it's rubbish.  Or probably is.  Also you have no better solutions.  Treat this solution as the best solution you have (because it is) and start implementing it; as you do that, keep looking for other solutions.  But at least you have a solution to work with!

Self-management thoughts - When you ask yourself, "am I making progress?", "do I want to be in this conversation?" and other self management thoughts.  And an investigation into them - it's a CFAR technique but their writing on the topic is brief.  (needs research)

instrumental supply-hoarding behaviour - A discussion about the benefits of hoarding supplies for future use.  Covering also - what supplies are not a good idea to store, and what supplies are.  Maybe this will be useful for people who store things for later days, and hopefully help to consolidate and add some purposefulness to their process.

list of sub groups that I have tried - Before running my local lesswrong group I partook in a great deal of other groups.  This was meant as a list with comments on each group.

If you have nothing to do – make better tools for use when real work comes along - This was probably going to be a poetic style motivation post about exactly what the title suggests.  Be Prepared.

what other people are good at (as support) - When reaching out for support, some people will be good at things that other people are not.  For example - emotional support, time to spend on each other, ideas for solving your problems.  Different people might be better or worse than others.  Thinking about this can make your strategies towards solving your problems a bit easier to manage.  Knowing what works and what does not work, or what you can reliably expect when you reach out for support from some people - is going to supercharge your fulfilment of those needs.

Focusing - An already-written guide to Eugene Gendlin's Focusing technique, which needs polishing before publishing.  The short form: treat your system 1 as a very powerful machine that understands your problems and their solutions better than you do; use your system 2 to ask it questions and see what it returns.

Rewrite: how to become a 1000 year old vampire - I got as far as breaking down this post and got stuck at draft form before rewriting.  Might take another stab at it soon.

Should you tell people your goals? This thread in a post.  In summary: It depends on the environment, the wrong environment is actually demotivational, the right environment is extra motivational.


Meta: this took around 4 hours to write up.  Which is ridiculously longer than usual.  I noticed a substantial number of breaks being taken - not sure if that relates to the difficulty of creating so many summaries or just me today.  Still.  This experiment might help my future writing focus/direction so I figured I would try it out.  If you see an idea of particularly high value I will be happy to try to cover it in more detail.

Darknet Mining for Proactive Cybersecurity Threat Intelligence

3 morganism 06 August 2016 01:19AM

They are using machine learning to comb the darknets, capturing about 300 threats a week.

It achieves roughly 90% accuracy at recognizing malicious hacking tools and backdoors offered for sale, and roughly 80% at identifying vulnerability discussions on hacker forums.

"These threat warnings include information on newly developed malware and exploits that have not yet been deployed in a cyber-attack"

https://arxiv.org/abs/1607.08583

Google Deepmind and FHI collaborate to present research at UAI 2016

23 Stuart_Armstrong 09 June 2016 06:08PM

Safely Interruptible Agents

Oxford academics are teaming up with Google DeepMind to make artificial intelligence safer. Laurent Orseau, of Google DeepMind, and Stuart Armstrong, the Alexander Tamas Fellow in Artificial Intelligence and Machine Learning at the Future of Humanity Institute at the University of Oxford, will be presenting their research on reinforcement learning agent interruptibility at UAI 2016. The conference, one of the most prestigious in the field of machine learning, will be held in New York City from June 25-29. The paper which resulted from this collaborative research will be published in the Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI).

Orseau and Armstrong’s research explores a method to ensure that reinforcement learning agents can be repeatedly safely interrupted by human or automatic overseers. This ensures that the agents do not “learn” about these interruptions, and do not take steps to avoid or manipulate the interruptions. When there are control procedures during the training of the agent, we do not want the agent to learn about these procedures, as they will not exist once the agent is on its own. This is useful for agents that have a substantially different training and testing environment (for instance, when training a Martian rover on Earth, shutting it down, replacing it at its initial location and turning it on again when it goes out of bounds—something that may be impossible once alone unsupervised on Mars), for agents not known to be fully trustworthy (such as an automated delivery vehicle, that we do not want to learn to behave differently when watched), or simply for agents that need continual adjustments to their learnt behaviour. In all cases where it makes sense to include an emergency “off” mechanism, it also makes sense to ensure the agent doesn’t learn to plan around that mechanism.

Interruptibility has several advantages as an approach over previous methods of control. As Dr. Armstrong explains, “Interruptibility has applications for many current agents, especially when we need the agent to not learn from specific experiences during training. Many of the naive ideas for accomplishing this—such as deleting certain histories from the training set—change the behaviour of the agent in unfortunate ways.”

In the paper, the researchers provide a formal definition of safe interruptibility, show that some types of agents already have this property, and show that others can be easily modified to gain it. They also demonstrate that even an ideal agent that tends to the optimal behaviour in any computable environment can be made safely interruptible.
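The paper's formal treatment is beyond a press release, but the core intuition can be sketched with a toy off-policy learner (the corridor MDP, interruption scheme, and all hyperparameters below are invented for illustration, not taken from the paper): because Q-learning bootstraps on max over Q(s', a') rather than on the action actually taken next, an overseer who occasionally forces a "stay" action does not bias the values the agent learns.

```python
import random

def train(episodes=2000, p_interrupt=0.3, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Corridor MDP: states 0, 1, 2; action 0 = stay, action 1 = step right.
    # Reaching state 2 ends the episode with reward 1.
    Q = [[0.0, 0.0] for _ in range(3)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):  # per-episode step cap
            # epsilon-greedy action choice
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            # The overseer sometimes interrupts, forcing "stay".
            if rng.random() < p_interrupt:
                a = 0
            s2 = min(s + a, 2)
            r = 1.0 if s2 == 2 else 0.0
            # Off-policy (Q-learning) update: bootstrap on max_a' Q(s2, a'),
            # not on the (possibly interrupted) action actually taken next,
            # so the interruptions leave the learned values unbiased.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == 2:
                break
    return Q
```

Despite being forced to "stay" 30% of the time, the learner's greedy policy still heads right from every state, and Q(1, right) converges to the true value of 1: the interruptions change what the agent does during training, but not what it learns to value.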

These results will have implications in future research directions in AI safety. As the paper says, “Safe interruptibility can be useful to take control of a robot that is misbehaving… take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform….” As Armstrong explains, “Machine learning is one of the most powerful tools for building AI that has ever existed. But applying it to questions of AI motivations is problematic: just as we humans would not willingly change to an alien system of values, any agent has a natural tendency to avoid changing its current values, even if we want to change or tune them. Interruptibility and the related general idea of corrigibility, allow such changes to happen without the agent trying to resist them or force them. The newness of the field of AI safety means that there is relatively little awareness of these problems in the wider machine learning community.  As with other areas of AI research, DeepMind remains at the cutting edge of this important subfield.”

On the prospect of continuing collaboration in this field with DeepMind, Stuart said, “I personally had a really illuminating time writing this paper—Laurent is a brilliant researcher… I sincerely look forward to productive collaboration with him and other researchers at DeepMind into the future.” The same sentiment is echoed by Laurent, who said, “It was a real pleasure to work with Stuart on this. His creativity and critical thinking as well as his technical skills were essential components to the success of this work. This collaboration is one of the first steps toward AI Safety research, and there’s no doubt FHI and Google DeepMind will work again together to make AI safer.”

For more information, or to schedule an interview, please contact Kyle Scott at fhipa@philosophy.ox.ac.uk

A Second Year of Spaced Repetition Software in the Classroom

29 tanagrabeast 01 May 2016 10:14PM

This is a follow-up to last year's report. Here, I will talk about my successes and failures using Spaced Repetition Software (SRS) in the classroom for a second year. The year's not over yet, but I have reasons for reporting early that should become clear in a subsequent post. A third post will then follow, and together these will constitute a small sequence exploring classroom SRS and the adjacent ideas that bubble up when I think deeply about teaching.

Summary

I experienced net negative progress this year in my efforts to improve classroom instruction via spaced repetition software. While this is mostly attributable to shifts in my personal priorities, I have also identified a number of additional failure modes for classroom SRS, as well as additional shortcomings of Anki for this use case. My experiences also showcase some fundamental challenges to teaching-in-general that SRS depressingly spotlights without being any less susceptible to them. Regardless, I am more bullish than ever about the potential for classroom SRS, and will lay out a detailed vision for what it can be in the next post.

continue reading »

Education as Entertainment and the Downfall of LessWrong

9 SquirrelInHell 04 March 2016 02:06PM

Note 1: I'm not very serious about the second part of the title; I just thought it sounded catchier. I'm a long-time lurker writing here for the first time, and it's not my intention to alienate anyone. Also, hi, nice to meet you. Please leave a comment to achieve a result of making me happy about you having left a comment. But let's get to the point.

I think you might be familiar with TED Talks. Recall the last time you watched one, and how you felt while doing it.

[BZRT BZRT sound of imagination working]

In my case, I often got the feeling that I was learning something valuable while watching most TED Talks. The speakers are (mostly) obviously passionate and intelligent people, speaking about important matters they care about a lot. (Granted, I probably haven't watched more than a dozen TED Talks in all my life, so my sample is quite small, but I think it isn't very unrepresentative.)

But at some point, I started asking myself afterwards:

So, what have I actually learned?

Which translates in my internal dialect to:

For each major point, give a one-sentence summary and at least one example of how I could apply it.

(Note 2: don't treat this "one sentence summary" thing too strictly - of course it's only a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough that it's installable as a subconscious trigger-action.)

And afterwards I could not state anything actually useful that I had learned from those "fascinating" videos (with at most one or two small exceptions).

This is exactly what I mean by "Education as Entertainment".

It's getting the enjoyable *feeling* of learning without any real progress.

[DUM DUM DUM sound of increasing dramatism]

And now, what if you use this concept to look at rationality materials?

For me, reading Eliezer's core braindump (basically the content of "From AI to Zombies"), as well as the braindumps (in the form of blogs) of several other people from the LW community, had definite learning value.

I take notes when I read those, and I have an accountability system in place that enables me to make sure I follow up on all the advice I give to myself, test the new ideas, and improve/drop/replace/implement as needed.

However, when I read (a significant part of) the content produced by the "modern" community-powered-LessWrong, I classify its actual learning value at around the same level as TED Talks.

Or YouTube videos with cats, only those don't give me the *impression* that I'm learning something.

THE END

Please let me know what you think.

Final Note: Please take my remarks with a grain of salt. What I write is meant to inspire thoughts in you, not to represent my best factual knowledge about the LW community.

Goal completion: noise, errors, bias, prejudice, preference and complexity

4 Stuart_Armstrong 18 February 2016 02:37PM

A putative new idea for AI control; index here.

This is a preliminary look at how an AI might assess and deal with various types of errors and uncertainties, when estimating true human preferences. I'll be using the circular rocket model to illustrate how these might be distinguished by an AI. Recall that the rocket can accelerate by -2, -1, 0, 1, and 2, and the human wishes to reach the space station (at point 0 with velocity 0) and avoid accelerations of ±2. In what follows there will generally be some noise, so to make the whole thing more flexible, assume that the space station is a bit bigger than usual, covering five squares. So "docking" at the space station means reaching {-2,-1,0,1,2} with 0 velocity.
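For concreteness, the toy model's dynamics can be sketched as follows (the track length N and the order of the velocity/position update are assumptions; the post doesn't fully specify them):

```python
# A minimal sketch of the circular rocket toy model. Position is taken
# modulo an assumed track length N; the human's goal is to reach the
# five-square station {-2, -1, 0, 1, 2} with velocity 0, while avoiding
# accelerations of +/-2.
N = 100  # assumed track length

def step(pos, vel, acc):
    """One timestep: apply the chosen acceleration, then move by the new velocity."""
    assert acc in (-2, -1, 0, 1, 2)
    vel += acc
    pos = (pos + vel) % N
    return pos, vel

def docked(pos, vel):
    """Docking means sitting on one of the five station squares with zero velocity."""
    return vel == 0 and pos % N in {N - 2, N - 1, 0, 1, 2}
```

Under this reading, square N-2 is the same place as square -2, which is what makes the track "circular".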



continue reading »

[link] Desiderata for a model of human values

3 Kaj_Sotala 28 November 2015 07:25PM

http://kajsotala.fi/2015/11/desiderata-for-a-model-of-human-values/

Soares (2015) defines the value learning problem as

By what methods could an intelligent machine be constructed to reliably learn what to value and to act as its operators intended?

There have been a few attempts to formalize this question. Dewey (2011) started from the notion of building an AI that maximized a given utility function, and then moved on to suggest that a value learner should exhibit uncertainty over utility functions and then take “the action with the highest expected value, calculated by a weighted average over the agent’s pool of possible utility functions.” This is a reasonable starting point, but a very general one: in particular, it gives us no criteria by which we or the AI could judge the correctness of a utility function which it is considering.
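Dewey's proposal, as quoted above, can be sketched in a few lines (the action set, candidate utility functions, and weights below are invented for illustration):

```python
# A toy sketch of value learning with uncertainty over utility functions:
# pick the action with the highest expected value, averaged over a pool of
# candidate utility functions weighted by the agent's credence in each.
def best_action(actions, utility_pool):
    """utility_pool: list of (utility_function, weight) pairs."""
    def expected_value(a):
        return sum(w * u(a) for u, w in utility_pool)
    return max(actions, key=expected_value)

# Example: the agent is 90% confident value means "more x", 10% "less x".
pool = [(lambda a: a, 0.9), (lambda a: -a, 0.1)]
choice = best_action([-1, 0, 1], pool)
```

The sketch makes the gap in the definition visible: nothing in it says where the pool or the weights come from, or how the agent could tell a correct candidate utility function from an incorrect one, which is exactly the criticism raised above.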

To improve on Dewey’s definition, we would need to get a clearer idea of just what we mean by human values. In this post, I don’t yet want to offer any preliminary definition: rather, I’d like to ask what properties we’d like a definition of human values to have. Once we have a set of such criteria, we can use them as a guideline to evaluate various offered definitions.

Predicted corrigibility: pareto improvements

5 Stuart_Armstrong 18 August 2015 11:02AM

A putative new idea for AI control; index here.

Corrigibility allows an agent to transition smoothly from a perfect u-maximiser to a perfect v-maximiser, without seeking to resist or cause this transition.

And it's the very perfection of the transition that could cause problems; while u-maximising, the agent will not take the slightest action to increase v, even if such actions are readily available. Nor will it 'rush' to finish its u-maximising before transitioning. It seems that there's some possibility of improvements here.

I've already attempted one way of dealing with the issue (see the pre-corriged agent idea). This is another one.

 

Pareto improvements allowed

Suppose that an agent with corrigible algorithm A is following utility u currently, and estimates that there are probabilities pi that it will transition to utilities vi at midnight (note that these are utility function representatives, not affine classes of equivalent utility functions). At midnight, the usual corrigibility applies, making A indifferent to that transition, making use of such terms as E(u|u→u) (the expectation of u, given that the A's utility doesn't change) and E(vi|u→vi) (the expectation of vi, given that A's utility changes to vi).

But, in the meantime, there are expectations such as E({u,v1,v2,...}). These are A's best current estimates of the genuine expected values of the various utilities, given all it knows about the world and itself. They could be written more explicitly as E({u,v1,v2,...}|A), to emphasise that these expectations depend on the agent's own algorithm.

Then the idea is to modify the agent's algorithm so that Pareto improvements are possible. Call this modified algorithm B. B can select actions that A would not have chosen, conditional on:

  • E(u|B) ≥ E(u|A) and E(Σpivi|B) ≥ E(Σpivi|A).

There are two obvious ways we could define B:

  • B maximises u, subject to the constraint E(Σpivi|B) ≥ E(Σpivi|A).
  • B maximises Σpivi, subject to the constraint E(u|B) ≥ E(u|A).

In the first case, the agent maximises its current utility without sacrificing its future utility. This could apply, for example, to a ruby-mining agent that rushes to get its rubies to the bank before its utility changes. In the second case, the agent maximises its future expected utility without sacrificing its current utility. This could apply to a ruby-mining agent that's soon to become a sapphire-mining agent: it then starts to look around and collect some early sapphires as well.

Now, it would seem that doing this must cost it some ruby-mining ability. However, the Pareto comparison is with E("rubies in bank"|A, expected future transition), not with E("rubies in bank"|A, "A remains a ruby-mining agent forever"). The difference is that A will behave as if it were maximising the second term, and so might not go to the bank to deposit its gains before getting hit by the transition. So B can collect some early sapphires, also go to the bank to deposit some rubies, and thus end up ahead on both u and Σpivi.
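The two variants of B can be sketched with toy numbers; every figure below is invented for illustration, and A's default action is taken as a fixed baseline (since A evaluates u as if it would remain a u-maximiser forever, its default need not maximise the true E[u]):

```python
# Hypothetical expected values, conditional on each candidate action:
# (E[u | action], E[sum_i p_i * v_i | action]).
candidates = {
    "keep_mining":       (10.0, 0.0),  # A's default behaviour
    "mine_and_deposit":  (11.0, 1.0),  # banks the rubies before midnight
    "collect_sapphires": (9.0,  5.0),  # gains future utility, loses current
}
baseline = "keep_mining"

def variant1(cands, base):
    """Maximise current u, subject to E[sum p_i v_i | B] >= E[sum p_i v_i | A]."""
    _, bv = cands[base]
    admissible = [a for a, (u, v) in cands.items() if v >= bv]
    return max(admissible, key=lambda a: cands[a][0])

def variant2(cands, base):
    """Maximise expected future utility, subject to E[u | B] >= E[u | A]."""
    bu, _ = cands[base]
    admissible = [a for a, (u, v) in cands.items() if u >= bu]
    return max(admissible, key=lambda a: cands[a][1])

# Both variants find the Pareto improvement: deposit the rubies on the way.
b1 = variant1(candidates, baseline)
b2 = variant2(candidates, baseline)
```

The baseline action always satisfies its own constraint, so the constrained maximum can never do worse than A on either coordinate: the result is a Pareto improvement by construction.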

How to learn a new area X that you have no idea about.

12 Elo 18 August 2015 05:42AM

This guide is in response to a request in the open thread.  I would like to improve it; if you have an improvement to contribute I would be delighted to hear it!  I hope it helps.  It is meant as a written-down form of "wait, stop, think" before approaching a new area.

This list is meant to be suggestive, not limiting.

I realise there are many object-level opportunities for better strategies but I didn't want to cover them in this meta-strategy.

It would be very easy to strawman this list: step 1, for example, could be dismissed as a waste of time that anyone with half a brain doesn't need. However, if you steelman each point, it should make complete sense. (I would love this document to be stronger; if there is an obvious strawman I have probably missed it, so feel free to suggest how a point could obviously read in its steel form.)

 

Happy readings!


0. Make sure you have a growth mindset. Nearly anything can be learnt or improved on, aside from a few physical limits: being the best marathon runner is very difficult, but being a better marathon runner than you were yesterday is possible. (unknown time duration, changing one's mind)

 

  1. Make sure your chosen X is aligned with your actual goals (are you doing it because you want to?). When you want to learn a thing, is X that thing? (Example: if you want to exercise, maybe skiing isn't the best way to do it. Or maybe it is, because you live in snow country.) (5-10 minutes)
  2. Check that you want to learn X and that will be progress towards a goal (or is a terminal goal – i.e. learning to draw faces can be your terminal, or can help you to paint a person's portrait). (5 minutes, assuming you know your goals)
  3. Make a list of what you think that X is. Break it down. Followed by what you know about X, and if possible what you think you are missing about X. (5-30 minutes, no more than an hour)
  4. Do some research to confirm that your rough definition of X is actually correct. Confirm that what you know already is true, if not – replace that existing knowledge with true things about X. Do not jump into everything yet. (1-2 hours, no more than 5 hours)
  5. Figure out what experts in the area know (by topic area name), try to find what strategies experts in the area use to go about improving themselves. (expert people are usually a pretty good way to find things out) (1-2 hours, no more than about 5 hours)
  6. Find out what common mistakes are when learning X, and see if you can avoid them. (learn by other people's mistakes where possible as it can save time) (1-2 hours, no more than 5 hours)
  7. Check if someone is teaching about X. Chances are that someone is, and someone has listed what relevant things they teach about X. We live in the information age; it's probably all out there. If it's not, reconsider whether you are learning the right thing. (If no teaching material is out there, the area might be hard to master without trial and error the hard way.) (10-20mins, no more than 2 hours)
  8. Figure out the best resources on X. If this is taking too long; spend 10 minutes and then pick the best one so far. These can be books; people; wikipedia; Reddit or StackExchange; Metafilter; other website repositories; if X is actually safe – consider making a small investment and learn via trial and error. (i.e. frying an egg – the common mistakes probably won't kill you, you could invest in 50 eggs and try several methods to do it at little cost) (10mins, no more than 30mins)
  9. Confirm that this is still the original X, and not X2 or X3. (If you find you were actually looking for X2 or X3, go back over the early steps for Xn again.) (5mins)
  10. Consider writing to 5 experts and asking them for advice in X or in finding out about X. (5*20mins)
  11. Get access to the best resources possible. Estimate how much resource they will take to go over (time, money) and confirm you are okay with those investments. (postage of a book; a few weeks, 1-2 hours to order the thing maximum)
  12. Delve in; make notes as you go. If things change along the way, re-evaluate. (unknown, depends on the size of the area you are looking for.  consider estimating word-speed, total content volume, amount of time it will take to cover the territory)
  13. Write out the best things you needed to learn and publish them for others. (remembering you had foundations to go on – publish these as well) (10-20 hours, depending on the size of the field, possibly a summary of how to go about finding object-level information best)
  14. Try to find experiments you can conduct on yourself to confirm you are on the right track towards X, or ways to measure yourself. (Measurement or testing is one of the most effective ways to learn.) (1 hour per experiment, 10-20 experiments)
  15. Try to teach X to other people. You can be empowering their lives, and teaching is a great way to learn, also making friends in the area of X is very helpful to keep you on task and enjoying X. (a lifetime, or also try 5-10 hours first, then 50 hours, then see if you like teaching)

Update: includes suggestion to search reddit, StackExchange; other web sources for the best resource.

Update: time estimate guide.

 

Genosets

4 Clarity 09 August 2015 01:37AM

Since risk from individual SNPs 'should' not be aggregated to indicate an individual's overall risk from multiple sources of evidence, how are the magnitudes for genosets determined? Can Bayes' theorem or another method be used to interpret a Promethease report?
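For what it's worth, the naive Bayesian aggregation (the very thing the caveat above warns about, since it assumes the per-SNP likelihood ratios are independent) would look something like this; all numbers are invented:

```python
import math

# Naive log-odds aggregation over SNPs, assuming (often wrongly) that
# per-SNP likelihood ratios are independent. All figures are hypothetical.
prior_risk = 0.01                      # baseline population risk
likelihood_ratios = [1.3, 0.9, 2.1]    # invented per-SNP likelihood ratios

# Convert prior risk to log-odds, then add each SNP's log likelihood ratio.
log_odds = math.log(prior_risk / (1 - prior_risk))
for lr in likelihood_ratios:
    log_odds += math.log(lr)

# Convert back to a probability.
posterior = 1 / (1 + math.exp(-log_odds))
```

The independence assumption is exactly what linkage disequilibrium and population stratification break, which is presumably why such aggregation 'should' not be done.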

Even genetic epidemiology textbooks seem pessimistic about the usefulness of the genetic research underpinning precision medicine:

‘...for the repeated failure to replicate positive findings in genetic epidemiology (102; 103) and remains the subject of an important ongoing debate (101-105)’ (p. 26, chapter 1, An Introduction to Genetic Epidemiology)

The references in question are about the impact of population stratification on genetic association studies. That doesn’t seem to substantiate such a broad stroke about the non-replicability of genetic epidemiology. I don't know what to make of these findings.

Here is a link to a screenshot of those references

It surprises me that entrepreneurial machine learning analysts don't push for genetic research identifying how combinatorial patterns of genes characterise individual risk. It seems that if and once they can get hold of that information, the path from genetic science to consumer-actionable health information is bridged. So where are the 'lean gene learning machine' startups? I certainly don't have the lean gene to do it myself: I don't know machine learning.

Regulatory issues seem like the biggest hurdle. To the best of my google-fu, 23andMe doesn't even disclose what its 'Established Research' genes are. Perhaps once the regulatory hurdles are surmounted, lots of useful research will flood out.

 

Thinking like a Scientist

5 FrameBenignly 19 July 2015 02:43PM
I've often wondered why scientific thinking seems to be so rare.  What I mean by this is dividing problems into theory and empiricism: specifying your theory exactly and then looking for evidence to either confirm or deny it, or gathering evidence first and later forming an exact theory.

This is a bit narrower than the broader scope of rational thinking.  A lot of rationality isn't scientific.  Scientific methods don't just allow you to get a solution, but also to understand that solution.

For instance, a lot of early Renaissance tradesmen were rational, but not scientific.  They knew that a certain set of steps produced iron, but the average blacksmith couldn't tell you anything about chemical processes.  They simply did a set of steps and got a result.

Similarly, a lot of modern medicine is rational, but not too scientific.  A doctor sees something and it looks like a common ailment with similar symptoms they've seen often before, so they just assume that's what it is.  They may run a test to verify their guess.  Their job generally requires a gigantic memory of different diseases, but not too much knowledge of scientific investigation.

What's most damning is that our science curricula in schools don't teach much scientific thinking.

What we get instead is mostly useless facts.  We learn what a cell membrane is, or how to balance a chemical equation.  Learning about, say, the difference between independent and dependent variables is often left to circumstance.  You learn about type I and type II errors when you happen upon a teacher who thinks it's a good time to include that in the curriculum, or you learn it on your own.  Some curricula include a required research methods course, but the availability and quality of this course vary greatly between both disciplines and colleges.  Why there isn't a single standardized method of teaching this stuff is beyond me.  Even math curricula are structured around calculus instead of the much more useful statistics and data science, placing ridiculous hurdles before the typical non-major that most won't surmount.

It should not be surprising then that so many fail at even basic analysis.  I have seen many people make basic errors that they are more than capable of understanding but simply were never taught.  People aren't precise with their definitions.  They don't outline their relevant variables.  They construct far too complex theoretical models without data.  They come to conclusions based on small sample sizes.  They overweight personal experiences, even those experienced by others, and underweight statistical data.  They focus too much on outliers and not enough on averages.  Even professors, who do excellent research otherwise, often suddenly stop thinking analytically as soon as they step outside their domain of expertise.  And some professors never learn the proper method.
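The small-sample point can be made concrete with a quick simulation; the sample sizes and trial counts below are arbitrary toy choices:

```python
import random

random.seed(0)

def estimate_spread(n, trials=2000, p=0.5):
    """Spread (max - min) of estimated rates across many small 'studies'."""
    estimates = [sum(random.random() < p for _ in range(n)) / n
                 for _ in range(trials)]
    return max(estimates) - min(estimates)

# With n=10, individual studies of a perfectly fair coin can report
# anywhere from roughly 10% to 90%; with n=1000 the estimates cluster
# tightly around the true 50%.
small_sample_spread = estimate_spread(10)
large_sample_spread = estimate_spread(1000)
```

Anyone who has internalised this picture is much less tempted to draw conclusions from a handful of personal experiences.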

Much of this site focuses on logical consistency and eliminating biases.  It often takes this to an extreme; what Yvain refers to as X-Rationality.  But eliminating biases barely scratches the surface of what is often necessary to truly understand a problem.  This may be why it is said that learning about rationality often reduces rationality.  An incomplete, slightly improved, but still quite terrible solution may generate a false sense of certainty.  Unbiased analysis won't fix a lousy dataset.  And it seems rather backwards to focus on what not to do (biases) rather than what to do (analytic techniques).

 

True understanding is often extremely hard.  Good scientific analysis is hard.  It's disappointing that most people don't seem to understand even the basics of science.

'Dumb' AI observes and manipulates controllers

33 Stuart_Armstrong 13 January 2015 01:35PM

The argument that AIs provided with a reward channel will observe their controllers and learn to manipulate them is a valid one. Unfortunately, it's often framed in a way that feels counterintuitive or extreme, especially to AI designers. It typically starts with the standard reinforcement learning scenario, then posits that the AI becomes superintelligent and either manipulates the controller with super-social powers, or breaks out and gains control of its reward channel, killing or threatening its controllers.

And that is a fair argument. But conceptually, it leaps from a standard reinforcement learning scenario to a science-fiction-sounding one. It might help to have intermediate scenarios: to show that even lower-intelligence AIs might start exhibiting the same sort of behaviour, long before they reach superintelligence.

So consider the following scenario. Some complex, trainable AI is tasked with writing automated news stories for a student newspaper. It trawls the web and composes its stories, then gets reward and feedback from the editors. Assume there are two editors for this newspaper, and they work on alternate days. The two editors have somewhat different ideas as to what constitutes a good story, so their feedbacks are different. After a while, the AI finds that it gets higher reward by using a certain style on Monday, Wednesday and Friday, and another style on Tuesday and Thursdays - this is a simple consequence of its reward mechanism.

But the rota isn't perfect. Sometimes the Monday editor will edit a story so late on Monday that it's Tuesday, and sometimes the Tuesday editor will be up early and edit a story at the same time. This pattern is more common during the holidays, when the editors go to different time zones, often travel, and become more irregular in their editing.

So if the AI is complex and skilled enough, then simply through feedback it will start building up a picture of its editors. It will figure out when they are likely to stick to a schedule, and when they will be more irregular. It will figure out the difference between holidays and non-holidays. Given time, it may be able to track the editors' moods, and it will certainly pick up on any major change in their lives - such as romantic relationships and breakups, which will radically change whether and how it should present stories with a romantic focus.

It will also likely learn the correlation between stories and feedback - maybe presenting a story defined roughly as "positive" will increase subsequent reward for the rest of the day, on all stories. Or maybe this will only work on a certain editor, or only early in the term. Or only before lunch.

Thus the simple trainable AI with a particular focus - write automated news stories - will be trained, through feedback, to learn about its editors/controllers, to distinguish them, to get to know them, and, in effect, to manipulate them.
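The dynamic above can be sketched as a tiny epsilon-greedy contextual bandit; the editors, styles, schedule, and rewards are all invented for illustration:

```python
import random

random.seed(1)

# Hypothetical setup: the AI picks a style each day; the reward depends on
# which editor happens to be working, which correlates with the weekday.
def editor_for(day):
    return "alice" if day % 7 in (0, 2, 4) else "bob"  # M/W/F vs the rest

def reward(style, editor):
    preferred = {"alice": "formal", "bob": "casual"}[editor]
    return 1.0 if style == preferred else 0.0

styles = ["formal", "casual"]
counts = {}   # visits per (weekday, style) context-action pair
values = {}   # running mean reward per (weekday, style)

def choose(day, eps=0.1):
    ctx = day % 7
    if random.random() < eps:        # occasional exploration
        return random.choice(styles)
    return max(styles, key=lambda s: values.get((ctx, s), 0.0))

for day in range(5000):
    s = choose(day)
    r = reward(s, editor_for(day))
    key = (day % 7, s)
    counts[key] = counts.get(key, 0) + 1
    values[key] = values.get(key, 0.0) + (r - values.get(key, 0.0)) / counts[key]

# Without ever being told the editors exist, the reward statistics now
# encode their schedules and preferences.
learned = {ctx: max(styles, key=lambda s: values.get((ctx, s), 0.0))
           for ctx in range(7)}
```

Nothing in the agent represents "Alice" or "Bob" explicitly; the per-weekday reward estimates simply come to mirror the editors' preferences, which is the point of the story above.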

This may be a useful "bridging example" between standard RL agents and superintelligent machines.

[Link] An exact mapping between the Variational Renormalization Group and Deep Learning

5 Gunnar_Zarncke 08 December 2014 02:33PM

An exact mapping between the Variational Renormalization Group and Deep Learning by Pankaj Mehta, David J. Schwab

Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping from the variational renormalization group, first introduced by Kadanoff, to deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
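For a concrete picture of the coarse-graining the abstract describes, here is a toy majority-rule block-spin step on a 1D Ising chain; note this is ordinary real-space RG, not the paper's variational scheme or an RBM:

```python
# A toy majority-rule block-spin step: each block of three spins is
# replaced by the sign of its sum, extracting coarse features and
# discarding short-wavelength detail.
def coarse_grain(spins, block=3):
    assert len(spins) % block == 0
    return [1 if sum(spins[i:i + block]) > 0 else -1
            for i in range(0, len(spins), block)]

config = [1, 1, -1,  -1, -1, -1,  1, -1, 1]
coarse = coarse_grain(config)   # → [1, -1, 1]
```

The paper's claim is, roughly, that stacked RBM layers can implement transformations of this coarse-graining kind, with each hidden layer playing the role of a coarser length scale.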

To me this paper suggests that deep learning is an approach that is, or could be made, conceptually general enough to learn everything there is to learn (assuming sufficient time and resources). Thus it could already serve as the base algorithm of a self-optimizing AGI. 

[question] What edutainment apps do you recommend?

5 Gunnar_Zarncke 20 September 2014 08:55AM

Follow up to: Rationality Games Apps

In the spirit of: Games for rationalists

My son (10) wants a smartphone, and I reasonably expect that he wants to and will play games with it. He appears to be the right age to use one. I don't want to prevent him from playing games, nor do I think that would be possible or helpful. But I'd like to suggest and promote a few apps and games that *are* helpful or from which he can learn something. 

Obvious candidates are 

There are lots of low profile apps filed under learning in the app stores but most of this is crap and it takes lots of time to explore these. 

I also found some recommendation for learning with Android apps and will point my son to these. 

I'd like to hear what apps you or your children use. Which apps and especially games do you recommend for future rationalists?

A Workflow with Spaced Repetition

8 Emile 03 November 2013 03:58PM

This is a detailed description of my reading and learning workflow. You may find ideas to adopt, or maybe you can tell me what I could be doing differently!

Overview

I've been using Spaced Repetition on and off for the past few years, and have built a solid Anki habit these last three months, to the point where now I wonder how I could read books without entering the important points into Anki.

I recommend getting a habit of using Spaced Repetition, it's a small habit that doesn't require too much willpower (it can feel like a game, if done right!), and is useful in the long term.

Daily routine: transit

I have a dozen or so Anki decks. Some I consider “valuable” (Algorithms, Driving Code, Git commands), some less so (Paris Metro, Hiragana and Katakana, Vim commands, …). I also carry around a book, notebook and four-color pen.

On any downtime (waiting for transit, waiting in line in a store, standing in crowded transit…), I’ll review my decks, starting with those with the most due cards.

On some days I may not finish all the decks, but that’s no big deal; with an hour and a half of transit per day, I’ll get to them eventually.

If I can sit for a bit of time, and don’t have too many outstanding cards, I’ll usually read a book (or work on things in my notebook if something needs brainstorming).

Reading books

If I’m reading fiction, I’m relaxing, I don’t need to try to remember anything :)

If I’m reading non-fiction, I’ll usually have an index card as a bookmark and place to take notes - things to look up, summaries and rephrasings, diagrams, page numbers of parts to come back to, and of course things to enter in Anki (though I’ll sometimes just directly enter them in my phone).

I’ll reread my notes when I finished the book or a big chapter, or when I come back to the book after a long time, and eventually enter them in Anki (usually with Anki's web interface, which is quicker than typing on a phone).

Reading online material

I have a bunch of Google Docs where I take notes on various topics (why Google Docs? I can search them, share them if needed, work with them from various places). If I’m reading something I want to remember, I’ll usually have a corresponding google doc open in another window (so I can see both at the same time - hunting through tabs breaks the flow). My notes will be a mix of

  • URLs marked as “to read” or “read” (with maybe a summary of what it’s about)
  • Verbatim quotes
  • Rephrasings, insights, questions, brainstorming
  • “anki format” cards (pairs of question, then answer), for example, from my Haskell deck:
How do I declare that Integer is of class Eq, using `integerEq`?
instance Eq Integer where
  x == y                =  x `integerEq` y

(note that in this case it's three lines; when entering it into Anki I'll have to put the first line as the question and the other two as the answer)

Building the Anki cards in Google Docs makes it easier to make related cards by copying and pasting the same question and changing little bits ("Question: ???, B and C", "Question: A, ??? and C", "Question: A, B and ???").

In the evening, when I don’t have the energy for something more difficult, I’ll occasionally copy batches of stuff from Google Docs into Anki. To do that first I copy everything into a plain text file (to strip all formatting, otherwise things look weird in Anki and it’s distracting), and then cut-paste the cards into Anki by alt-tabbing between the text file and the Anki web interface (this sounds cumbersome but can be done fairly quickly using pretty much only the keyboard).
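A small script could partly automate that batch step by converting blank-line-separated question/answer blocks into a tab-separated file that Anki's importer accepts; this is a sketch, and the choice to join multi-line answers with `<br>` is my assumption about how the cards should be structured:

```python
# Convert blank-line-separated question/answer blocks into TSV for Anki's
# text importer (one note per line; answer lines joined with <br>).
def blocks_to_tsv(text):
    rows = []
    for block in text.strip().split("\n\n"):
        lines = [l for l in block.splitlines() if l.strip()]
        if len(lines) < 2:       # skip blocks without both question and answer
            continue
        question, answer = lines[0], "<br>".join(lines[1:])
        rows.append(f"{question}\t{answer}")
    return "\n".join(rows)

notes = """How do I declare that Integer is of class Eq?
instance Eq Integer where
  x == y = x `integerEq` y

What does 'deriving Eq' do?
Auto-generates the (==) implementation."""

tsv = blocks_to_tsv(notes)
```

Writing `tsv` to a file and importing it would still need a quick check that the deck's field separator is set to tab, which the script does not verify.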

What if I get behind?

No big deal; I’ll review the “important” decks first, and then eventually catch up on the rest. (Some people recommend using one big deck for everything; I prefer having several small decks because it makes it easier to catch up with what matters if I “fall off the bandwagon”.)

What I learned

  • Make stupid and easy cards; I aim for answers that are a single word
  • I delete or suspend cards that I suspect are a waste of time (because I don’t care about learning that; because it’s too difficult; because I suspect it’s wrong).
  • Double-sided cards are useful for learning languages (I used to make both directions independently)
  • If you're learning a foreign language with a weird alphabet, it's worth the extra effort of finding an input system on your phone (or computer) that handles that alphabet.

What I’d like to improve

Batch-entering data is a bit complicated; I wish I could just select a bunch of text in Google Docs and say "just put all this in Anki". However, as a low-energy habit, batch-copying stuff feels a bit like a game, so I don't mind that much.

  • I wish I could put some decks at “low throttle” and some at “high throttle” (say, I want to learn 20 driving code cards a day, but only 3 vim cards). Anki has a setting that says how many new cards you get, but it's global; so either I change that setting all the time (which can be done fairly quickly), or control the influx by leaving stuff in Google Docs.
  • I wish I could control randomization: just select a bunch of cards and say "randomize these". There's some cards I want to see in a random order, and some where I'd rather see them in the original order.
  • Anki is bad at handling synchronization: if I use Anki on my phone and then want to use the web interface, I need to synchronize first, which takes a few minutes and may fail; otherwise there will be a conflict and I will have to pick which of the two datasets to keep. This is another reason why I prefer to use Google Docs for staging: waiting for synchronization breaks my flow.
  • How do people use Evernote or SuperMemo?

More resources on Spaced Repetition

The article on the Wiki points to a few discussions here of Spaced Repetition (which are worth reading if you want to see how other people use it), including Gwern's excellent article.

How about you? Do you use Spaced Repetition? Have you tried it, but given up? Do you have a workflow with some bits that differ from mine? Do you have any tips on things I could do better?

Effective Rationality Training Online

2 Brendon_Wong 10 August 2013 01:58AM

Article Prerequisite: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

Introduction

The goal of this post is to explore the idea of rationality training, feedback and ideas are greatly appreciated.

Less Wrong’s stated mission is to help people become more rational, and it has made progress toward that goal. Members read and discuss useful ideas on the internet, get instant feedback because of the voting system, and schedule meetups with other members. Less Wrong also helps attract more people to rationality.

Less Wrong helps with sharing ideas, but it fails to help people put elements of epistemic and instrumental rationality into practice. This is a serious problem, but it would be hard to fix without altering the core functionality of Less Wrong.

Having separate websites for reading and discussing ideas and then actually using those ideas would improve the real world performance of the Less Wrong community while maintaining the idea discussion, “marketing”, and other benefits of the Less Wrong website.

How to create a useful website for self improvement

1. Knowledge Management

When reading blogs, people only see recent posts and those posts are not significantly revised. A wiki would allow for the creation of a large body of organized knowledge that is frequently revised. Each wiki post would have a description, benefits of the topic described, resources to learn the topic, user submitted resources to learn the topic, and reviews of each resource. Posts would be organized hierarchically and voted on for usefulness to help readers effectively improve what they are looking for. Users could share self-improvement plans to help others improve effectiveness in general or in a specific topic as quickly as possible.

2. Effective Learning

Resources to learn topics should be arranged or written for effective skill acquisition, and there may be different resource categories like exercises for deliberate practice or active recall questions for spaced repetition.

3. Quality Contributors

Contributors would, at the very least, need to be familiar with how to write articles that support the skill-acquisition process agreed upon by the entire community. Required writing and research skills would produce higher-quality work. I am not sure whether being a rationalist would improve the quality of articles.

Problems

1. Difficult requirements

The number of prerequisites necessary to contribute to and use the wiki would really lower the number of people who will be able to benefit from it. It's a trade-off between effectiveness and popularity. What elements should be included to maximize the effectiveness of the website?

2. Interest

There has to be enough interest in the website, or else a different project should be started instead. How many people in the Less Wrong community, and the world at large, would be interested in self improvement and rationality? 

3. Increasing the effectiveness of non altruistic people

How much of the target audience wants to improve the world? If most do not, then the wiki would essentially be a net negative on the world. What should the criteria be to view and contribute to the wiki? Perhaps only Less Wrong members should be able to view and edit the wiki, and contributors must read a quick start guide and pass a quick test before being allowed to post.

Valuable economics knowledge available, ironically, for free

29 Stuart_Armstrong 18 July 2013 11:30AM

I took an economics course recently. And by "took a course" I mean followed a series of online lectures. I can strongly recommend doing so, especially if you already think you have an intuitive grasp of economics.

I was in that situation. I knew about incentives, and revealed preferences. I understood that supply and demand curves crossed. I grasped some of the monetarist arguments about the lack of long run tradeoffs between inflation and employment. I could talk about Keynesian stimulus and sticky prices/wages. I understood bank runs. Externalities were obvious, public goods a bit less so. I even knew quite a lot about banks and the money supply.

I had it pretty good, I thought. And yet when I followed a basic economics lecture series, I learnt a lot. The models and concepts suddenly fit together. I understood concepts that I had only thought I understood before. Economists do know their stuff; their models and concepts are informative - more so than I ever expected.

So, bearing in mind that economics is a social science whose conclusions are not nearly as rigorous as its models, I can recommend to anyone on Less Wrong who's interested to follow a lecture series or take a course.


[LINK] The power of fiction for moral instruction

11 David_Gerard 24 March 2013 09:19PM

From Medical Daily: Psychologists Discover How People Subconsciously Become Their Favorite Fictional Characters

Psychologists have discovered that while reading a book or story, people are prone to subconsciously adopt their behavior, thoughts, beliefs and internal responses to that of fictional characters as if they were their own.

Experts have dubbed this subconscious phenomenon ‘experience-taking,’ where people actually change their own behaviors and thoughts to match those of a fictional character that they can identify with.

Researchers from Ohio State University conducted a series of six experiments on about 500 participants and, reporting in the Journal of Personality and Social Psychology, found that in the right situations ‘experience-taking’ may lead to temporary real-world changes in the lives of readers. 

They found that stories written in the first-person can temporarily transform the way readers view the world, themselves and other social groups. 

I always wondered at how Christopher Hitchens (who, when he wasn't being a columnist, was a professor of English literature) went on and on about the power of fiction for revealing moral truths. This gives me a better idea of how people could imprint on well-written fiction. More so than, say, logically-reasoned philosophical tracts.

This article is, of course, a popularisation. Anyone have links to the original paper?

Edit: Gwern delivers (PDF): Kaufman, G. F., & Libby, L. K. (2012, March 26). "Changing Beliefs and Behavior Through Experience-Taking." Journal of Personality and Social Psychology. Advance online publication. doi: 10.1037/a0027525

A Quick and Dirty Survey: Textbook Learning

7 AmagicalFishy 10 March 2013 07:55PM

Hello, folks. I'm one of those long-time lurkers.

I've decided to conduct, as the title suggests, a quick and dirty survey in hopes of better understanding a problem I have (or rather, whether or not what I have is actually a problem).

Here's some context: I'm a Physics & Mathematics major, currently taking multivariable calculus. Lately, I've been unsatisfied with my understanding and usage of mathematics—mainly calculus. I've decided to go through what's been recommended as a much more rigorous calculus textbook: Calculus by Michael Spivak. So far I'm really enjoying it, but it's taking me a long time to get through the exercises. I can be very meticulous about things like this and want to do every exercise in every chapter; I feel that there's benefit to actually doing them regardless of whether I look at a problem and think "Yeah, I can do this." Sometimes actually doing the problem is much more difficult than it seems, and I learn a lot from doing them. When flipping through the exercises, I also notice that—regardless of how well I think I know the material—there ends up being a section of exercises focused on something I've never heard of before; something very clever or, I think, mathematically enlightening, that's dependent on the exercises before it.

I'm somewhat embarrassed to admit that the exercises of the first chapter alone had taken me hours upon hours upon hours of combined work. I consider myself slow when it comes to reading mathematics and physics literature—I have to carefully comb through all the concepts and equations and structure them intuitively in a way I see fit. I hate not having a very fundamental understanding of the things I'm working with.

At the same time, I read/hear people who apparently are familiar with multiple textbooks on the same subject. Familiar enough to judge whether or not it is a good textbook. Familiar enough to place how they fit on a hierarchy of textbooks on the same subject. I think "At the rate I'm going, it will take me a very long time to get through this." 

So...

Here's (what I think is) my issue: I don't know whether or not I'm taking too long. Am I doing things inefficiently? Is there a better way to choose which exercises I do and don't work through so that I learn a similar amount of material in less time? Or is it just fine that I'm taking this long? Am I slow and inefficient or am I just new to this process of working through a textbook cover-to-cover, which is supposed to take a very long time anyway?

I spend more time than I should learning about learning, instead of learning the material itself. I find myself using up lots of time trying to figure out how to learn more efficiently, how to think more efficiently, how to work more efficiently, and such things—as opposed to actually learning and actually thinking and actually working, which ends up being an inefficient use of my time. I think part of this problem stems from the fact that I don't have much of a comparison for when I can say "Ok, I'm satisfied and can stop focusing on improving how I do this act—and just do it already." I want to solve that issue now.

Which brings us to...

Here's my attempted solution: A survey! I assume many people here at LessWrong have worked through a science or mathematics textbook on their own. Mainly I'd like to gauge whether or not you thought you were taking a very long time, how long it took you, etc. I'd also like to know what your approach was: Did you perform every exercise, or skim through the book finding things you knew you didn't know? Did you skip around or go from the first chapter to the last? Do you have any advice on how one should approach a given textbook?

Here's the survey: https://docs.google.com/forms/d/1S4_-7_dxgmgprMbNhL1dNmX_0Zq9QrA9lpTl9ZHHxMI/viewform

I'm not sure how interested anyone but me is in this, but on a later date I could make another post showing the data. I considered checking "Publish and show a link to the results of this form", but I wasn't sure if that kept everyone anonymous or not. Also, feel more than free to post any criticism, shortcomings, improvements, etc. Have I left anything out? Is there anything you'd like to see me add? This is my first attempt at a survey like this and I'd appreciate any feedback (though I know it's not necessarily a rigorous survey, just a quick data-collection, I suppose).

I strongly encourage the posting of any textbook-reading tips or guidelines in the comments. I left that out of the survey so that anyone who's interested has immediate access to tips.

Here's an edit: Thanks for all the responses, everyone. My original question was sufficiently answered (that is, it doesn't seem like I'm taking too long; there were only a few survey takers, but between the comments and the survey answers, I'm not going at an extraordinarily slow rate), and there's some very solid advice for different methods I might try to optimize my learning process. One that especially hit home was the suggestion that the large amounts of time spent "learning about learning" are such because it feels more comfortable than actually learning the material. In short, it's a safety blanket that makes me feel like I'm doing something productive when I'm really just avoiding what needs to be done. Some other useful pieces of advice are:

- Try being open to learning a broader range of materials without necessarily mastering each one. It might be the case that you need to know one thing in order to master the other, and need to know the other in order to master the one—trying to master either of them in isolation ends up being somewhat futile. Not everything needs to be "brick by brick" structured. (This was a lesson I found useful when I first learned that a number raised to the "one half" power was the square root of that number: Trying to master it in terms of the rules I already knew ended up in a thought like, "... Two to the third power is two times two times two. Two to the one-half power is two... times two one half times?")

- Though it may be uncomfortable at first, it could make learning easier to try the exercises before reading the chapter super-carefully; trying them before you feel ready to try them. You don't necessarily have to fully comprehend all of the proofs in the chapter to get through some exercises. 

- Textbooks might just be the wrong way to go in the first place. Try resources like Wikipedia, math blogs, and math forums.

- "Don't use the answer key unless you've spent a significant amount of time trying to find the answer yourself!" (This may seem obvious, but a few years ago, I'd spend a couple of minutes on the problem, not understand it, look to the answer key, and wonder why I wasn't learning anything.)

- Skip exercises when you feel you could solve them, but randomly check whether this estimate is correct by doing the problem anyway. (I like this one a lot).

- Talk to a professor!

- It may be the case that you learn well via just reading, and not spending so much time on the exercises.
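The "skip exercises you feel you could solve, but randomly check that estimate" suggestion above can be sketched as a small procedure. This is a hypothetical illustration (the function name, the `feels_solvable` predicate, and the audit rate are all my own inventions, not anything from the thread):

```python
import random


def plan_exercises(exercises, feels_solvable, audit_rate=0.2, rng=random):
    """Partition exercises into ones to do, ones to audit, and ones to skip.

    Exercises you don't feel confident about are always done. Ones you feel
    you could solve are skipped, except for a random audit sample that you
    do anyway, to test whether that feeling of confidence is calibrated.
    """
    to_do, audits, skipped = [], [], []
    for ex in exercises:
        if not feels_solvable(ex):
            to_do.append(ex)
        elif rng.random() < audit_rate:
            audits.append(ex)  # do it anyway, as a calibration check
        else:
            skipped.append(ex)
    return to_do, audits, skipped
```

Comparing your success rate on the audited exercises against your expectation then tells you whether skipping is safe.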

Here are some websites/blogs mentioned:
(Blog) Math for Programmers - http://steve-yegge.blogspot.com/2006/03/math-for-programmers.html
(Blog) Annoying Precision - http://qchu.wordpress.com/
(Math Forum) Mathematics - http://math.stackexchange.com/ 

Excellent, excellent stuff, though. Thank you. :) There's a lot of material and advice for me to work with—while simultaneously making sure I don't avoid my work by hiding under the guise of productivity.

 

Digging the Bull's Horn

-7 gworley 12 November 2012 04:03PM

Some time ago I learned of the metaphor of 'digging the bull's horn'. This might sound a little strange, since horns are mostly hollow, but imagine a bull's horn used to store black powder. In the beginning the work is easy and you can scoop out a lot of powder with very little effort. As you dig down, though, each scoop yields less powder as you dig into the narrow part of the horn, until the only way you can get out more powder is to turn the horn over and dump it out.

It's often the same way with learning. When you start out in a subject there is a lot to be learned (both in quantity of material you have not yet seen and in quantity of benefits you have to gain from the information), but as you dig deeper into a subject the useful insights come less often or are more limited in scope. Eventually you dig down so far that the only way to learn more is to discover new things that no one has yet learned (to stretch the metaphor, you have to add your own powder back to dig out).

It's useful to know that you're digging the bull's horn when learning because, unless you really enjoy a subject or have some reason to believe that contributing to it is worthwhile, you can know in advance that most of the really valuable insights you'll gain will come early on. If you want to benefit from knowing about as much stuff as possible, you'll often want to stop actively pursuing a subject unless you want to make a career out of it.

But, for a few subjects, this isn't true. Sometimes, as you continue to learn the last few hard things that don't seem to provide big, broadly-useful insights, you manage to accumulate a critical level of knowledge about the subject that opens up a whole new world of insights to you that were previously hidden. To push the metaphor, you eventually dig so deep that you come out the other side to find a huge pile of powder.

The Way seems to be one of those subjects you can dig past the end of: there are some people who have mastered The Way to such an extent that they have access to a huge range of benefits not available to those still digging the horn. But when it comes to other subjects, how do you know? Great insights could be hiding beyond currently obscure fields of study because no one has bothered to dig deep enough. Aside from having clear examples of people who came out the other side to give us reason to believe it's worthwhile to dig really deep on some subjects, is there any way we can make a good prediction about which subjects may be worth digging to the end of the bull's horn?

[LINK] Mastering Linear Algebra in 10 Days: Astounding Experiments in Ultra-Learning

3 David_Gerard 26 October 2012 02:11PM

Scott Young completed the four-year MIT computer science degree curriculum in less than one year. This is a post about how he did it.

During the yearlong pursuit, I perfected a method for peeling those layers of deep understanding faster. I’ve since used it on topics in math, biology, physics, economics and engineering. With just a few modifications, it also works well for practical skills such as programming, design or languages.

Here’s the basic structure of the method:

  1. Coverage
  2. Practice
  3. Insight

[Link] Learning New Languages Helps The Brain Grow

1 Yuu 11 October 2012 08:03AM

http://www.lunduniversity.lu.se/o.o.i.s?news_item=5928&id=24890

According to Johan Mårtensson from Lund University, learning a new language quickly helps your brain grow and increases its activity:

This finding came from scientists at Lund University, after examining young recruits with a talent for acquiring languages who were able to speak in Arabic, Russian, or Dari fluently after just 13 months of learning, before which they had no knowledge of the languages.

After analyzing the results, the scientists saw no difference in the brain structure of the control group. However, in the language group, certain parts of the brain had grown, including the hippocampus, responsible for learning new information, and three areas in the cerebral cortex.

And there is more:

One particular study from 2011 provided evidence that Alzheimer's was delayed 5 years for bilingual patients, compared to monolingual patients.

[LINK] Learning without practice, through fMRI induction

2 maia 07 October 2012 03:15AM

http://www.nsf.gov/news/news_summ.jsp?cntn_id=122523&org=NSF&from=news
From the article:

New research published today in the journal Science suggests it may be possible to use brain technology to learn to play a piano, reduce mental stress or hit a curve ball with little or no conscious effort. It's the kind of thing seen in Hollywood's "Matrix" franchise.

Think of a person watching a computer screen and having his or her brain patterns modified to match those of a high-performing athlete or modified to recuperate from an accident or disease. Though preliminary, researchers say such possibilities may exist in the future.

Experiments conducted at Boston University (BU) and ATR Computational Neuroscience Laboratories in Kyoto, Japan, recently demonstrated that through a person's visual cortex, researchers could use decoded functional magnetic resonance imaging (fMRI) to induce brain activity patterns to match a previously known target state and thereby improve performance on visual tasks.

EDIT: To clarify, this is almost certainly over-hyped. However, it appears to at least be an instance of very interesting biofeedback.

How to tell apart science from pseudo-science in a field you don't know ?

18 kilobug 02 September 2012 10:25AM

First, a short personal note to make you understand why this is important to me. To make a long story short, the son of a friend has some atypical form of autism and language troubles. And that kid matters a lot to me, so I want to become stronger in helping him, to be able to better interact with him and help him overcome his troubles.

But I don't know much about psychology. I'm a computer scientist, with a general background of maths and physics. I'm kind of a nerd, social skills aren't my strength. I did read some of the basic books advised on Less Wrong, like Cialdini, Wright or Wiseman, but those just give me a very small background on which to build.

And psychology in general, autism/language troubles in particular, are fields in which there is a lot of pseudo-science. I'm very sceptical of Freud and psychoanalysis, for example, which I consider (but maybe I am wrong?) to be more like alchemy than like chemistry. There is also a lot of mysticism, and there are cult-like gurus, surrounding autism.

So I'm a bit unsure how, from my position of having a general scientific and rationality background, I can dive into a completely unrelated field. Research papers are probably above my current level in psychology, so I think books (textbooks or popular science) are the way to go. But how do I find which books, among the hundreds that were written on the topic, I should buy and read? Books that are evidence-based science, not pseudo-science, I mean. What is a general method for selecting which books to start with in a field you don't really know? I would welcome any advice from the community.

Disclaimer: this is a personal "call for help", but since I think the answers/advice may matter outside my own personal case, I hope you don't mind.

Summary of "How to Win Friends and Influence People"

18 Cosmos 30 June 2012 08:49PM

In the very back of Kaj's excellent How to Run a Successful Less Wrong Meetup Group booklet, he has a recommended reading section, including the classic book How to Win Friends and Influence People.

It just so happens that not only have I read the book myself, but I have written up a concise summary of the core advice here. Kaj suggested that I post this on the discussion section because others might find it useful, so here you go!

I suspect that more people are willing to read a summary of a book from the 1930s than an actual book from the 1930s. What I will say about reading the long-form text is that it can be more useful for internalizing these concepts and giving examples of them. It is far too easy to abstractly know what you need to do, much harder to actually take action on those beliefs...

[Video] Presentation on metacognition contains good intro to basic LW ideas

3 Cyan 12 June 2012 01:12PM

I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.

Here's the link.

Lesswrong Community's How-Tos and Recommendations

25 EE43026F 07 May 2012 01:41PM

The Lesswrong community is often a dependable source of recommendations, network help, and advice. When I'm looking for a book or learning material on a topic I'll often try and search here to see what residents have found useful. Similarly, social advice, anecdotes and explanations as seen from the point of view of the community have regularly been insightful or eye-opening. The prototypical examples of such articles are, off the top of my head:


http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

http://lesswrong.com/lw/453/procedural_knowledge_gaps/

the topics of which are neatly listed on

http://lesswrong.com/lw/a08/topics_from_procedural_knowledge_gaps/

 

And lately

http://lesswrong.com/r/discussion/lw/c6y/why_do_people/

 

the latter prompted me to write this article. We don't keep track of such resources as far as I know. This probably belongs in the wiki as well.

 

Other potentially useful resources were:

 

http://lesswrong.com/lw/12d/recommended_reading_for_new_rationalists/

http://lesswrong.com/lw/2kk/book_recommendations/

http://lesswrong.com/lw/2ua/recommended_reading_for_friendly_ai_research/



math learning

http://lesswrong.com/lw/9qq/what_math_should_i_learn/


http://lesswrong.com/lw/8js/what_mathematics_to_learn/

http://lesswrong.com/lw/a54/seeking_education/


misc learning

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

http://lesswrong.com/lw/4yv/i_want_to_learn_programming/

http://lesswrong.com/lw/3qr/i_want_to_learn_economics/

http://lesswrong.com/lw/3us/i_want_to_learn_about_education/

http://lesswrong.com/lw/8e3/which_fields_of_learning_have_clarified_your/


social

http://lesswrong.com/lw/6ey/learning_how_to_explain_things/

http://lesswrong.com/lw/818/how_to_understand_people_better/

http://lesswrong.com/lw/6tb/developing_empathy/


community

http://lesswrong.com/lw/929/less_wrong_mentoring_network/

http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/


Employment

http://lesswrong.com/lw/43m/optimal_employment/

http://lesswrong.com/lw/2qp/virtual_employment_open_thread/


http://lesswrong.com/lw/38u/best_career_models_for_doing_research/

http://lesswrong.com/lw/4ad/optimal_employment_open_thread/

http://lesswrong.com/lw/626/job_search_advice/

http://lesswrong.com/lw/8cp/any_thoughts_on_how_to_locate_job_opportunities/

http://lesswrong.com/lw/7yl/more_shameless_ploys_for_job_advice/

http://lesswrong.com/lw/a93/existential_risk_reduction_career_network/

 

Entertainment

http://lesswrong.com/r/discussion/tag/recommendations/?sort=new

More intuitive programming languages

4 A4FB53AC 15 April 2012 11:35AM

I'm not a programmer. I wish I were. I've tried to learn it several times, different languages, but never went very far. The most complex piece of software I ever wrote was a bulky, inefficient game of life.
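(As an aside, a Game of Life need not be bulky even in a textual language; here is a minimal sketch in Python, offered as a hypothetical illustration rather than anything resembling the author's version:)

```python
from collections import Counter


def life_step(live):
    """One generation of Conway's Game of Life.

    `live` is a set of (x, y) cells; returns the next generation's set.
    """
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}
```

Representing the board as a sparse set of live cells keeps the whole update to one counting pass, which is part of what makes the "right" data structure matter so much in any paradigm.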

Recently I've been exposed to the idea of a visual programming language named Subtext. The concept seemed interesting, and the potential great. In short, the assumptions and principles underlying this language seem more natural and more powerful than those behind writing lines of code. For instance, a program written as lines of code is one-dimensional, and even the best of us may find it difficult to sort that out: to model in your mind the flow of instructions, how distant parts of the code interact together, and so on. In Subtext this is already more apparent because of the two-dimensional structure of the code.

I don't know whether this particular project will bear fruit. But it seems to me many more people could become more interested in programming, and at least advance further before giving up, if programming languages were easier to learn and use for people who don't have the mindset required to be a programmer in the current paradigm.

It could even benefit people who're already good at it. Any programmer may have a threshold above which the complexity of the code goes beyond their ability to manipulate or understand it. I think it should be possible to push that threshold farther with such languages/frameworks, enabling the writing of more complex, yet still functional, pieces of software.

Do you know anything about similar projects? Also, what could be done to help turn such a project into a workable programming language? Do you see obvious flaws in such an approach? If so, what could be done to repair these, or at least salvage part of this concept?

A model of the brain's mapping of the territory

0 ataftoti 29 January 2012 06:45PM

I'm linking to a video which describes how the brain may be learning to improve its skills at mapping the territory from limited samples.

This model of learning was previously unknown to me. Judging from the date of the video, what I heard from the person who referred me to it, and the fact that I do not recall hearing much related to this on LessWrong, I think this may be recent enough that some people here would benefit from me spreading the word.

Check out this model of a learning theory: the background introduction starts at the 52:00 mark, and the model itself gets going at the 54:00 mark. The overview of the model is explained in approximately 4 minutes.

http://www.youtube.com/watch?v=vcp6J1T60qc&t=52m19s

[LINK] What is it like to have an understanding of very advanced mathematics?

25 [deleted] 31 December 2011 05:07AM

This, apparently:

You can answer many seemingly difficult questions quickly.

You are often confident that something is true long before you have an airtight proof for it (this happens especially often in geometry).

You are comfortable with feeling like you have no deep understanding of the problem you are studying.

Your intuitive thinking about a problem is productive and usefully structured, wasting little time on being aimlessly puzzled.

When trying to understand a new thing, you automatically focus on very simple examples that are easy to think about, and then you leverage intuition about the examples into more impressive insights.

...the biggest misconception that non-mathematicians have about how mathematicians think is that there is some mysterious mental faculty that is used to crack a problem all at once.

You go up in abstraction, "higher and higher". The main object of study yesterday becomes just an example or a tiny part of what you are considering today.

The particularly "abstract" or "technical" parts of many other subjects seem quite accessible because they boil down to maths you already know. You generally feel confident about your ability to learn most quantitative ideas and techniques.

You move easily between multiple seemingly very different ways of representing a problem.

Spoiled by the power of your best tools, you tend to shy away from messy calculations or long, case-by-case arguments unless they are absolutely unavoidable.

You develop a strong aesthetic preference for powerful and general ideas that connect hundreds of difficult questions, as opposed to resolutions of particular puzzles.

Understanding something abstract or proving that something is true becomes a task a lot like building something. 

In listening to a seminar or while reading a paper, you don't get stuck as much as you used to in youth because you are good at modularizing a conceptual space and taking certain calculations or arguments you don't understand as "black boxes" and considering their implications anyway.

You are good at generating your own questions and your own clues in thinking about some new kind of abstraction. 

You are easily annoyed by imprecision in talking about the quantitative or logical. 

On the other hand, you are very comfortable with intentional imprecision or "hand waving" in areas you know, because you know how to fill in the details. 

You are humble about your knowledge because you are aware of how weak maths is, and you are comfortable with the fact that you can say nothing intelligent about most problems. 

 

What are the best ways of absorbing, and maintaining, knowledge?

17 [deleted] 03 November 2011 02:02AM

Recently, I've collapsed (ascended?) down/up a meta-learning death spiral -- spending a lot less time reading actual informative content than figuring out how to manage and acquire such content (as well as completely ignoring the antidote). In other words, I've been taking notes on taking notes. And now, I'm looking for your notes on notes for notes.

What kind of scientific knowledge, techniques, and resources do we have right now in the way of information management? How would one efficiently extract as much useful information as possible out of a single pass of the source? The second pass? 

The answers may depend on the media, and the media might not be readily apparent. Example: Edward Boyden, Assistant Professor at the MIT Media Lab, recommends recording in a notebook every conversation you ever have with other people. And how do you prepare yourself for the serendipity of a walk downtown? I know I'm more likely to regret not having a notebook on hand than spending the time to bring one along.

I'll conglomerate what I remember seeing on the N-Back Mailing List and in general: I sincerely apologize for my lack of citation.

Notes

  • I'm on the fence about Shorthand as a note-taking technique, given the learning overhead, but I'm sure that the same has been said for touch-typing. It would involve a second stage of processing if you can't read as well as you write, but given the way I have taken notes (... "non-linearly"...), that stage would have to come about anyway. The act of translation may serve as a way of laying connective groundwork down.
  • Livescribe Pens are nifty for those who write slowly, but they need to be combined with a written technique to be of any use (otherwise you're just recording the talk, and would have to live through it twice without any obvious annotation and tagging).
  • Cornell Notes or taking notes in a hierarchy may have been the method you were taught in high school; it was in mine. The issue I have had with this format is that I found it hard to generate a structure while listening to the teacher at the same time.
  • Mind-Mapping.
  • Color-coding annotations of text has been remarked to be useful on Science Daily.
Reading
  • Speed Reading Techniques  or removing sub-vocalization would seem to have benefits.
  • Once upon a time someone recommended me the book, "How to Read a Book". Nothing ground-breaking -- outline the author's intent, the structure of his argument, and its content. Then criticize. In short, book reverse-engineering.
Retention
  • Spaced Repetition. I'm currently flipping through the thoughts of Piotr Wozniak, who seems to have made it his dire mission to make every kind of media possible Spaced Repetition'able. I'm wondering if anyone has any thoughts on incremental reading or video; also, how to possibly translate the benefits of SRS to dead-tree media, which seems a bit cumbersome.

(I've also heard a handful of individuals claim that SRS has helped them "internalize" certain behaviors, or maybe patterns of thought, like Nonviolent Communication or Bayes' Theorem... any takers on this?)

  • Wikis, which seem like a good format for creating social accountability, and filing notes that aren't note-carded.  But what kind of information should that be?
  • Emotionally charged stimuli, especially stressful ones, tend to be remembered with greater accuracy.
  • Category Brainstorming. Take your bits of knowledge, and organize them into as many different groups as you can think of, mixing and matching if need be. Sources for such provocations could include Edward De Bono's "Lateral Thinking" and Seth Godin's "Free Prize Inside", or George Polya's "How to Solve It". I'm a bit ambivalent about deliberately memorizing such provocations -- does it get in the way of seeing originally? -- but once again, it could lay down the connective framework needed for good recall.
  • Mnemonics to encode related information seems useful.
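The spaced-repetition scheduling mentioned above can be made concrete with a simplified sketch in the spirit of Wozniak's SM-2 algorithm. The constants follow the commonly published SM-2 description, but treat this as an illustration of the idea, not the exact SuperMemo implementation:

```python
def sm2_review(quality, repetitions, interval, easiness):
    """One SM-2-style review update for a single flashcard.

    quality: self-graded recall from 0 (total blackout) to 5 (perfect).
    Returns (repetitions, interval_in_days, easiness) for the next review.
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence.
        repetitions, interval = 0, 1
    else:
        if repetitions == 0:
            interval = 1
        elif repetitions == 1:
            interval = 6
        else:
            interval = round(interval * easiness)
        repetitions += 1
    # Nudge the easiness factor up for easy recalls, down for hard ones,
    # clamped at the SM-2 floor of 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions, interval, easiness
```

Repeated good grades make the intervals grow roughly geometrically (1 day, 6 days, then interval times easiness), which is what lets a large deck stay reviewable.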
Any other information gathering, optimising and retaining techniques worthy of mention?

 

 

Free Online Stanford Courses: AI and Machine Learning

5 Alex_Altair 10 September 2011 08:58PM

Stanford has decided to offer a few classes online, for free. These include Artificial Intelligence and Machine Learning. The classes include videos of the same lectures that the Stanford students received, quizzes, homework, and exams that are graded automatically. They start on October 10.

I'm guessing that more than a few LWers will sign up for these. How many people would like to form a study group? Should we just have a discussion thread for it, or is there a better option?

Seeking suggestions: Less Wrong Biology 101

35 virtualAdept 20 May 2011 03:28PM

I’ve been a reader and occasional commenter here for a while now, but previously have not had a solid idea of what I could or wanted to contribute to the community in posting.  In light of recent comments stating an interest in more posts that offer concrete, factual information as well as remembering lukeprog’s call for such things in his Back to the Basics of Rationality post, I am considering a series of condensed posts about biology.  As someone who has spent my formal education on biologically-focused engineering (bioengineering BS, now studying bioinformatics under a chemical engineering department for my PhD) but has always had the bulk of my friends in electrical engineering, computer science, and more traditional chemical engineering, I’ve gotten used to offering such condensed explanations whenever biology works its way into a discussion.  From what I’ve seen on LW thus far, the community educational base leans more in those (non-biology) directions, so I believe this is a niche that could use filling. 

Since biology is a rather broad subject, and you could all go read Wikipedia or a textbook if you wanted a very detailed survey course, my intent is to pick targeted topics that are relevant to current events and scientific developments.  Each post would focus on one such event/Awesome New Study, discussing the biological background and potential implications, including either short explanations or links to the basics needed to understand the subject.  If there are any political ties to the subject, I will withhold my explicit opinions on those aspects unless asked in the comments. 

My questions, then, are the following:

  • Is this something that people here would find interesting/useful in the general sense?  (While I do enjoy talking to myself, doing so on this topic has gotten a bit old, so I really do want to know if no one really thinks this will be helpful.)
  • How long/in-depth would you like?  This question is intended to gauge what my background explanation: background links ratio should be.
  • And most importantly, what are some topics you would like to see discussed?


UPDATE: Having followed the comments so far and done some preliminary outlining, I'm leaning toward a more organized progression of topics that will still tie into current interests and developments, but not be centered on them.  A bit more thought and putting ideas to text indicated that I could group the interest areas into biological categories (molecular, populations, developmental, neuro, etc) fairly easily, which would then allow for a 'foundations' post to introduce each major category, followed by posts that go over What We Know Now, Why We Care, and Where It's Going.  

Discuss: How to learn math?

12 [deleted] 09 October 2010 06:07PM

Learning math is hard. Those of you who have braved some of its depths: what did you discover that allowed you to go deeper?

This is a place to share insights, methods, and tips for learning mathematics effectively, as well as resources that contain this information.

View more: Next