Comment author: MattG2 31 May 2017 03:57:15AM 1 point

There seems to be a weird need in this community to over-argue obvious conclusions.

This whole post seems to boil down to:

  1. You are altruistic and smart.
  2. You want more altruistic and smart people.
  3. Therefore, you should propagate your genes.

Similar to the recent "Dragon Army Barracks", which seems to boil down to:

  1. We want an effective group organization.
  2. Most effective groups seem to be hierarchical with a clear leader.
  3. Therefore, it might make sense for us to try being hierarchical with a clear leader.

I mean, I get that there are a lot of mental models that led to these conclusions, and you want to share the mental models as well... but it might make sense to split the teaching of the mental models and the arguments themselves into separate pieces of content.

Comment author: pjeby 04 December 2014 05:55:00PM *  20 points

You can't fix what's wrong with you by trying to fix your parents or your relationship with them. As HPMOR would say, there is no point in assigning blame to a part of the system that you can't actually change.

That doesn't mean you're responsible for any of it; it just means that when it comes to this type of dysfunction, the real problem exists in your "inner parents" rather than your outer ones. That is, in your mental models of your parents: specifically, the part of your brain that predicts how they will act in certain situations and decides how you should feel/act about that.

Lots of people will say stuff about getting away from your parents, or how wrong they are, or how you should speak up to them, or whatever. This is irrelevant, because if your mental models make you feel bad about cutting off contact or speaking up, then you're going to have problems doing that. What you need, more than any of that, is to give up hope regarding your parents, whether you cut off contact or not.

What do I mean by "give up hope"? I mean that the thing that keeps you bound to their opinions is your desire to get something from them: love, acknowledgment, respect, etc. -- just as you're already starting to realize. As long as you feel it's possible for you to receive any of this, you'll be stuck trying to do things their way, or at least feeling like you should.

This happened because your parents put out "bait" that implies it's possible for you to get their good opinion (e.g. by being like your sister). Your brain thinks, "ah, so if I do what they want, maybe they'll give it to me."

A quick way to begin poking holes in this belief is to imagine that you have done everything perfectly to their desire, been the exact person they wanted you to be... and then ask that same part of your brain, "What would happen then?"

Most likely, if you actually reflect on it, you'll see that what your parents would do at that point is ignore you, or possibly tell your sister to be more like you... but there will not actually be any love, respect, etc. coming.

Don't logic this out, though. It's necessary for the part of your brain that makes emotional predictions to work this out for itself, by you asking questions and reflecting. Otherwise, the conclusion will just be in the logic part of your head, not the emotional part. The emotional part has to see, "oh, my prediction is in error - that wouldn't work out the way I want." Otherwise, you will be stuck knowing you should do something different, but still doing the same things you did before.

Unfortunately, this is not a one-off thing to fix. Your brain may have hundreds of specific expectations of the form "If I do this thing in this kind of circumstance, they will finally love/appreciate/whatever me", and each one may have to be handled separately. In addition, it can sometimes seem to generate new ones! The emotional brain doesn't generalize in the same way the logical brain does, and doesn't seem (in my experience) to abstract across different classes of expectations when making these kinds of changes. But that's my personal experience, and YMMV.

In a sense, the root cause of these "If only I do X, I will get Y" beliefs is a belief (alief, really, since it's not a logical thing) that you aren't worthy of receiving Y. There's like a part of our brains that imprints on our parents' behavior in order to learn what we're entitled to in our tribe, so to speak, and what we can expect to receive from others. If they don't give us love, respect, whatever, there's a part that learns "I have to earn this, then."

(The technical term, by the way, for this feeling that you don't deserve good things (love, appreciation, respect, etc.) and need to earn your "pats on the back" is shame, and it's the #1 byproduct of being treated the way narcissistic parents treat us. So when you look at or for books on dealing with this, that's a keyword to look out for.)

Giving up hope that you can earn these pats on the back in a particular area is one way to uproot shame, but there's another method that seems to bypass this piecemeal work and instead goes after a feeling that you deserve to receive Y as a fundamental right... thereby eliminating the feeling that you need to earn it.

The method is described in this book, though it does contain some new-age babble (which can be safely ignored if you focus on the specific instructions rather than the theory).

The approach described does seem to work at a higher level of abstraction than the other method I've described, by "giving up hope" of fixing various aspects of one's self rather than giving up the hope of getting positive interactions from others. It will still need to be applied to a lot of things, but it may cover more ground faster, if you can make it work for you. I don't have as much experience with it personally as with the other method, but at first glance it does seem to bring a much deeper sense of being at peace with myself and with the people whose "pats on the back" I previously sought.

Comment author: MattG2 08 May 2017 03:07:27AM 0 points

The link no longer works... what is the book?

Comment author: MrMind 09 November 2016 09:56:56AM *  10 points

The most important quality for a rationalist is to admit that you were wrong and change your mind accordingly: so I will say, as an exercise in strength and calibration, that I was totally wrong.

I thought, with a high degree of probability, that Clinton was going to be the next POTUS. Instead it's Trump. My model of the world was wrong, and I'll adjust accordingly.

Comment author: MattG2 11 November 2016 05:33:56PM 0 points

Are you tracking your calibration with something like PredictionBook? You may be generally well-calibrated, and this could have just been an instance of a low-probability event happening.

Comment author: Pimgd 11 November 2016 09:25:33AM 2 points

Good for you that you're doing this.

But... as has been said before... not everyone (and I mean "only 0-15%") seems to like (or even care about!) your posts.

I don't mean to say that you should stop posting here altogether (for that we have bans, if need be), but... maybe you should stop posting about InIn here? We're not your target audience. Some people here (myself included) disagree with your methods (like those endorsements by other organizations that turned out to be fake).

If you observe your recent interactions with LessWrong, would you say they are positive interactions? Maybe so; I don't know how you view yourself. But all I'm seeing is uninterested folk, people who do not care about you. My view is perhaps distorted, but what doesn't help are the shills - people who post on your posts with generic but praise-giving commentary, which you accept with thanks. But then it turns out they're part of your organization, or that you pay them. And not in a "this guy lives on my street, I paid him to paint my window frames" way - more of a "I pay this guy to promote my ideas" way. That sort of thing adds a generic excuse for any positive reactions you may get: "That person might only be saying that because they're a shill".

Keeping this reputation in mind, do you think it is a good idea to go ahead and ask for money from those uninterested folk? Why?

Here's my take on the idea. This is probably wrong, because I am not you, but I'm going to say it anyway, in the hope that some parts of it match and you realize that if different people come to different viewpoints with the same information, then someone is either missing evidence or has made a mistake.

You're partially stuck in a bubble that is your own organization. You saw the elections and the media surrounding them and experienced a disconnect. You feel your goal is to help society in general, and you think that building a platform and spreading the word will help. After some reflection you've come to realize that this whole platform idea doesn't work, and that if you want to convince others, you'll probably have to go and convince them in person. But... that's going to take a lot of time, and thus you'll need money. So you'll make a single post outlining your thoughts and just link it everywhere it's relevant.

There is nothing wrong with this plan of action at first sight. Take the last part - "make a single post, then link it everywhere in the hope of support". Great idea! Except if you do this repeatedly, without giving something back, whilst taking actions that cost reputation... you start to elicit responses like this. Responses that say "please stop".

Please stop asking for support here.

Please stop posting about InIn here.

You're probably a somewhat clever fellow, so you can contribute to a discussion just fine. So I don't think it would help if people were to block you (if that's even possible? ... It's probably possible, with a userscript if need be). But the posting of crap should stop.

Post some interesting articles. That voting thing was interesting: perhaps the calculations were off (?), but it sparked a good discussion. "I was on a radio show"/"I was on TV"/"InIn has ..." is not interesting.


This is not a kind post, but I believe it to be true and necessary. I don't know whether I crossed any social boundaries, but this is how I feel.

Comment author: MattG2 11 November 2016 05:29:02PM -1 points

You're making a lot of assumptions here about what other people think.

I like Gleb's content, and think that people who criticize his methods have a point, but also at times veer away from consequentialism into virtue ethics.

Comment author: MattG2 03 November 2016 05:27:04AM 0 points

So if I have a 1 in 60 million chance of being the decisive vote, and 1,000,000 other voters who also voted for the same candidate could also be seen as the "decisive vote", wouldn't that mean that my EV was $640,000/1,000,000 = $0.64, i.e. 64 cents?

Intuitively it seems like $640,000 for voting is way overvalued compared to some other actions, and this diffusion-of-responsibility argument seems to make some sort of sense.
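
To spell the division out (taking the $640,000 naive expected value quoted above as given):

    \[
    \mathrm{EV}_{\text{per voter}}
      = \frac{\$640{,}000}{1{,}000{,}000}
      = \$0.64
      \approx 64\ \text{cents}
    \]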

Comment author: gwern 20 September 2016 04:38:27PM 2 points

You want some sort of adaptive or sequential design (right?), so it's not surprising that optimal design isn't terribly helpful: those methods are more intended for fixed, up-front design of experiments. They also tend to be oriented towards overall information or reduction of variance, which doesn't necessarily correspond to your loss function. Having priors affects the optimal design somewhat (usually, you can spend fewer datapoints on the variables with prior information). For a Bayesian experimental design, you can simulate a set of parameters from your priors, simulate drawing n datapoints with a particular experimental design, fit the model, find your loss or your entropy/variance, record the loss/design, and repeat many times; then pick the design with the best average loss.
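
A minimal sketch of that simulation loop, assuming normal priors, a linear score model, and squared parameter-error loss (all the sizes, priors, and candidate designs below are illustrative placeholders, not anything prescribed above):

    import numpy as np

    rng = np.random.default_rng(0)

    k = 4                      # number of learning materials (assumed)
    n = 40                     # students per simulated experiment (assumed)
    prior_mean = np.zeros(k)   # prior mean of each material's effect (placeholder)
    prior_sd = np.ones(k)      # prior sd of each effect (placeholder)
    noise_sd = 1.0             # test-score noise (placeholder)

    def expected_loss(design, n_sims=500):
        """Monte Carlo estimate of a design's expected loss: draw parameters
        from the prior, simulate data under the design, refit, and score."""
        losses = []
        for _ in range(n_sims):
            beta = rng.normal(prior_mean, prior_sd)                # simulate parameters from the prior
            y = design @ beta + rng.normal(0.0, noise_sd, n)       # simulate n datapoints under the design
            beta_hat, *_ = np.linalg.lstsq(design, y, rcond=None)  # fit the model
            losses.append(np.sum((beta_hat - beta) ** 2))          # record the loss for this draw
        return np.mean(losses)

    # candidate designs: here, random 0/1 assignments of materials to students
    candidates = [rng.integers(0, 2, (n, k)).astype(float) for _ in range(20)]
    best = min(candidates, key=expected_loss)  # design with the best average loss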

If you are running the learning-material experiment indefinitely and want to maximize cumulative test scores, then it's a multi-armed bandit and so Thompson sampling on a factorial Bayesian model will work well & handle your 3 desiderata: you set your informative priors on each learning material, model the scores as a linear model (with interactions?), and Thompson sample from the model+data.
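
A sketch of what one step of that could look like under simplifying assumptions (conjugate normal linear model, known noise, no interactions; each arm is a 0/1 subset of materials, and observe_score is a hypothetical stand-in for actually testing the student):

    import numpy as np

    rng = np.random.default_rng(0)
    k = 4                                    # number of materials (assumed)
    arms = [np.array([(i >> j) & 1 for j in range(k)], float)
            for i in range(2 ** k)]          # every subset of materials as an arm

    noise_var = 1.0                          # known score noise (assumed)
    A = np.eye(k)                            # prior precision: put informative priors here
    b = np.zeros(k)                          # precision-weighted prior mean

    def assign_and_update(observe_score):
        """One Thompson-sampling step: sample coefficients from the posterior,
        give the arriving student the best-looking material set, then fold
        the observed score back into the posterior."""
        global A, b
        mean = np.linalg.solve(A, b)                    # posterior mean
        cov = np.linalg.inv(A)                          # posterior covariance
        beta_draw = rng.multivariate_normal(mean, cov)  # posterior sample
        x = max(arms, key=lambda a: a @ beta_draw)      # act greedily on the sample
        y = observe_score(x)                            # student takes the test
        A = A + np.outer(x, x) / noise_var              # conjugate posterior update
        b = b + x * y / noise_var
        return x, y

In a simulation you might call assign_and_update(lambda x: x @ true_beta + rng.normal(0, 1)) for some ground-truth true_beta; in the real class, observe_score would be the student's actual test result.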

If you want to find what set of learning materials is optimal as fast as possible by the end of your experiment, then that's the 'best-arm identification' multi-armed bandit problem. You can do a kind of Thompson sampling there too: best-arm Thompson sampling. See:

  http://imagine.enpc.fr/publications/papers/COLT10.pdf
  https://www.escholar.manchester.ac.uk/api/datastream?publicationPid=uk-ac-man-scw:227658&datastreamId=FULL-TEXT.PDF
  http://nowak.ece.wisc.edu/bestArmSurvey.pdf
  http://arxiv.org/pdf/1407.4443v1.pdf
  https://papers.nips.cc/paper/4478-multi-bandit-best-arm-identification.pdf

One version goes: with the full posteriors, find the action A with the best expected loss; for all the other actions B..Z, Thompson sample their possible value; take the action with the best loss out of A..Z. This explores the other arms in proportion to their remaining chance of being the best arm (better than A), while firming up the estimate of A's value.
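
That "one version" sketched with independent Gaussian posteriors per arm (stated in terms of value rather than loss; the posterior means/sds below are placeholders for whatever your fitted model gives you):

    import numpy as np

    rng = np.random.default_rng(0)

    post_mean = np.array([0.2, 0.5, 0.4])  # posterior mean per arm (placeholder)
    post_sd = np.array([0.3, 0.2, 0.4])    # posterior sd per arm (placeholder)

    def best_arm_ts_choice():
        """Keep the current leader at its posterior mean, Thompson-sample
        every other arm, and pull whichever then looks best."""
        leader = int(np.argmax(post_mean))       # arm with best expected value
        values = rng.normal(post_mean, post_sd)  # Thompson sample all arms
        values[leader] = post_mean[leader]       # leader stays at its mean
        return int(np.argmax(values))

So the non-leader arms get pulled roughly in proportion to their remaining chance of beating the leader, matching the description above.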

Comment author: MattG2 22 September 2016 07:17:29PM *  1 point

> You want some sort of adaptive or sequential design (right?), so it's not surprising that optimal design isn't terribly helpful: those methods are more intended for fixed, up-front design of experiments.

So after looking at the problem I'm actually working on, I realize an adaptive/sequential design isn't really what I'm after.

What I really want is a fractional factorial design that takes a prior (and trades off information learned against cumulative score). It seems like the goal of a multi-armed bandit is to do exactly that, but I only want to do it once, assuming a fixed prior which doesn't update over time.

Do you think your Monte Carlo Bayesian experimental design is the best way to do this, or can I utilize some of the insights from Thompson sampling to make this process a bit less computationally expensive (which is important for my particular use case)?

Comment author: MattG2 20 September 2016 04:09:35PM 0 points

Agents based on lookup tables.

Comment author: MattG2 20 September 2016 03:52:15PM *  4 points

Let's say I have a set of students, and a set of learning materials for an upcoming test. My goal is to run an experiment to see which learning materials are correlated with better scores on the test via multiple linear regression. I'm also going to make the simplifying assumption that the effects of the learning materials are independent.
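
In symbols, the model I have in mind (no interaction terms, per the independence assumption; x_ij indicates whether student i received material j):

    \[
    \mathrm{score}_i = \beta_0 + \sum_{j=1}^{k} \beta_j x_{ij} + \varepsilon_i,
    \qquad x_{ij} \in \{0, 1\}
    \]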

I'm looking for an experimental protocol with the following conditions:

  1. I want to be able to give each student as many learning materials as possible. I don't want a simple RCT, but a factorial experiment where students get many materials and the regression teases out each material's effect.

  2. I have a prior about which learning materials will do better; I'd like to utilize this prior by initially distributing those materials to more students.

  3. (Bonus) Students are constantly entering this class, so I'd love to be able to do some multi-armed bandit thingy where, as I get more data, I continually update this prior.

I've looked at most of the links from https://en.wikipedia.org/wiki/Optimal_design but they mostly show the mathematical interpretation of each method, not a clear explanation of the conditions under which you'd use it.

Thanks!