What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.

Fair. Nevertheless, if the average of the group is around my own level, that's good enough for me if they're also actively trying. (Pretty much by definition of the average, really...)

Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…

... Okay, sorry, two-place function. I don't seem to have much trouble distinguishing.

(And yes, you can reasonably ask how I know I'm right, and whether I myself am good enough at the relevant Xs to tell, etc. etc., but... well, at some point that all turns into wasted motions. Let's just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I'm fairly confident I'll at least not be easily misled.)


Actually, no, I explicitly want both 1 and 2. Merely being more X than me doesn't help me nearly as much as being both more X and also always on the lookout for ways to be even more X, because they can give me pointers now and keep pace with me once I catch up.

And sure, 3 is indeed what often happens.

... First of all, part of the whole point of all of this is to be able to do things that often fail, and succeed at them anyway; being able to do the difficult is something of a prerequisite to doing the impossible.

Secondly, all shounen quips aside, it's actually not that hard to tell when someone is merely pretending to be more X. It's easy enough that random faux-philosophical teenagers can do it, after all :V. The hard part isn't staying away from the affective death spiral; it's finding the people who are actually trying among the pretenders -- the ones who, almost definitionally, are not talking nearly as much about it, because "slay the Buddha" is actually surprisingly general advice.

The thing is -- and here I disagree with your initial comment thread as well -- peer pressure is useful. It is spectacularly useful and spectacularly powerful.

How can I make myself a more X person, for almost any value of X, even values we would assume to be entirely inherent or immutable? Find a crowd of X people who are trying to be more X, shove myself in the middle, and stay there. If I want to be a better rationalist, I want friends who are better rationalists than me. If I want to be a better forecaster, I want friends who are better forecasters than me. If I want to be a more effective altruist, earn more to give more, learn more about Y academic topic, or any other similar goal, the single most powerful tool in my toolbox -- or at least the most powerful tool that generalizes so easily -- is to make more friends who already have those traits.

Can this go bad places? Of course it can. It's a positive feedback cycle with no brakes save the ones we give it. But...

... well, to use very familiar logic: certainly, it could end the world. But if we could harness and align it, it could save the world, too.

(And 'crowds of humans', while kind of a pain to herd, are still much much easier than AI.)

That, and the fact that when making decisions, it's *really important* to have non-subjective reasons -- or, if you have subjective reasons, to still have objective reasons why they matter, like "if I don't like someone on a personal level, I really shouldn't spend the rest of my life with them" in dating.

So people are used to a mode of thought where a subjective opinion means "you're not done explaining"/"you haven't spent enough mental effort on the problem," and they engage the same -- honestly, very productive, very healthy -- mechanisms they use when justifying a command decision. It just happens to be misapplied in this case.

I'd like to point out that, technically speaking, basically all neural nets are running the exact same code: take one giant matrix and multiply it by your input vector; run an elementwise function on the result; pass that to the next stage, which does the exact same thing. So part 1 shouldn't surprise us too much; what learns and adapts and is specialized in a neural net isn't the overall architecture or logic, but just the actual individual weights and whatnot.
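(A minimal sketch of that point, in NumPy -- the layer sizes, ReLU nonlinearity, and random weights below are my own illustration, not anything from the original discussion. The point is that every stage runs literally the same two operations; only the weight matrices differ.)

```python
import numpy as np

def forward(x, layers):
    """Run an input vector through a stack of (weights, bias) stages."""
    for W, b in layers:
        # Every stage: one matrix multiply, then an elementwise function.
        x = np.maximum(0.0, W @ x + b)  # ReLU as the elementwise function
    return x

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(16, 8)), np.zeros(16)),  # "one giant matrix" per stage
    (rng.normal(size=(4, 16)), np.zeros(4)),
]
print(forward(rng.normal(size=8), layers))  # 4-dimensional output
```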


Well, it *does* tell us that we might be overthinking things somewhat -- that there might be One True Architecture for basically every task, instead of using LSTMs for video and CNNs for individual pictures and so on and so forth -- but it's something I can see adding up to normality pretty easily.

I was kind of iffy about this post until the last point, which immediately stood out to me as something I vehemently disagree with. Whether or not humans naturally have values or are consistent is irrelevant: that which is not required happens only at random, and thus tends not to happen at all. So if you aren't very, very careful to actually make sure you're working in a particular coherent direction, you're probably not working nearly as efficiently as you could be, and may in fact be running in circles without noticing.

As someone who does a whole lot of pull-based learning, I'm going to chime in and say that using it as your main method of learning is probably not the best idea. tl;dr: Learning on the job is powerful, but it overfits by nature; while there's probably more than a little confirmation bias from us ivory tower types, it's almost certainly drowned out by "everything comes back to math and logic" and "the truth is all of a piece".

There is a fairly natural divide, IMO, between "engineering fields" and "theoretical fields" - fields that are directly aimed at solving actual problems, and fields that are more about exploring what is possible and figuring out what is true in general. Pull-based learning is tempting in engineering fields for all the reasons you list - most of being able to solve a real-world problem is precisely knowing about all the little fiddly bits of reality, and there is not yet a good way of predicting which fiddly bits you will need to know while sitting in your armchair (metaphorically speaking). In that regard you can get pretty damn far learning only what you "need to know," to the point of finishing a number of fairly large projects...

... but your knowledge is fundamentally grounded in air, and it's not always obvious how your experience generalizes. To generalize knowledge you need to build an abstract model, and the skill of building abstract models is very, very theoretical (almost by definition!). In particular, using pull-based learning as a primary tool reminds me of trying to learn physics without first learning the underlying math. Certainly, you can make use of what other people have done, and when they start pulling out integrals and Lagrangians and group symmetries you can go look those up and learn them then - but that won't let you make your own generalizations, or (as Darmani says) contribute to pushing the boundaries on your own.

Personally, I find pull-based learning most useful as the first/last step in a loop. You do some project, and midway through you find there's a whole bunch of stuff you wish you knew better, so you learn just enough to get it done. But then you go off and take a course in the dangling ends you discovered, and maybe explore a few branches off that tree too, before you come back to a new project challenging enough to force you to learn on the job again - and also enough to make you use your new skills, judge which of them are most important, and generally see them in a new light/in practice. To use the Pokemon metaphor: it's like, after losing against the Elite Four, booting up someone else's save file right before the Elite Four in a totally different version, and trying to pick up general "anti-Elite Four" tricks rather than "oh, this particular Elite Four has a Ghost specialist, Ice specialist, Fire specialist, and Steel/Psychic specialist, in that order," which doesn't generalize to other games.

All of your advice seems designed for a longer post published outside LW. None of it seems appropriate for a ~1k-word short published in the same place as, and three days after, both the last chapter of Inadequate Equilibria and "Hero Licensing," both of which I mention in the text.

With the partial exception of the first -- but I have been using "linkhyrule5" as an alias and "link" as a nickname for the better part of two decades now, and have not been led to believe that it was particularly hard to decipher. Illusion of transparency, yes, but also evidence to the contrary.

A final note, a postscript that doesn't belong in the main article:

The correct word for the final concept is not "arrogance," because arrogance has, as I note in the first sentence, long since been conflated with the other two, with "hubris" and "pride." It is, nonetheless, what I believe many people mean when they say "arrogance," and so it is the word I use here. And because it is something to be discarded, its linguistic affinity to "hubris" and "pride" means that those related concepts get thrown out with the bathwater.

A better word for the last concept, for "the state of being in the habit of mockery", would be useful - though not for this particular point.
