Perhaps, in a parallel to the kings earlier mentioned, this could be interpreted as Orion having seen the fortunes of continents rise and fall. Orion has seen the prominence of Africa as the source of humanity, and its subjugation by Europe; it has seen the isolation and the global power of the Americas; it has seen the mercantile empires of the West and its dark ages.
While, if successful, such an epistemic technology would be incredibly valuable, I think that the possibility of failure should give us pause. In the worst case, it effectively has the same properties as arbitrary censorship: one side "wins" and thereafter gets to decide what is legitimate and what counts toward changing the consensus, perhaps by manipulating the definitions of success or testability. Unlike in sports, where the thing being evaluated and the thing doing the evaluating are generally separate (the success or failure of athletes doesn't impede the abilities of statisticians, and vice versa), here there is a risk that the system is both its own subject and its own controller.
I do think "[a]bility to contribute to the thought process seems under-valued" is very relevant here. A prediction-tracking system captures one...layer[^1], I suppose, of intellectuals: the layer concerned with making frequent, specific, testable predictions about imminent events. Those whose theories are vaguer, have more complex outcomes, or simply come less frequently[^2][^3], while perhaps instrumental to the frequent, specific, testable predictors, would not be recognized, unless there were some sort of complex system compelling the assi...
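To make concrete the kind of scoring such a prediction-tracking system would do, here is a minimal sketch using the Brier score; the function name, the example data, and the choice of Brier score are my own illustrative assumptions, not anything specified above:

```python
# Minimal sketch of a prediction-tracking scorer using the Brier score.
# Each prediction is a (stated probability, whether the event occurred) pair.
# All names and numbers here are illustrative assumptions.

def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always guessing 50% earns 0.25."""
    return sum((p - (1.0 if occurred else 0.0)) ** 2
               for p, occurred in predictions) / len(predictions)

# A forecaster who makes frequent, specific, testable predictions:
specific_forecaster = [(0.9, True), (0.8, True), (0.2, False), (0.7, False)]
print(round(brier_score(specific_forecaster), 3))  # prints 0.145
```

Note how the scorer only has a slot for explicit probability-outcome pairs; the vaguer theorists whose work feeds those predictions never appear in its ledger, which is exactly the gap described above.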
Why _haven't_ they already switched? Presumably, these companies are full of people with at least vague incentives pointing at maximizing efficacy, yet they're leaving a "clearly superior" product on the table. The answer may be that this is some sort of systemic, widespread failure of decision-making, or a decision-making success under different criteria (a lower tolerance for the risk of change, perhaps, than these same systems have now), rather than a reflection of some inadequacy of RT-LAMP, but "the folks with the expert...
You may be familiar with the term "Technological Singularity" as a description of what happens in the wake of the development of superintelligent AGI; the term is not mere hyperbole but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.
I don't believe it would be smart to invest with such a scenario in mind; we have little reason to believe that how much pre-Singularity wealth one has would matter pos...
The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point, the function for translating between states of the "computer" and the mental states which it represents must begin to (subjectively, at least, but also with some sort of information-theoretic similarity) resemble a giant look-up table. Perhaps it would be accurate to say that a pile of sand with an associated translation function is somewhere on a continuum between an unambiguously conscious (if anything can be said to be conscious) mind ...
I'm not sure if this is a brilliantly ironic example of the lack of absolute applicability of these guidelines or just a happy accident.
Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm - so a single sperm isn't going to get very far.
In addition to enjoying the content, I liked the illustrations, which I did not find necessary for understanding but which did break up the text nicely. I encourage you to continue using them.
1) Historical counter-examples are valid. Counter-examples of the form "if you had followed this premise at that time, with the information available in that circumstance, you would have come to a conclusion we now recognize as incorrect" are valid and, in my opinion, quite good. Alternatively, this other person has a very stupid argument; just ask about other things which tend to be correlated with what we consider "advanced", such as low infant mortality rates (does that mean human value lies entirely in surviving to age five?) or ta...
I agree that growth shouldn't be a major marker of success (at least at this point), but even if it's not a metric on which we place high terminal value, it can still be a very instrumentally valuable one - for example, if our insight rate per person is very expensive to increase and growth is our most effective way to increase total insight.
So while growth should be sacrificed for impact on other metrics - for example, if growth has a strong negative impact on insight rate per person - I would say it's still reasonable to assume it's valuable until proven otherwise.
Are we in any real danger of growing too quickly? If so, this is relevant advice; if not - if, for example, a doubling of our growth rate would bring no significant additional danger - I think this advice has negative value by making an improbable danger more salient.
Not necessarily; the three sorts of excellent organizations you mention are organizations whose excellence is recognized by the rest of the world in some way, granting its members prestige, opportunities, and money. I suspect this is what attracts people to a large extent, not a general ability to detect organizational goodness. This sort of recognition may be very difficult to get without being very good at whatever it is the organization does, but that does not imply that all good organizations are attractive in this way.
Having recently read The Craft & The Community: A Post-Mortem & Resurrection, I think that its advice on recruiting makes a lot of sense: meet people in person, evaluate who you think would be a good fit - especially those who cover skill or viewpoint gaps that we have - and bring them to in-person events.
I would be very interested in reading, say, a blog post (or series thereof) exploring why this happens (and, if remotely possible, directing motivated individuals towards ways to support faster adoption of successful treatments).
First, I think this is an excellent idea, and I wish you the best of luck.
Second, what mechanisms do you have in place for getting feedback about the content you produce? I'm aware that for a broadcast medium using a platform over which you do not have full control, your feasible options may be limited, but I strongly encourage you to consider (possibly when this project has reached a stable state, because this will take a non-trivial amount of resources) some amount of focus group A/B testing for comprehension and internalization. From the beginning,...
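As a sketch of what analyzing such a comprehension A/B test could look like once you have counts in hand, a two-proportion z-test is one simple option; the function name, the group sizes, and the pass counts below are invented for illustration:

```python
# Two-proportion z-test comparing comprehension-check pass rates between
# two presentation variants. All counts below are illustrative assumptions.
from math import sqrt, erf

def two_proportion_z(passes_a, n_a, passes_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: equal pass rates."""
    p_a, p_b = passes_a / n_a, passes_b / n_b
    pooled = (passes_a + passes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1+erf(x/sqrt 2))/2.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical focus-group results: 40 of 60 viewers of variant A pass the
# comprehension check, versus 27 of 60 for variant B.
z, p = two_proportion_z(40, 60, 27, 60)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point of running the numbers rather than eyeballing rates is that with focus-group-sized samples, differences that look decisive can easily be noise, which matters if the testing budget is a non-trivial amount of resources.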
I think this is a very valuable concept to keep fresh in the public consciousness.
However, I think it is in need of better editing; right now its formatting and organization make it, for me at least, less engaging. This is less of an issue because it's short; I imagine that a longer piece in the same style would suffer more reader attrition.
It might help to read over your piece and then try to distill it down to the essentials, repeatedly; it reads right now as if it is only a few steps removed from straight stream-of-consciousness. Or it might not; a...
Perhaps part of the desire to avoid conformity is a desire to avoid comparability, for fear of where one might end up in a comparison.
If I am one of one hundred people doing the same thing in the same way - working on a particular part of an important problem, or embracing a very specific style - I run the psychological risk of discovering that I am strictly worse than a large number of other people.
If, instead, I am one of one hundred people doing different things in different ways, things about me - the skills I bring to bear on the problem - cannot easi...
You have the right to have beliefs which you know or could reasonably conclude are probably false, though it is advisable you not exercise it.
You have the right to have beliefs which you have reason to believe are probably true, even if an overwhelming majority of well-informed experts disagrees, though it is advisable you exercise it only when you have a very good reason to believe you are right (i.e. when you have carefully considered expert majority disagreement as evidence whose strength is relative to the capability of the experts and the nature of the sy...
This seems quite similar to the "Gish gallop" rhetorical technique.