AndHisHorse

This seems quite similar to the "Gish gallop" rhetorical technique.

Perhaps, in a parallel to the kings mentioned earlier, this could be interpreted as Orion having seen the fortunes of continents rise and fall. Orion has seen the prominence of Africa as the source of humanity, and its subjugation by Europe; it has seen the isolation and the global power of the Americas; it has seen the mercantile empires of the West and its dark ages.

While such an epistemic technology would be incredibly valuable if successful, I think the possibility of failure should give us pause. In the worst case, it effectively has the same properties as arbitrary censorship: one side "wins" and thereafter gets to decide what is legitimate and what counts toward changing the consensus, perhaps by manipulating the definitions of success or testability. Unlike in sports, where the thing being evaluated and the thing doing the evaluating are generally separate (the success or failure of athletes doesn't impede the abilities of statisticians, and vice versa), there is a risk that the system becomes both its own subject and its own controller.

I do think "[a]bility to contribute to the thought process seems under-valued" is very relevant here. A prediction-tracking system captures one...layer[^1], I suppose, of intellectuals: the layer concerned with making frequent, specific, testable predictions about imminent events. Those whose theories are vaguer, have more complex outcomes, or yield predictions less frequently[^2][^3], while perhaps instrumental to the frequent, specific, testable predictors, would not be recognized, unless there were some sort of complex system compelling the assignment of credit to the vague contributors (and presumably to their vague contributors, et cetera, across the entire intellectual lineage, or at least to some maximum feasible depth).
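To make that credit-assignment idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration rather than a proposal from the original comment: the lineage graph, the decay factor, and the depth cap are all invented parameters, and the names echo the example below.

```python
# A minimal sketch of lineage-based credit assignment. The lineage
# graph, decay factor, and depth cap are all hypothetical choices.

def assign_credit(lineage, predictor, credit=1.0, decay=0.5, max_depth=3):
    """Propagate a successful prediction's credit up the intellectual
    lineage: each upstream contributor shares a decayed fraction of
    the credit, down to some maximum feasible depth."""
    scores = {predictor: credit}
    frontier = [(predictor, credit)]
    for _ in range(max_depth):
        next_frontier = []
        for person, amount in frontier:
            upstream = lineage.get(person, [])
            if not upstream:
                continue
            share = amount * decay / len(upstream)  # split decayed credit evenly
            for contributor in upstream:
                scores[contributor] = scores.get(contributor, 0.0) + share
                next_frontier.append((contributor, share))
        frontier = next_frontier
    return scores

# Hypothetical lineage: Alice's testable predictions build on George's
# vaguer theory, which in turn builds on an older body of work.
lineage = {"Alice": ["George"], "George": ["older_theorists"]}
print(assign_credit(lineage, "Alice"))
# {'Alice': 1.0, 'George': 0.5, 'older_theorists': 0.25}
```

Even this toy version shows why the scheme is nontrivial: someone has to maintain the lineage graph, and the decay and depth parameters are themselves contestable judgments about who deserves what.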

This would be useful for helping the lay public understand the outcomes of events, but not necessarily for helping them learn the actual models behind them; it leaves them with models like "trust Alice, Bob, and Carol, but not Dan, Eve, or Frank" rather than "Alice, Bob, and Carol all subscribe to George's microeconomic theory, which says that wages are determined by the House of Mars, and Dan, Eve, and Frank's failure to predict changes in household income using Helena's theory that wage increases are caused by three-ghost visitations to CEOs' dreams substantially discredits it". Intellectuals could declare that their successes or failures, or those of their peers, were due to adherence to a specific theory, or the lay public could try to infer as much, but this is another layer of intellectual analysis that is nontrivial unless everyone wears a jersey declaring which theoretical school of thought they follow (useful if there are a few major schools of thought in a field and the main conflict is between them, in which case we really ought to be ranking those instead of individuals; not terribly useful otherwise).

[^1]: I do not mean to imply here that such intellectuals are above or below other sorts. I use "layer" here in the same way it is used in neural networks, denoting that a layer's elements are posterior to other layers and closer to a human-readable/human-valued result.

[^2]: For example, someone who predicts the weather will have much more opportunity to earn trust than someone who predicts elections. Perhaps this is how it should be; while the latter's predictions are less frequent, they will likely have a wider spread, and if our overall confidence in election-predicting intellectuals is lower than our confidence in weather-predicting intellectuals, that might just be the right response to a field with relatively fewer data points: less confidence in any specific prediction or source of knowledge (a toy calculation of this effect follows after these notes).

[^3]: On the other hand, these intellectuals may be less applied not because of the nature of their field, but because of the nature of their specialization; a grand and abstract genius could produce incredibly detailed models of the world, and the several people who run the numbers on those models would be the ones rewarded with a track record of successful predictions.
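The point in footnote 2 can be illustrated with a toy Bayesian calculation: the same observed hit rate yields a much tighter credible interval when it comes from many resolved predictions than from a few. This is a minimal sketch; the track records below are invented, and the uniform Beta(1, 1) prior is an arbitrary choice, neither taken from the original comment.

```python
# Toy illustration: fewer data points rationally warrant less
# confidence in any individual predictor. Hit counts are invented.
from scipy.stats import beta

def credible_interval(hits, total, prior=(1, 1), level=0.95):
    """Credible interval for a predictor's hit rate, using a Beta
    prior updated with a binomial record of hits and misses."""
    a, b = prior
    return beta.interval(level, a + hits, b + total - hits)

# A weather forecaster gets roughly daily feedback; an election
# forecaster may accumulate only a handful of resolved calls.
print(credible_interval(hits=730, total=1000))  # narrow: roughly (0.70, 0.76)
print(credible_interval(hits=7, total=10))      # wide:   roughly (0.39, 0.89)
```

Both records show about a 70% hit rate, but the sparse one leaves us far less certain where the predictor's true skill lies, which matches the footnote's intuition about election predictors.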

Why _haven't_ they already switched? Presumably, these companies are full of people with at least vague incentives pointing at maximizing efficacy, yet they're leaving a "clearly superior" product on the table. Perhaps the answer is some sort of systemic, widespread failure of decision-making, or a decision-making success under different criteria (a lower tolerance for the risk of change, perhaps, than these same systems have now) rather than a reflection of some inadequacy of RT-LAMP, but "the folks with the expertise and incentive to get it right are all getting it wrong and leaving money on the table" is a more complex explanation than "there are shortcomings to RT-LAMP that I haven't considered", and I'd like to see further evidence in its favor.

You may be familiar with the term "Technological Singularity" as used to describe what happens in the wake of the development of superintelligent AGI; the term is not mere hyperbole but refers to the belief that what follows such a development would be incredibly and unpredictably transformative, subject to new phenomena and patterns of which we may not yet be able to conceive.

I don't believe it would be smart to invest with such a scenario in mind; we have little reason to believe that one's pre-Singularity wealth would matter post-Singularity in any way that would make it wise to include such a term in one's expected-value calculations and decision-making. It would be not entirely unlike buying stock based on which companies would most benefit from the announcement of an incoming Earth-shattering asteroid. The development of superintelligent AGI is an existential threat to just about every institution, including the stock market and our current conception of the economy in general. A rational, entirely selfish actor, or aggregate thereof, does not make plans for what happens after its death.

However, I must admit that I have no data on the subject, and while I would not guess that there is much relevant data available, I imagine there is some. Did the U.S. stock market account for which companies might be most successful in the event of a Soviet conquest of the U.S.? Is the potential profitability of a company in a world transformed by a global Communist revolution accounted for in its current stock price? I do not know, but I would be very surprised to learn that the stock market prices in scenarios in which it, and the institutions on which it depends, are unlikely to continue to exist in recognizable forms.

The example of the pile of sand sounds a lot like the Chinese Room thought experiment, because at some point the function for translating between states of the "computer" and the mental states it represents must begin to resemble a giant look-up table (subjectively, at least, but also by some information-theoretic measure of similarity). Perhaps it would be accurate to say that a pile of sand with an associated translation function sits somewhere on a continuum between an unambiguously conscious mind (if anything can be said to be conscious), such as a natural human mind, and a Chinese Room. In that case, the issue raised by this post is an extension of the Chinese Room problem and may not require a separate answer, but it does do the notable service of showing that the Chinese Room lies along a continuum rather than at one pole of a binary.

Not entirely true; low sperm counts are associated with low male fertility in part because sperm carry enzymes which clear the way for other sperm - so a single sperm isn't going to get very far.

In addition to enjoying the content, I liked the illustrations, which I did not find necessary for understanding but which did break up the text nicely. I encourage you to continue using them.
