Today my coworker Marcello pointed out to me an interesting anti-majoritarian effect.  There are three major interpretations of probability: the "subjective" view of probabilities as measuring the uncertainty of agents, the "propensity" view of probabilities as chances inherent within objects, and the "frequentist" view of probabilities as the limiting value of long-run frequencies.  I was remarking on how odd it was that frequentism, the predominant view in mainstream statistics, is the worst of the three major alternatives (in my view, you have to presume either uncertainty or propensity in order to talk about the limiting frequency of events that have not yet happened).

And Marcello said something along the lines of, "Well, of course.  If anything were worse than frequentism, it wouldn't be there."  I said, "What?"  And Marcello said, "Like the saying that Mac users have, 'If Macs really were worse than Windows PCs, no one would use them.'"

At this point the light bulb went on over my head - a fluorescent light bulb - and I understood what Marcello was saying: an alternative to frequentism that was even worse than frequentism would have dropped off the radar screens long ago.  You can survive by being popular, or by being superior, but alternatives that are neither popular nor superior quickly go extinct.

I can personally testify that Dvorak seems to be much easier on the fingers than Qwerty - but this is not surprising, since if Dvorak really were inferior to Qwerty, it would soon cease to exist.  (Yes, I am familiar with the controversy in this area - bear in mind that this is a politically charged topic since it has been used to make accusations of market failure.  Nonetheless, my fingers now sweat less, my hands feel less tired, my carpal tunnel syndrome went away, and none of this is surprising because I can feel my fingers traveling shorter distances.)

In any case where you've got (1) a popularity effect (it's easier to use something other people are using) and (2) a most dominant alternative, plus a few smaller niche alternatives, then the most dominant alternative will probably be the worst of the lot - or at least strictly superior to none of the others.

Can anyone else think of examples from their experience where there are several major alternatives that you've heard of, and a popularity effect (which may be as simple as journal editors preferring well-known usages), and the most popular alternative seems to be noticeably the worst?

Addendum:  Metahacker said of this hypothesis, "It's wrong, but only sometimes."  Sounds about right to me.


Less popular choices must give advantages to compensate for their unpopularity, but that doesn't mean they are "better." Many a small religious sect is bound together all the stronger for being a persecuted minority, and that bond may well be the advantage they seek.

Example: Cognitive therapy (vs. SSRIs like Prozac) - studies show that good cognitive therapy can bring about similar changes in brain patterns, and relapse is lower once treatment (or medication) is terminated. I don't know how expensive good therapy is (so the user-end costs might be the same), but Pfizer sure isn't making its buck on cognitive therapy. (You can google stats on SSRI use and comparative studies.)

Tons.

Web browsers, Operating Systems, analog videotape formats (Beta/VHS) (I'd look for this effect in hidef videodiscs soon as well), peripheral interconnects (USB/firewire), mobile phone transmission protocols (CDMA/GSM), mobile digital audio players (iPod/Zune/etc), programming languages (C++/Java/Python)...

Some examples here might be better than others.

If you can survive by being popular or superior, how does one get to be popular in the first place? I would think that a lot of the time something is more popular because it is in fact superior in some way.

I remember long ago I had a 3DO video game console at a time when most people still had SNES or were getting Playstations. I always consoled myself that I had the superior, though far less popular system. But the reality was I had the inferior product, and the popular product was popular simply because it was better.

All products have tradeoffs, and for different consumers, the tradeoffs will vary. Some people prefer the most popular web browser because it is the one which will work at the greatest number of web sites. Others prefer a less popular browser because it has features the popular one does not. This allows the less popular browser to continue to exist even though the most popular one is better in some ways. The fact that multiple products exist doesn't prove that the less popular ones are better in general, merely that they are better for some people.

What about time and date formats? While some formats based on a single scalar value (e.g. Unixtime) are common in certain applications, the system most commonly used by humans to specify the time of day uses at least three different units (hours, minutes and seconds), with a conversion factor (60) that isn't a power of ten. The rules for calculating dates are even more complicated. Time zones complicate matters further; a single '%Y-%m-%d %H:%M:%S' string doesn't even unambiguously specify a point in time, unless one already knows what time zone the sender is using. From a purely algorithmic perspective, this is a really poor way of specifying a scalar (approximation; none of the systems mentioned deal with time dilation) value. While it encodes certain additional information (position of a planet relative to the local sun), it also makes performing arithmetic on datetimes a lot more difficult than it needs to be.
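A minimal Python sketch of the ambiguity (the particular timestamp and zone offsets below are just illustrative): the same wall-clock string names different instants depending on the unstated time zone, while the scalar Unix time does not.

```python
from datetime import datetime, timezone, timedelta

# The same '%Y-%m-%d %H:%M:%S' string...
wall_clock = "2007-11-16 09:30:00"
naive = datetime.strptime(wall_clock, "%Y-%m-%d %H:%M:%S")

# ...names different instants depending on which zone you assume.
as_utc = naive.replace(tzinfo=timezone.utc)
as_est = naive.replace(tzinfo=timezone(timedelta(hours=-5)))
print(as_utc.timestamp())  # 1195205400.0
print(as_est.timestamp())  # 1195223400.0 -- a different instant, five hours later

# A single scalar (Unix time) is unambiguous and trivially compared or subtracted.
print(as_est.timestamp() - as_utc.timestamp())  # 18000.0 seconds
```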

Hmmm... this fits with my theory on why English units still survive in a world with the metric system. (Which the time question makes me think of.) Which is to say, it seems to me that the metric system is vastly more convenient for doing math across different units -- not surprisingly, as the system is optimized for that. But in general, the units in English measure are more conveniently sized for everyday use. (Take temperature, for instance -- 0F to 100F is nearly perfect for everyday temperatures, at least here in Michigan -- temperatures outside that range are rare events (just a couple days a year, usually), whereas temperatures from 0-10 and 90-100 are much more common.)

The fun thing here is you can make the argument both ways depending on the scale of your comparison -- either the metric system survives in the scientific community in the US because it is superior for scientific matters, or the English system survives in the US (against the world) because it is superior in everyday use.

Toyota strictly dominates US cars and is more popular. Wal-Mart vs. K-Mart? Southwest Air vs. other Airlines? Interpretations of Quantum Mechanics?

I think this is a good heuristic, but not to be taken too seriously.

Toyota strictly dominates US cars and is more popular

This may be true, but is there a popularity effect in place with regards to Toyota versus other car manufacturers? I ask because I honestly know next to nothing about cars. The only thing I can think of is that it might be easier to get a more popular car repaired, though I have no idea how true this is. Can someone more knowledgeable about the subject than I am weigh in?

Forgive me - I'm not a statistician nor an economist - but isn't this just a Pareto distribution thing?

In the comments section of this post (http://www.overcomingbias.com/2006/12/bosses_prefer_o.html#comments), Perry provided what seemed to me to be a very insightful analysis of how organizations, as they scale, switch from being entrepreneurial and innovative to sclerotic and less competitive, largely through the arrival of what I would call "cashflow appropriators", who as Perry says drive out those with real competence, forcing them into more entrepreneurial situations. In the market for goods and services (and even education), reputation and cashflows are much more durable than people credit, and can mask incompetence for, well..., longer than you (the innovative competitor) can remain solvent. That means the products that dominate may very often be technically or functionally inferior.

Political managers know they can get away with this (intuitively, or consciously and cynically), and care less for the innovation process, than how to grab the cashflow once created. They avoid the tricky bit, which in the old days literally involved getting your hands dirty, but now probably means knowing some "code". These people have more time on their hands to cultivate their reputations, and so will be more visible to us through the press. They are generally less interesting than their PR, of which business schools can form an important location of support. In truth they are not news, or really newsworthy.

Innovators, where they see an opportunity, will try and attack those markets/companies where there is a chance of undermining the dominant player, and diverting those cashflows. There is no guarantee that a superior product will succeed. There is a lot of necessary delusion and/or stoicism among entrepreneurs, artists and even sportsmen to reach the top, which is overlooked in the backstory. Product innovation may have to go through several cycles and different companies to take off, where each iteration may already be superior to the incumbent, but not good enough or lucky enough to break through.

Then, the last company/product succeeds on the back of the earlier partial innovations. Just like the last person to take the lid off the marmalade jar, the winner attributes success to his own strength rather than the cumulative small, frictional movements of his predecessors. The Harvard case study is written in his favour, and this folklore eventually will be discussed ad nauseam in the tutorial, boardroom, at the watercooler, and in the blogosphere to bolster all sorts of spurious arguments and subsequent copycat strategies, which don't succeed because all the data for success has not been captured.

Of course, functional and technical superiority may be too narrow a definition of the "product". To be fair to incumbents, a product may be superior because the information on reliability that the brand carries, or the service or retail network, reduces the information costs of assessing the alternatives for some significant number of users. This explains some of the attitude of the appropriator type in pursuing a defensive satisficing strategy. Also, on the part of the consumer (and I am like this with PCs), sunk cost biases can also be quite durable. A lot of information may need to reach me before I see the value of switching. The "product guy", a persona we all seem to adopt in these kinds of conversation, underestimates this at his peril.

Academically, I'm not sure how to apply this. But could one propose that neo-classical economics has a lot of vested interest built into it? Behavioural economics, which I hear requires a smaller body of literature to master and may be more applicable/superior to real world marketing and political issues, would require a lot of academics to abandon a lifetime's work, which they are of course not inclined to do. But at some point, this situation will tip.

Wow, I had never heard any claims of superiority for the English measurement system. I think that with respect to temperature, 1C clearly comes closer to the minimum perceptible temperature difference than 1F does. 1cm is clearly better for "something small" than 1 inch, though 1 caliber is possibly better for "something really tiny" than 1mm, but it isn't used much. Meters are better than feet for large things, and liters better than pints for practical fluid volume; grams are a bit small, kilograms a bit large, and newtons "just right" but not widely used.

A friend of mine claims Fahrenheit is more convenient because of "-ties". "Today it will be in the fifties/sixties/thirties/high seventies." Celsius doesn't have conveniently-spoken ranges that give users a general idea of the weather. I countered with high and low teens, low twenties, but I don't think his point is completely invalid.

You say centimeters are better for small things and meters better for large things, but neither is very useful for things that might constitute an arm-load. I'm not sure that sentence is very clear, so I'll try examples. My laptop is 36 centimeters wide, which is an inconveniently large number of units for it to be, but it's only a little more than a foot. This textbook: about a foot square. That hard-drive is half a foot (I'll admit that "six inches" was easier on the tongue, but in reality it's closer to seven, which I wouldn't say). What I'm trying to say is that the unit "foot" is very convenient for things that we might be handling in everyday situations, unless those things are hand-sized.

I have similar intuitions but I'm pretty sure I wouldn't if I had been raised on the metric system.

The obvious answer is to figure out what people raised with the metric system are thinking.

I was raised with the metric system and I have to agree with your sentiment. Metric lacks convenient human-sized units. Decimeters are maybe acceptable for lengths, but few people use them. I myself often use feet and inches to describe human-sized objects just because they are more convenient. But as soon as I have to do any kind of work with a quantity beyond pure description, I will swap to metric.

I was raised with the metric system, but I think inches and feet would be better than centimetres (too small) and metres (too large) for lots of everyday situations (plus, they have intuitive anthropocentric approximations, namely the breadth of a thumb and the length of a shoe). The litre is similarly too large IMO. I have no strong opinion about kilos vs pounds. On the other hand, I prefer Celsius to Fahrenheit -- having the melting point of ice at such a memorable value is useful. (But I also like Fahrenheit's 100 meaning something close to human body temperature. I might like a hypothetical scale with freezing at 0 and human body temperature at 100 even more.)


Funnily enough, I was raised with the English system, and use it mainly in everyday life, the only exception being liquid volume, which years of backpacking taught me to think of in terms of 1-liter water bottles.


I was raised on the English system and I have essentially the same intuitions about feet and inches vs. cm and meters and about Celsius vs. Fahrenheit, so there may be something to them.

My understanding (please correct me if I'm wrong) is that British and Canadians are essentially raised on both systems, so perhaps they could comment on which is more naturally intuitive.

One, where have you seen a foot-long shoe? That would be, what, European size 48 or 49? The naming was always curious to me; the unit "foot" is just… noticeably longer than most actual feet.
Two, the metric system's main advantage is easy scalability. Switching from liter to deciliter to centiliter to milliliter is far easier than jumping between gallons, pints, and whatever else is there. That's the main point, not the particular constant you multiply by (i.e. a system with inch, dekainch, and so on would be about as good).
Three, I really see no problem in saying things like "36 centimeters" to describe an object's length. I know that my hand is ~17 centimeters, and I use it as a measurement tool in emergencies, but I always convert back to do any kind of reasonable thinking, I never actually count in "two hands and a phalanx".

Your friend is on the right track. The Fahrenheit system has a smaller unit degree than Celsius/Kelvin (1 degree C = 1.8 degrees F), which gives it more precision when discussing temperatures in casual conversation. It also helps that the range 0 to 100 F corresponds roughly to the usual range of temperatures that humans tend to experience. It's a nice, round range, and it's easy to identify "below zero" or "above 100" as relatively extreme.
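A quick worked conversion, using the standard formula C = (F - 32) / 1.8, shows how awkward the endpoints of that "nice, round range" look in Celsius (the temperatures here are just the ones from the comment above):

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius."""
    return (f - 32) / 1.8

# The 'nice round' Fahrenheit range maps to an awkward Celsius one.
print(round(f_to_c(0), 1))    # -17.8
print(round(f_to_c(100), 1))  #  37.8
```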

As a physicist, I do almost all calculations using the SI/metric system, but I have little intuition for those units in everyday life. Much of that is, I'm sure, having been raised to use imperial units, but they do tend to be better adapted to usual human scales.

I remember someone telling me that Fahrenheit was designed so that the ordinary temperatures people would experience would all fit between 0 and 100 on the scale.

Alas, Wikipedia does not comment.

Indeed, having read the actual justification, the above seems like a just-so story based on a happy coincidence. Powers of 2 clearly explain everything better.

More generally, the words for the non-metric units are often much more convenient than the words for the metric ones. I think this effect is much stronger than any difference in convenience of the actual sizes of the units.

I think it's the main reason why many of the non-metric units are still more popular for everyday use than the metric ones in the UK, even though we've all learned metric at school for the last forty years or so.

This too. Centimetre and kilometre are four syllables each, inch and mile one.

Mile is 1.5 syllables, so to speak, at least as most people I know pronounce it.

In a scientific context I have definitely heard some metric units being given one-syllable pronunciations, for example "mg/ml" as "migs per mil" and mg/kg as "migs per kig".

...is your preferred unit bigger than a breadbox?

What's a breadbox? How big is that?

Well...I've never seen one, but...It has to be bigger than a loaf of bread, right? Otherwise, the bread wouldn't fit inside. And it can't be big enough to hide a body in, or it would definitely be named for that property. So mid size-ish.

If that's all you know, why the hell are you using it as your basis of comparison?

It's just so convenient and fun to say!

The house I lived in in college had a breadbox in which you could hide a body.
At least, it seems that way to me now. I admit I never tested that property at the time.
There was, you see, all this bread in it.

What's a breadbox? How big is that?

Probably not serious, but...

It's a box you keep loaves of home-baked bread in to keep them from going stale. I've only seen a couple in person, but they're about the size of a toaster oven or half the size of a tower-format computer: maybe fourteen inches wide by eight deep and six high, or 35x20x15 cm.

A breadbox used to be a fairly standard kitchen fixture. These days, "Is it bigger than a toaster oven?" might be comparable.

Nitpick: "caliber" has several different meanings, all of which (confusingly) relate to a gun's barrel dimensions. The one you're using is a measure of internal barrel diameter, essentially a shorthand for inches (i.e. .22 caliber); the decimal point often gets dropped in that context, though. It's equally correct to speak of caliber in terms of some other unit, like millimeters. When you're talking about large weapons, though, the word means the length of the weapon's barrel as a multiple of its internal diameter; a tank gun might be 120 mm 55 caliber, making it 6.6 meters long.

The American customary system doesn't as far as I know have a general-use length unit in the millimeter range. There are a couple of typographical units defined in terms of the customary system, but they haven't really escaped into the wild.

The American customary system doesn't as far as I know have a general-use length unit in the millimeter range.

Nope, the equivalent unit is x/2^n inches.

For example, metric wrenches might be 4 mm or 12 mm, while "standard" (imperial) might be 5/32'' or 1/2''.
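A throwaway Python sketch of how closely those sizes line up (the 25.4 mm-per-inch factor is exact by definition; the wrench sizes are the ones mentioned above):

```python
MM_PER_INCH = 25.4  # exact by definition

for frac in ("5/32", "1/2"):
    num, den = (int(x) for x in frac.split("/"))
    inches = num / den
    print(f'{frac}" = {inches * MM_PER_INCH:.2f} mm')
# 5/32" = 3.97 mm   (close to a 4 mm wrench)
# 1/2"  = 12.70 mm  (close to a 12 mm or 13 mm wrench)
```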

Please don't use URL shorteners. I want to upvote this for informativeness but aagh...

I wouldn't have, but the markup here interacts badly with URLs containing close parentheses. If you know of a workaround, I'd be happy to hear it.

EDIT: ...or I could just look at the extended markup help. There, fixed.

Simply put a backslash before the closing parenthesis. I.e. to link to "http://example.com/foo(bar)" type "[link](http://example.com/foo(bar\))".


And besides the English measuring system, what about the English language itself - the language of international diplomacy, which has no rules and must all be learned by memorization? Only Chinese ideograms could be less ordered, and I find myself half-inclined to predict, on this basis, that Chinese will displace English as the international language. That's not how the hypothesis is meant to be used, but still...

I thought about applying this argument to programming languages, but it seems to me that the current most popular language families (C++/Java/Perl/Python/LISP) do genuinely different things and do them reasonably well.

And besides the English measuring system, what about the English language itself - the language of international diplomacy, which has no rules and must all be learned by memorization?

I'm told that English is very easy to learn well enough to be functional, compared to other languages. You're pretty much good to go if you just

  • memorize a few thousand core vocabulary words,
  • put things in SVO order,
  • add -s to pluralize and -ed for past tense,
  • use ascending pitch for questions.

You won't be perfectly grammatical, but you will generally be understandable if you, for example, pluralize every noun by adding -s. There's little need to fuss with irregular conjugation rules, and no grammatical gender. English might (?) be harder to speak flawlessly than many other languages are, but it isn't necessary to master these nuances to make yourself understood in a wide variety of contexts. The distance from zero to functional is smaller than it is for other languages.

At least, I have been told this by several people who learned English as a second language.

(I'm a native Italian speaker with very fluent English and a smattering of Spanish and Irish; I studied French and Latin in high school but I've since forgotten most of them.)

Well, grammar-wise English is way easier than most other European languages (though way harder than creoles, Indonesian, or engineered languages), but the phonology is not that simple even by central/northern European standards (let alone by southern European or Asian standards), and the spelling-to-pronunciation “rules” are about as bad as they could be. (As a result, it's much easier to learn to write English than to speak it, and there are lots of people who would be hard to identify as non-native speakers in formal writing but would be quite hard to understand when speaking.)

English has a much larger and more nuanced vocabulary than any other language. The closest contender is Russian, which sounds terrible, takes forever to say, read, or write anything in, has very different written and spoken forms, and in practice consists of streams of stereotyped phrases, not of words. Compared to Russian, English is child's play. The second closest contender is French, which sounds great, is easy to learn, but is hard to pronounce well. It used to be the language of international diplomacy and did a much better job than English. Spanish would be even better in terms of ease, without the difficulties of pronunciation and with an incredible spoken speed, but lacks the vocabulary of the other contenders. Chinese takes the prize for both spoken and reading speed. Even with ideograms it's easier to learn than Russian. It is the only language that I have tried that I find easy to pronounce (glitch in my brain?). Writing is problematic though. Maybe speech-to-text can fix that, especially given its (also problematic) small vocabulary. Tonality may impair expressivity by reducing the set of avenues for vocal expression.

English has a much larger and more nuanced vocabulary than any other language.

That's a feature, not a bug.

Russian, which sounds terrible ... French, which sounds great

That's in the eye of the beholder (should I say the ear of the listener). To me, Russian sounds OK, but French sounds just awful. (Plus, the pattern of stressing all last syllables makes any kind of poetry sound like military chants to me.)

French ... is hard to pronounce well.

More like it has higher standards for what counts as ‘well’. In my first days in Dublin everyone could understand me despite my then-awful spoken English, to the point one could suspect they had telepathy or something; on the other hand, the French seem unable (or unwilling) to understand at all any less-than-perfect French.

I prefer the sound of Klingon to the sound of French. I'm not kidding.

Also, my general impression (as one who speaks native English and near-native Russian) is that English has more nouns and adjectives with more nuances, but Russian has more, and more nuanced, verbs.

I think the English measurement story is simply one of path dependence. It is entrenched, lots of people know it, and it would cost a lot in infrastructure and learning to switch, just like the QWERTY keyboard. OTOH, the English language has considerable nuance, given the many languages that go into it.

The Pareto distribution argument is in the right direction. Think of a skewed distribution versus a normal one. So, the mean of the normal one might be higher than that of the skewed one. On average it does better, and hence may be more popular. But the skewed one has this tail that does much better than the normal one a non-trivial amount of the time, so that risk lovers are attracted to it. This is not all that different from the argument about how noise traders survive in financial markets. Most go bankrupt, but those who actually did buy low and sell high do better than anybody else in the market and definitely survive.

You survive by surviving. No shortage of ways to achieve it.

the argument about how noise traders survive

Surely the argument you give--that false beliefs can lead to extra risk, increasing expected returns while decreasing expected utility--is older than the noise trader literature?

Douglas Knight,

There is an older literature for sure, but it was largely dismissed during the heyday of the rational expectations revolution in the 1970s and 1980s. The first break was the presidential speech to the American Finance Association in 1985 by Fischer Black, published in 1986 in the Journal of Finance as "Noise," followed by the 1987 stock market crash, and then a pair of rigorous papers in 1990 and 1991 in the QJE and Journal of Business by the formidable quartet of Bradford DeLong, Andrei Shleifer, Lawrence Summers, and Robert Waldmann. They in particular argued that a rational trader must take into account the beliefs and actions of the noise traders, which can then get one into the realm of the old problem, going back to Keynes, of self-fulfilling prophecies. The issue is not false beliefs; it is understanding that the market may be driven by people who are not following long-run fundamentals and taking accurate advantage of their behavior. It is a matter of accurately forecasting a market (in the case of a bubble, accurately forecasting the path of the bubble), and those who do so can and will make more money than the safe, fundamentalist trader who stays away from the bubble.

Barkley Rosser

How good are the bubble forecasters, and can these phenomena be usefully measured quantitatively? My impression from Mandelbrot's book was: not perfectly, yet. I've seen housing market analysis from people like Didier Sornette, who would imply we should have experienced a crash in the UK before now, although there may be special factors here since his analysis, like large-scale immigration, that have contributed to a soft landing. Are you thinking of the more intuitive, but post-rationalising behaviour of a Soros, or the very calculating activities of a Taleb, but who himself seems to value intuition? How large is the population of rational traders? My guess is not that large, and they too must be quite vulnerable to failure and exit from the market, because of the loss aversion of their employers, as Taleb describes.

knackeredhack,

Forecasting markets in general is a mug's game, only made worse when bubbles go on. Go read all the debates about the (maybe) housing bubble going on over at Econbrowser. There is a deep argument, which Jim Hamilton has long been one of the main promulgators of, that one can never identify for certain econometrically if one is in a bubble or not, although one may be able to do so pretty much for certain sometimes with closed-end funds, where there is a well-defined fundamental in the net asset value of the fund.

Regarding Sornette, his model is one of a rational stochastically crashing bubble, which requires a sharp upward acceleration to provide risk premia for the rising probability of the inevitable crash. These tend to go to infinity at a certain point, which is the basis for forecasting the crash that Sornette uses. Of course there is plenty of reason to believe that people in bubbles are not fully rational, and therefore it is not surprising that Sornette has had a rather mixed record in his forecasting.
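For reference, the "sharp upward acceleration" described here is usually written as a log-periodic power law approaching a critical time t_c. The parameterization below is the commonly cited textbook form of Sornette's LPPL model, not a formula taken from the comment above:

```latex
% Log-periodic power-law (LPPL) bubble form commonly attributed to Sornette;
% p(t) is the price, t_c the critical (crash) time, 0 < m < 1,
% and A, B, C, omega, phi are fitted parameters.
\ln p(t) \approx A + B\,(t_c - t)^{m}\left[1 + C\cos\!\big(\omega \ln(t_c - t) + \phi\big)\right]
```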

Well, interesting topic! But I don't agree that the frequentist view is the worst. It seems to me the most objective, the most "scientific" one, the one that best explains the concept of probability. The subjective view doesn't explain much to me, and the propensity view just doesn't seem right, because I don't believe probabilities are inherent to anything; they can vary even for a single object.

Or maybe I should just read more about this whole subject, since my opinions come solely from this article.

If someone disagrees, I'd like to know why. Good to know different points of view =)

I think there's substantial overlap between some of your ideas here and Richard Gabriel's famous "Worse is Better". He wrote several follow-on essays, too, all linked to from there. You might want to have a look at them.

Ryan Law brought up videogames. This past console generation offers a good example. The Playstation 2 was weaker, graphically, than the Gamecube and the Xbox. (It cost more than the former and less than the latter.) Moreover, it was plagued with hardware problems--after a couple of years of use, a huge majority of first-generation PS2 owners reported numerous Disc Read Errors; their PS2s simply stopped reading games. Sony went as far as to remodel the console entirely, on top of settling a major class action lawsuit (in which they admitted no hardware failure).

However, it wildly outsold the Gamecube and Xbox. It was, in a big way, the lowest common denominator. It could play old Playstation games, so users had a built-in library. They could even use their old controllers--a popular, if standard, design based on the Super Nintendo controller. The Gamecube tried a more innovative layout, with a large "home" button and surrounding buttons designed to accommodate thumb-sliding. The Xbox's original controller was considered too big for most users. As for format, the Gamecube used minidiscs. They did not have as much capacity as DVDs, but they loaded much faster. The Xbox used DVDs and a hard drive (which made it the choice console for game pirating). The PS2 played CDs, DVDs, acted as a DVD player right out of the box, and came out before the other systems.

So the PS2 was a very conservative console. It did a lot of comfortable things, and even though Nintendo and Microsoft's systems were unquestionably more powerful, there's no doubt the PS2 "won": it attracted the most software, as the lowest common denominator would be expected to do. Meanwhile, developers who spent their time on the other consoles enjoyed huge game sales from their smaller user bases: Itagaki and Tecmo on the Xbox, Nintendo (of course) on the Gamecube, etc.

As for the generation before that, I think the PlayStation beat the crap out of the Nintendo 64 because it was much easier to pirate games. :-)

Learning and memorization.

Spaced repetition, the testing effect, and the use of mnemonics have not replaced linear, untested study (where you read something over multiple times in a short period, don't test yourself, and don't space the readings).
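For readers who haven't seen it, here is a minimal Python sketch of the scheduling idea behind spaced repetition. It is a deliberately simplified toy rule, not any particular program's actual algorithm:

```python
from datetime import date, timedelta

def next_review(interval_days: int, recalled: bool) -> int:
    """Toy spaced-repetition rule: grow the interval after each successful
    recall, reset it after a failure. Real systems (e.g. SM-2) also track
    per-item difficulty, but the spacing-plus-testing idea is the same."""
    return max(1, interval_days * 2) if recalled else 1

# Example: a card recalled successfully three times, then forgotten once.
interval, today = 1, date.today()
for recalled in (True, True, True, False):
    interval = next_review(interval, recalled)
    today += timedelta(days=interval)
    print(f"next review in {interval:2d} day(s), on {today}")
```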

Isn't this just a special case of Berkson's paradox?

Reverse stupidity is not wisdom. Here we have reversed ad populum (aka the Hipster's Fallacy). Pepsi and Macs are not strictly superior to their more popular counterparts by dint of existing. Rather, their existence is explained by comparative advantage in some cases for some users.