A quick sketch of how the Curry-Howard Isomorphism appears to connect Algorithmic Information Theory with ordinal logics

11 [deleted] 19 April 2015 07:35PM

The following is sorta-kinda carried on from a recent comments thread, where I was basically saying I wasn't gonna yack about what I'm thinking until I spent the time to fully formalize it.  Well, Luke got interested in it, and I spewed the entire sketch and intuition to him, and he asked me to put it up where others can participate.  So the following is it.

Basically, Algorithmic Information Theory, as started by Solomonoff and Kolmogorov and then continued by Chaitin, contains a theorem called Chaitin's Incompleteness Theorem, which says (in short, colloquial terms) "you can't prove a 20kg theorem with 10kg of axioms".  Except it says this in fairly precise mathematical terms, all of which are grounded in the undecidability of the Halting Problem.  To possess "more kilograms" of axioms is mathematically equivalent to being able to computationally decide the halting behavior of "more kilograms" of Turing Machines, or to being able to compress strings to smaller sizes.
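For reference, Chaitin's theorem can be stated (in one standard form; the exact constant depends on encoding conventions) roughly as follows: for any consistent, sound, finitely-axiomatized formal system F, there is a constant L_F, bounded by the size of F's axioms plus a machine-dependent constant, such that

```latex
\exists\, L_F \;\; \forall s \in \{0,1\}^{*} : \quad F \nvdash \; \ulcorner K(s) > L_F \urcorner
```

so although infinitely many strings s really do satisfy K(s) > L_F, the system F can never prove this of any particular one.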

Now consider the Curry-Howard Isomorphism, which says that logical systems viewed as computation machines and logical systems viewed as mathematical logics are, in certain precise ways, the same thing: proofs correspond to programs, and propositions to types.  Now consider ordinal logic, as started in Turing's PhD thesis, which begins with ordinary first-order arithmetic and extends it with axioms saying "First-order arithmetic is consistent", "First-order arithmetic extended with the previous axiom is consistent", and so on, all the way up to the first countable limit ordinal omega (and then, I believe but haven't checked, further into the transfinite ordinals).
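In the usual modern presentation (a sketch; Turing actually worked with ordinal notations rather than raw ordinals, and I'm taking first-order arithmetic as the base theory), the progression looks like:

```latex
T_0 = \mathrm{PA}, \qquad
T_{\alpha+1} = T_\alpha + \mathrm{Con}(T_\alpha), \qquad
T_\lambda = \bigcup_{\alpha < \lambda} T_\alpha \quad (\lambda \text{ a limit ordinal}),
```

where Con(T) is the arithmetized statement that T is consistent.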

In a search problem with partial information, as you gain more information you close in on a smaller and smaller portion of your search space.  Thus, Turing's ordinal logics don't violate Goedel's Second Incompleteness Theorem: they specify more axioms, and therefore specify a smaller "search space" of models that are, up to any finite ordinal level, standard models of first-order arithmetic (and therefore genuinely consistent up to precisely that finite ordinal level).  Goedel's Completeness Theorem says that theorems of a first-order theory are provable iff they are true in every model of that theory.  The clearest, least mystical presentation of Goedel's First Incompleteness Theorem is: nonstandard models of first-order arithmetic exist, in which Goedel Sentences are false.  The corresponding statement of Goedel's Second Incompleteness Theorem follows: nonstandard models of first-order arithmetic exist which satisfy the statement "first-order arithmetic is inconsistent".  To capture only the standard models, which satisfy the consistency statement, you need to specify the additional axiom "First-order arithmetic is consistent", and so on up the ordinal hierarchy.

Back to learning and AIT!  Your artificial agent, let us say, starts with a program 10kg large.  Through learning, it acquires, let us say, 10kg of empirical knowledge, giving it 20kg of "mass" in total.  Depending on how precisely we can characterize the bound involved in Chaitin's Incompleteness Theorem (he just said, "there exists a constant L which is a function of the 10kg", more or less), we would then have an agent whose empirical knowledge enables it to reason about a 12kg agent.  It can't reason about the 12kg agent plus the remaining 8kg of empirical knowledge, because that would be 20kg and it's only a 20kg agent even with its strongest empirical data, but it can formally prove universally-quantified theorems about how the 12kg agent will behave as an agent (i.e., its goal functions, the soundness of its reasoning under empirical data, etc.).  So it can then "trust" the 12kg agent, hand its 10kg of empirical data over, shut itself down, and "come back online" as the new 12kg agent, which learns from the remaining 8kg of data, thus becoming a smarter, self-improved agent.  The hope is that the 12kg agent, possessing a stronger mathematical theory, can generalize more quickly and more precisely from its sensory data, thus accumulating empirical knowledge faster than its predecessor, speeding it through the process of compressing all available information provided by its environment and approaching the reasoning power of something like a Solomonoff Inducer (i.e., one with a Turing Oracle to give accurate Kolmogorov complexity numbers).
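The handoff condition sketched above can be written as a toy model.  To be clear, the masses and the slack constant `CHAITIN_SLACK` below are purely illustrative stand-ins for Chaitin's bound, which, as noted, has not been precisely characterized:

```python
CHAITIN_SLACK = 2  # hypothetical stand-in for Chaitin's constant L


def can_verify(verifier_mass, target_mass):
    # Toy version of Chaitin's bound: a system of "mass" m can prove
    # universally-quantified theorems only about agents of mass at
    # most m - CHAITIN_SLACK.
    return target_mass + CHAITIN_SLACK <= verifier_mass


def hand_off(program_mass, data_mass, successor_mass):
    """Return (successor_mass, leftover_data) if the current agent's
    total mass (program + empirical knowledge) suffices to verify the
    successor, else None (no trustworthy handoff is possible)."""
    total = program_mass + data_mass
    if not can_verify(total, successor_mass):
        return None
    # Whatever mass is not baked into the successor's program remains
    # as empirical data for the successor to learn from.
    return successor_mass, total - successor_mass
```

With the post's numbers, a 10kg program plus 10kg of data can verify and become a 12kg agent with 8kg of data left to learn from, but cannot verify a full 20kg successor in one step.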

This is the sketch and the intuition.  As a theory, it does one piece of very convenient work: it explains why we can't solve the Halting Problem in general (we do not possess correct formal systems of infinite size with which to reason about halting), but also explains precisely why we appear to be able to solve it in so many of the cases we "care about" (namely: we are reasoning about programs small enough that our theories are strong enough to decide their halting behavior -- and we discover new formal axioms to describe our environment).

So yeah.  I really have to go now.  Mathematical input and criticism is very welcomed; the inevitable questions to clear things up for people feeling confusion about what's going on will be answered eventually.

Agency and Life Domains

5 Gleb_Tsipursky 16 November 2014 01:38AM

Introduction

The purpose of this essay is to propose an enriched framework of thinking to help optimize the pursuit of agency, the quality of living intentionally. I posit that pursuing and gaining agency involves 3 components:

1. Evaluating reality clearly, to

2. Make effective decisions, that

3. Achieve our short and long-term goals.

In other words, agency refers to the combination of assessing reality accurately and achieving goals effectively, epistemic and instrumental rationality. The essay will first explore the concept of agency more thoroughly, and will then consider the application of this concept in different life domains, by which I mean different life areas such as work, romance, friendships, fitness, leisure, and other domains.

The concepts laid out here sprang from a collaboration between myself and Don Sutterfield, and also discussions with Max Harms, Rita Messer, Carlos Cabrera, Michael Riggs, Ben Thomas, Elissa Fleming, Agnes Vishnevkin, Jeff Dubin, and other members of the Columbus, OH, Rationality Meetup, as well as former members of this Meetup such as Jesse Galef and Erica Edelman. Members of this meetup are also collaborating to organize Intentional Insights, a new nonprofit dedicated to raising the sanity waterline through popularizing Rationality concepts in ways that create cognitive ease for a broad public audience (for more on Intentional Insights, see a fuller description here).

Agency

This section describes a framework of thinking that helps assess reality accurately and achieve goals effectively, in other words gain agency. After all, insofar as human thinking suffers from many biases, working to achieve greater agenty-ness would help us lead better lives. First, I will consider agency in relation to epistemic rationality, and then instrumental rationality: while acknowledging fully that these overlap in some ways, I believe it is helpful to handle them in distinct sections.

This essay proposes that gaining agency from the epistemic perspective involves individuals making an intentional evaluation of their environment and situation, in the moment and more broadly in life, sufficient to understand the full extent of one’s options within it and how these options relate to one’s personal short-term and long-term goals. People often make their decisions, both in the moment and major life decisions, based on socially-prescribed life paths and roles, whether due to the social expectations imposed by others or internalized preconceptions, often a combination of both. Such socially-prescribed life roles limit one’s options and thus the capacity to optimize one’s utility in reaching personal goals and preferences. Instead of going on autopilot in making decisions about one’s options, agency involves intentionally evaluating the full extent of one’s options to pursue the ones most conducive to one’s actual personal goals. To be clear, this may often mean choosing options that are socially prescribed, if they also happen to fit within one’s goal set. This intentional evaluation also means updating one’s beliefs based on evidence and facing the truth of reality even when it may seem ugly.

Gaining agency from the instrumental perspective refers, in this essay, to the ability to achieve one’s short-term and long-term goals. Doing so requires that one first gain a thorough understanding of those goals, through an intentional process of self-evaluation of one’s values, preferences, and intended life course. Next, it involves learning effective strategies to make and carry out decisions conducive to achieving one’s personal goals and thus win at life. In the moment, that involves having an intentional response to situations, as opposed to relying on autopilot reflexes. This statement certainly does not mean going by System 2 at all times, as doing so would lead to rapid ego depletion, whether through actual willpower drain or through other related mechanisms. Agency involves using System 2 to evaluate System 1 and decide when one’s System 1 may be trusted to make good enough decisions and take appropriate actions with minimal oversight, in other words when System 1 has functional cached thinking, feeling, and behavior patterns. In cases where System 1 habits are problematic, agency involves using System 2 to change System 1 habits into more functional ones conducive to one’s goal set, changing not only behaviors but also one's emotions and thoughts. For the long term, agency involves intentionally making plans about one’s time and activities so that one can accomplish one’s goals. This involves learning about and adopting intentional strategies for discovering, setting, and achieving your goals, and implementing these strategies effectively in your life on a daily level.

Life Domains

Much of the discourse on agency in Rationality circles focuses on this notion as a broad category, and the level of agenty-ness for any individual is treated as a single point on a broad continuum of agency (she’s highly agenty, 8/10; he’s not very agenty, 3/10). After all, if someone has a thorough understanding of the concept of agency as demonstrated by the way they talk about agency and goal achievement, combined with their actual abilities to solve problems and achieve their goals in life domains such as their career or romantic relationships, then that qualifies that individual as a pretty high-level agent, right? Indeed, this is what I and others in the Columbus Rationality Meetup believed in the past about agency.

However, in an insight that now seems obvious to us (hello, hindsight bias) and may seem obvious to you after reading this post, we have come to understand that this is far from the case: just because someone has a high level of agency and success in one life domain does not mean that they have agency in other domains. Our previous belief, that those who understand the concept of agency well and seem highly agenty in one life domain must be similarly agenty everywhere, created a dangerous halo effect in evaluating individuals. This halo effect led to highly problematic predictions and normative expectations about the capacities of others, which undermined social relationships through misunderstandings, conflicts, and general interpersonal stress. It also led to highly problematic predictions and normative expectations about ourselves, as inflated conceptions of our capacities in a given life domain led to mistaken optimization efforts and consequent losses of time, energy, and motivation, along with personal stress.

Since that realization, we have come across studies on the difference between rationality and intelligence, as well as on broader re-evaluations of dual process theory, and also on the difference between task-oriented thinking and socio-relationship thinking, indicating the usefulness of parsing out the heuristic of “smart” and “rational,” and examining the various skills and abilities covered by that term. However, such research has not yet explored how significant skill in rational thinking and agency in one life domain may (or may not) transfer to those same skills and abilities in other areas of life. In other words, individuals may not be intentional and agenty about their application of rational thinking across various life domains, something that might be conveyed through the term “intentionality quotient.” So let me tell you a bit about ourselves as case studies in how the concept of domains of agency has proved to be useful in thinking rationally about our lives and gaining agency more quickly and effectively in varied domains.

For example, I have a high level of agency in my career area and in time management and organization, both knowing quite a lot about these areas and achieving my goals within them pretty well. Moreover, I am thoroughly familiar with the concept of agency, both from the Rationality perspective and from my own academic research. From that, I and others who know me expect me to express high levels of agency across all of my life domains.

However, I have many challenges in being rational about maximizing my utility gains in relationships with others. Only relatively recently, within the last couple of years or so, have I begun to consider and pursue intentional efforts to reflect on the value that relationships with others have for my life. These intentional efforts resulted from conversations with members of the Columbus Rationality Meetup about their own approaches to relationships, and from reading Less Wrong posts on the topic. As a result of these efforts, I have begun to deliberately invest resources into cultivating some relationships while withdrawing from others. My System 1 self still has a pretty strong ugh field about doing the latter, and my System 2 has to have a very serious talk with my System 1 every time I make a move to distance myself from extant relationships that no longer serve me well.

This personal example illustrates one major reason why people who have a high level of agency in one life domain may not have it in another life domain. Namely, “ugh” fields and cached thinking patterns prevent many who are quite rational and utility-optimizing in certain domains from applying the same level of intentional analysis to another life domain. For myself, as an introverted bookish child, I had few friends. This was further exacerbated by my family’s immigration to the United States from the former Soviet Union when I was 10, with the consequent deep disruption of interpersonal social development. Thus, my cached beliefs about relationships and my role in them served me poorly in optimizing relationship utility, and only with significant struggle can I apply rational analysis and intentional decision-making to my relationship circles. Still, since starting to apply rationality to my relationships here, I have substantially leveled up my abilities in that domain.

Another major reason why people who have a high level of agency in one life domain may not have it in another life domain results from the fact that people have domain-specific vulnerabilities to specific kinds of biases and cognitive distortions. For example, despite knowing quite a bit about self-control and willpower management, I suffer from challenges managing impulse control over food. I have worked to apply both rational analysis and proven habit management and change strategies to modify my vulnerability to the Kryptonite of food and especially sweets. I know well what I should be doing to exhibit greater agency in that field and have made very slow progress, but the challenges in that domain continually surprise me.

My assessment of my level of agency, which sprang from the areas where I had high agency, caused me to greatly overestimate my ability to optimize in areas where I had low levels of agency, e.g., in relationships and impulse control. As a result, I applied incorrect strategies to level up in those domains, and caused myself a great deal of unnecessary stress, and much loss of time, energy, and motivation.

My realization of the differentiated agency I had across different domains resulted in much more accurate evaluations and optimization strategies. For some domains, such as relationships, the problem resulted primarily from a lack of rational self-reflection. This suggests one major fix to differentiated levels of agency across different life domains – namely, a project that involves rationally evaluating one’s utility optimization in each life area. For some domains, the problem stems from domain-specific vulnerability to certain biases, and that requires applying self-awareness, data gathering, and tolerance toward one’s personally slow optimization in these areas.

My evaluation of the levels of agency of others underwent a similar transformation after the realization that they had different levels of agency in different life domains. Previously, mistaken assessments resulting from the halo effect about agency undermined my social relationships through misunderstandings, conflicts, and general interpersonal stress. For instance, before this realization I found it difficult to understand how one member of the Columbus Rationality Meetup excelled in some life areas, such as managing relationships and social interactions, but suffered from deep challenges in time management and organization. Caring about this individual deeply as a close friend and collaborator, I invested much time and energy resources to help improve this life domain. The painfully slow improvement and many setbacks experienced by this individual caused me to experience much frustration and stress, and resulted in conflicts and tensions between us. However, after making the discovery of differentiated agency across domains, I realized that not only was such frustration misplaced, but that the strategies I was suggesting were targeted too high for this individual, in this domain. A much more accurate assessment of his current capacities and the actual efforts required to level up resulted in much less interpersonal stress and much more effective strategies that helped this individual. Besides myself, other Columbus Rationality Meetup members have experienced similar benefits in applying this paradigm to themselves and to others.

Final Thoughts

To sum up, this essay provided an overview and some strategies for achieving greater agency - a highly instrumental framework of thinking that helps empower individuals to optimize their ability to assess reality accurately and achieve goals effectively. The essay in particular aims to enrich current discourse on agency by highlighting how individuals have different levels of agency across various life domains, and underscoring the epistemic and instrumental implications of this perspective on agency. While the strategies listed above help achieve specific skills and abilities required to gain greater agency, I would suggest that one can benefit greatly from tying positive emotions to the framework of thinking about agency described above. For instance, one might think to one’s self, “It is awesome to take an appropriately fine grained perspective on how agency works, and I’m awesome for dedicating cycles to that project.” Doing so motivates one’s System 1 to pursue increasing levels of agency: it’s the emotionally rational step to assess reality accurately, achieve goals effectively, and thus gain greater agency in all life domains.

 

 

 

 

To like, or not to like?

2 PhilGoetz 14 November 2013 02:26AM

Do you like Shakespeare?

I've been reading the Paris Review interviews with famous authors of the 20th century. Famous authors don't always like other famous authors. Hemingway, Faulkner, Joyce, Fitzgerald — for all of them, you could find some famous author who found them unreadable. (Especially Joyce and Faulkner.)

Except Shakespeare. Everyone loved Shakespeare. In fact, those who mentioned Shakespeare sometimes said he was the best author who ever lived.

How likely is this?


Good movies for rationalists?

0 roland 09 November 2013 08:00AM

Hi,

what good movies can you suggest that give ideas or inspirations on how to be more rational?

I just watched [Memento](https://en.wikipedia.org/wiki/Memento_%28film%29) last night and I was very impressed.

(No spoilers in this post)

The main character is a guy who suffers from amnesia: he forgets everything after a couple of minutes, so he has developed a system to cope with it. He takes pictures and writes notes. E.g., when staying at a hotel he takes a picture of it and puts it in his pocket. So later, when he doesn't know where he is staying, he searches his pockets, finds the picture of the hotel, and then he knows.

What I learned

I identified with the character in the movie because, in spite of not having amnesia, my memory, like everyone else's, isn't perfect either, and I have all the quirks (biases) of a normal human brain. I can't exactly remember what I did last Thursday at 3 PM. Do I actually know why I am doing what I'm doing, or why I believe what I believe? I may have good rationalizations for both, of course, but that doesn't mean they are the real reasons.

I like to read LW but I haven't developed much of a system to actually be more rational. If anyone has, I would be eager to read about it.

Practical Advice

What system could I develop to be more rational? One thing that a lot of management experts (e.g. Peter Drucker) have already pointed out is to write down how we actually spend our time, because how we spend it is often not how we think we spend it, and we end up spending much more time on unproductive activities than we are aware of. How much time went into random internet browsing last week?

I will start an activity log during work: how much time I'm spending on what. This will be a first step.
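A minimal version of such a log could be sketched in a few lines of Python (the CSV format and the function name here are just one possible choice, not a recommendation from the post):

```python
import csv
import datetime
import io


def log_activity(stream, activity):
    # Append one timestamped row to the log; 'stream' is any writable
    # file-like object (a real file in practice, StringIO here for demo).
    writer = csv.writer(stream)
    writer.writerow([datetime.datetime.now().isoformat(timespec="minutes"),
                     activity])


# Demo: log two work activities to an in-memory buffer.
buf = io.StringIO()
log_activity(buf, "writing report")
log_activity(buf, "random internet browsing")
```

Reviewing the accumulated rows at the end of the week is then a matter of summing durations per activity, however one chooses to do that.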

 

 

Emotional Basilisks

-2 OrphanWilde 28 June 2013 09:10PM

Suppose it is absolutely true that atheism has a negative impact on your happiness and lifespan.  Suppose furthermore that you are the first person in your society of relatively happy theists who happened upon the idea of atheism, and moreover found absolute proof of its correctness, and quietly studied its effects on a small group of people kept isolated from the general population, and you discover that it has negative effects on happiness and lifespan.  Suppose that it -does- free people from a considerable amount of time wasted - from your perspective as a newfound atheist - in theistic theater.

Would you spread the idea?

This is, in our theoretical society, the emotional equivalent of a nuclear weapon; the group you tested it on is now comparatively crippled with existentialism and doubt, and many are beginning to doubt that the continued existence of human beings is even a good thing.  This is, for all intents and purposes, a basilisk, the mere knowledge of which causes its knower severe harm.  Is it, in fact, a good idea to go around talking about this revolutionary new idea, which makes everybody who learns it slightly less happy?  Would it be a -better- idea to form a secret society to go around talking to bright people likely to discover it themselves to try to keep this new idea quiet?

(Please don't fight the hypothetical here.  I know the evidence isn't nearly so perfect that atheism does in fact cause harm, as all the studies I've personally seen which suggest as much have some methodological flaws.  This is merely a question of whether "That which can be destroyed by the truth should be" is, in fact, a useful position to take, in view of ideas which may actually be harmful.)

Newbomb's parabox

-9 Locaha 01 July 2013 01:51PM

Excuse the horrible terribad pun...

 

An evil Omega has locked you in a box. Inside, there is a bomb and a button. Omega informs you that in an hour the bomb will explode, unless you do the opposite of what Omega predicted you will do. Namely, press the button if it predicted you won't or vice versa. In that case, the bomb won't explode and the box will open, letting you free.

Your actions?

 

PS. You have no chance to survive make your time.

PPS. Quick! Omega predicted that in exactly 5 seconds from now, you will blink. Your actions?

PPPS. Omega vs. Quantum Weather Butterfly. The battle of the Eon!

Living in the shadow of superintelligence

0 Mitchell_Porter 24 June 2013 12:06PM

Although Less Wrong regularly discusses the possibility of superintelligences with the power to transform the universe in the service of some value system - whether that value system is paperclip maximization or some elusive extrapolation of human values - it seems it has never systematically discussed the possibility that we are already within the domain of some superintelligence, and what that would imply. So how about it? What are the possibilities, what are the probabilities, and how should they affect our choices?

Karma as Money

0 diegocaleiro 02 June 2013 01:46AM

How do you gather a theory of Counterfactuals, Karma, and Economics, into a revised algorithm for thinking about Lesswrong?

Thinking of Karma as money. 

There are a lot of things that one may consider worth saying on Lesswrong. Things that go against the agenda, things that may make people uncomfortable, things that are different from what the high-ranking officials would prefer to read here. But we don't do it, because we don't want to "lose" precious Karma points. Each Karma point loss is felt as an insecurity, as a tiny arrow penetrating the chest.  But should it be that way? 

Here is the alternative: Think of Karma as money. You work hard for getting a few karma points by writing interesting stuff on superintelligence and whatnot, society rewards you by paying some karma points. Then you go there and write something you think people need to hear, but will downvote for sure, at least initially. Some people by now will be very rich, which affords them the opportunity of saying a lot of things that they are not sure will get themselves upvoted, but are sure should be posted.

Citizen: Wait, you said counterfactuals...

Yes, just like your State doesn't really care for or like you going out in your hovercraft on the river and using equipment to climb a mountain, the people here may not care to put attention into that idea which you think they should hear. Thus, they downvote it. They make you pay for their attention. If you mentalize it as "they are draining my soul, and life is worthless if karma is negative", then you are much less likely to end up posting something controversial that may be counterfactually relevant.

Just like efficient charity works because the vast majority of people are not paying to effectively make others happier, using karma as money works because the vast majority of people are afraid their soul is being sucked away every time a downvote comes. But it isn't: this is just the price people charge for their attention, if you think the way I'm tentatively suggesting.  It is just a test worth trying, not necessarily something that I fully endorse. I like the idea, and have been using it since forever. Every post linked here, or an earlier subpart of it, has been negative at some point, and from before posting, I knew it would be a "costly one".  Try it: if you are rich, you may not have much to lose, and more controversial but useful stuff will show up with time.  

Let's see how much this costs. 

Sorting Comments

-11 troll 24 April 2013 04:35PM

I'd like to be able to sort comments as a list, too.

An example of this would be sorting comments by new as a list so all the comments you read will be new ones.

Worth remembering (when comparing ‘the US’ to ‘Europe’)

10 Curiousguy 13 April 2013 08:35PM

I posted the post below on my own blog a while back, but a friend of mine suggested that it might be a good idea to cross-post a little bit of my stuff here as well. I've made a few changes to the post at the bottom, but it's pretty much the same post as the one I posted back then. Okay, here goes:

...

People often note that it’s a bad idea to compare small European countries with a country that is so big that it is comparable in size to the continent that the small country is a part of. I’ll go into a bit more detail about the differences in this post.

So, in a comment I left over at MR I noted that:

‘The United States is 3 times as big as EU-15 used to be, and EU-15 included pretty much all of the countries in Western Europe that people from the US like to compare to their own country (Italy, Germany, Spain, France, UK, Sweden…)’

Here’s the map:

It’s not ‘completely true’, but it’s very close – the area of EU-15 was 3,367,154 km2, while the area of the United States is 9.83 million km2.

Some more random numbers, I used wikipedia’s numbers and I couldn’t be bothered to add links because it would have taken forever and nobody would follow them anyway – you can look it up if something sounds really wrong. Texas: 696,200 km2. France: 674,843 km2. (Metropolitan France – i.e. ‘France-France (+Corsica)’: 551,695 km2). Spain: 504,030 km2. California: 423,970 km2. Germany: 357,021 km2. Denmark: 43,075 km2. Netherlands: 41,543 km2.

The red bit in the picture below is larger than any country in Europe which is not Russia (or another way to visualize it: That bit is actually significantly larger than the Iberian Peninsula in the map above). Maybe the scales aren’t completely similar, but they’re actually not really that far off:

If you take a trip in Europe from Venezia, Italy to Amsterdam, Netherlands, you’ll travel ~1,200-1,300 kilometers depending on the route. The length and width of Texas are both in the neighbourhood of ~1,250 km.

Now, Arizona is another southern US state, with an area of 295,254 km2 and a population of 6.4 million people. The Netherlands’ population is estimated at 16.85 million. If you combine the populations of the Netherlands (16.85 million), Denmark (5.5 million) and Belgium (11 million), those 33 million people are distributed over an area of ~115,000 km2. The (smaller) combined populations of Texas (25.1 million) and Arizona (6.4 million) have roughly a million square kilometers to deal with.

Does it make better sense to compare Texas with France, and those small countries with, say, the state of New York? It probably would. But it’s really hard to find good matches here, in particular due to the problem of population density differences. If you do find areas that match on this metric, odds are they don’t exactly match on other key metrics. The population density of the United States as a whole is 33.7/km2. If you scale that up by a factor of ten, you get to the third most densely populated state, Massachusetts (324.1/km2). The population density of Massachusetts is somewhat lower than both Belgium’s (354.7/km2) and the Netherlands’ (403/km2). The population density of Germany (229/km2) is comparable to that of Maryland (229.7/km2), which is in the US top five – Germany is almost 7 times as densely populated as ‘the US as a whole’. The population density of Great Britain is 277/km2, comparable to Connecticut’s (285.0/km2) – the state of Connecticut is, btw., #4 on the US list. Italy is at 201.2/km2, between Delaware and Maryland – it would be in the top six if it were a US state. Americans like to use the expression ‘France and Germany’, but at least in terms of population density, there’s a huge difference between these two countries that I’m not sure they’re aware of: the population density of France is much lower (116/km2) than that of Germany, and rather more comparable to that of Spain (93/km2). All US states outside the top ten have population densities well below 100/km2, so note that even though Spain and France are relatively sparsely populated in a Western European context, France would be well within the top 10 and Spain just outside the top 10 if the two countries were US states. The average population density of the entire European Union, including a lot of Eastern European countries most Americans couldn’t find on a map, is about the same as that of France, 116.2/km2 – 3.5 times as high as the US average.
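The ratios quoted above are easy to sanity-check; a quick sketch using the post's own figures (people per km2):

```python
# Population densities in people per km^2, as quoted in the post.
density = {
    "US": 33.7,
    "Massachusetts": 324.1,
    "Belgium": 354.7,
    "Germany": 229.0,
    "Maryland": 229.7,
    "France": 116.0,
    "EU": 116.2,
}

# Germany vs. the US as a whole: "almost 7 times as densely populated".
germany_vs_us = density["Germany"] / density["US"]

# EU average vs. US average: "3.5 times as high".
eu_vs_us = density["EU"] / density["US"]
```

Both ratios come out as claimed: roughly 6.8 for Germany and roughly 3.4 for the EU average.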

The population density of Iceland is 3.1/km2. As mentioned, the US average is 33.7/km2 and Belgium’s density is 354.7/km2. Remember these magnitudes. And yes, I know that the US population density is not homogeneous and that a lot of the country is almost empty. The population density of Europe isn’t homogeneous either – to take an example, approximately one eighth of the German population – 10 million people – live in the very small Rhine-Ruhr metropolitan region (7,110 square kilometers, or less than 2% of the area). A fifth (12+ million) of the French population live in the Paris metropolitan area. On the other hand, the population density of Norway, which even though it is a bit of an outlier is still very much a part of Western Europe, is 12.5/km2, comparable on that metric to, say, Nevada (9.02/km2) in the US.

If you look at differences in the US internally, when it comes to the 10 most densely populated states the one that is situated the most to the west of these is Ohio (the state border of which is still within 500 km of the Atlantic Ocean). Here’s a map:

Remember here that these numbers are people/sq mile, so to compare them with the rest of the numbers in this post you need to divide by ~2.6 (1 square mile is about 2.59 km2). I found this comparable map of Europe convenient both because it gives density limits in sq. miles and because it’s a lot more fine-grained than just data on the national level:

Last of all: Languages! Here’s the European map:

Let’s just say that a map of the US would look different. Yeah, a lot has been written about the Spanish/English thing going on in the US. Well, intranational language barriers and linguistic diversity aren’t exactly unknown phenomena in Europe either, despite the small size of the countries involved. A thing worth remembering here is that in many of the bilingual regions of Europe highlighted here, English is the third language. If you’re a US tourist visiting some European bilingual region and you’re annoyed people don’t speak much English, ask yourself how many areas of the US you can think of where people can hold conversations in, say, English, Spanish, and French.

Did you know that 90 percent of the human population lives on the Northern Hemisphere? I didn’t, before I wrote this.
