At LW London last week, someone mentioned the possibility of a Google Glass app doing face recognition on people. If you've met someone before, it tells you their name, how you know them, etc. Someone else mentioned that this could reduce the social capital of people who are already good at this.

A third person said that something similar happened when Facebook started telling everyone when everyone else's birthday was. Previously he got points by making an effort to remember, but those points are no longer available.

Are there other social skills that technology has made obsolete? And the reverse question, which only just occurred to me to ask: are there social skills that are only useful because of technology?

I'm not really sure what sorts of things I'm looking for here. "Ability to ask for directions" seems like one example, but it feels kind of noncentral to me; I don't know why. But I'm mostly just curious.


Ability to communicate through text has become a lot more important.

Pre-telephone, there was an era of correspondence by mail, but the skills are a little different, I think-- by mail, it's important to remember what's going on over days or weeks, and now it isn't.

Ability to communicate through text has become a lot more important.

Agreed. Though some people seem not to have gotten the memo.

I am much better at communicating through text than speech. That sometimes gives me issues at work. I'll write up something as clearly as possible, email it to someone, and they will then walk around to my desk and say, in essence: "I can't be bothered to read that and would prefer to spend more time getting it in a less coherent form, and also interrupt whatever you're doing." It drives me up the wall. And this is from IT people, who should know better.

I would like to blame the culture of phone messaging and twitter for getting people discombobulated whenever they encounter more than a few lines of text, but I'm actually not sure what the cause is. Maybe it's just me.

A charitable explanation for this would be that people are unaccustomed to well-written documents and so are more confident in their interrogation skills than in your writing skills. Even with a well-written document, the reader needs to get into the mindset of the writer, which requires effort, whereas interviewing the writer allows the writer to share some of the mental effort of bridging the gap.

Several uncharitable explanations leap to mind as well, but they don't seem helpful here.


This is interesting, because to me it naturally seems that communicating through speech face-to-face is far superior to text communication. It's the only way to read tonality in a voice as well as body language, which gives a lot of insight into the person's relationship to the material they're communicating. It's faster to shoot quick follow-up questions back and forth (and again, by seeing their response you can see if you're asking the right questions). And the face time can also be used to build rapport and strengthen relationships with the person you're talking to.

To put it another way, communicating through speech is much higher bandwidth.

Granted, I'm not as eloquent when I'm speaking as when I can take the time to compose something, and if you need to have a record of the conversation then text is clearly superior. But I'm surprised you take anything else as an affront.

To put it another way, communicating through speech is much higher bandwidth.

Agreed that speech has higher bandwidth; but (to me at least) it seems to also have a much lower signal to noise ratio.

But I'm surprised you take anything else as an affront.

I don't, in general. Circumstances matter and I don't find people talking to me offensive in itself, even though it's not my preferred form.

I do take it as an affront when I go to considerable effort to be clear about something important, to answer possible questions, to describe alternative options, and the recipient says, in essence: "tl;dr." This forces me to pay the mental cost of articulation twice, for a worse result, and interrupts whatever else I was doing at the time. The effect is especially bad in my profession because many computer tools do not lend themselves to precise verbal description, and a misheard command can be the difference between getting the expected outcome and ending up with a completely hosed system. This sort of thing is why I say IT folks should know better. I think doctors write down prescription instructions for about the same reason. Small mistakes matter. Heading them off is often the explicit reason I'm communicating in writing in the first place.


Looks to be a subtype of the general observation that whoever can establish her authority in an argument wins.

[This comment is no longer endorsed by its author]

I think some people find it easier to focus on voice than text, and/or they just want to feel reassured that someone is paying attention to them.

Similarly, I find that people often insist on communicating information by phone, when the same information could be conveyed by email more quickly, without interrupting other activities on the recipient's end, in a form which leaves a convenient lasting record which can be referred back to as necessary. In fact, I often find people trying to contact me on the phone even when the advantages of email are so pronounced that they're effectively forced to send a followup email restating what they already said on the phone, in the form they should have put it in originally had they not felt compelled to waste both of our time first.

Along with this, I guess that ability to communicate over the phone is becoming less relevant, having been useful for <150 years.

I've seen a few theories suggesting that technology has already encouraged compartmentalization of information: it has become more vital to remember how or where to find data than to remember the data itself, if the data is believed to be readily accessible. (There is some evidence, but the studies have the typical issues of mainstream social psychology.) This would include most forms of social data.

Reputation and related network effects are dramatically different today than in the recent past. I'd expect (95% confidence) that whatever successors we see to smartphones (whether Glass or something else) will continue that trend. I'd expect, albeit with low confidence (~75%), that people have greater ability to connect several different names to one personality. Where it was once a specialty skill (aliases, the professional writing community), it's increasingly common for anyone who wants to maintain an online social presence.

Several skills are somewhat temporary. People are less capable of remembering phone numbers today than they were twenty years ago, who in turn were much more capable of associating seven- or ten-digit strings with names than the people before them (even adjusting for these numbers being built to be memorable). There are a number of other, less generally used versions of such skills: four-triplet memorization is standard among network technicians under IPv4, but will likely degrade as (if) IPv6 becomes common in consumer environments.
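To make the contrast concrete, here is a small Python sketch (using documentation-example addresses, nothing real) showing what would have to be held in one's head under each protocol:

    import ipaddress

    v4 = ipaddress.ip_address("203.0.113.17")                 # four groups of up to three decimal digits
    v6 = ipaddress.ip_address("2001:db8:85a3::8a2e:370:7334")
    print(v4)              # 203.0.113.17
    print(v6.exploded)     # 2001:0db8:85a3:0000:0000:8a2e:0370:7334
    print(v6.compressed)   # 2001:db8:85a3::8a2e:370:7334

Even with zero-compression, eight groups of hex digits are a much heavier memorization load than four decimal triplets.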

There are a number of other, less generally used versions of such skills: four-triplet memorization is standard among network technicians under IPv4, but will likely degrade as (if) IPv6 becomes common in consumer environments.

Google measures global IPv6 usage at 3.5%, up from 1.5% a year ago and 0.65% a year before that. That's more than doubling as a percentage year-over-year.

Adoption is markedly stronger in the US and parts of Western Europe than the rest of the world. 7% of usage from the US, 8% from Germany, and 18% from Belgium is via IPv6; versus 0.39% from Canada and 0.18% from the UK.

IPv6 usage is stronger on weekends than weekdays, which indicates stronger adoption in consumer environments (including mobile) than in workplace environments. I suspect this is due to mobile providers, some of whom hand out v6 addresses to any phones that support the protocol well. (In the US, Verizon Wireless and T-Mobile both push v6 as default on new phones; many end users may be using it without even realizing it.)
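(As a rough check of the "more than doubling" claim, here are the year-over-year ratios of the figures quoted above:)

    # Year-over-year growth ratios of the IPv6 usage figures quoted above.
    usage = [0.65, 1.5, 3.5]   # percent of Google traffic over IPv6, in successive years
    ratios = [later / earlier for earlier, later in zip(usage, usage[1:])]
    print(ratios)              # roughly [2.31, 2.33] -- a bit more than doubling each year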

Google measures global IPv6 usage at 3.5%, up from 1.5% a year ago and 0.65% a year before that. That's more than doubling as a percentage year-over-year.

God I hope that continues. Death to IPv4 and the NAT insanity it makes necessary.

(hrm. I'm trying to replace the word "God" in that sentence with something less incoherent but containing the same sense of emphasis, and coming up blank. I blame Monday. Suggestions, anyone?)

hrm. I'm trying to replace the word "God" in that sentence with something less incoherent but containing the same sense of emphasis, and coming up blank. I blame Monday. Suggestions, anyone?

Cute Kittens I hope that continues.

(Emphasize the kittens like it's a curse word, or it will sound ridiculous. You are not trying to avoid cursing, you are trying to introduce it. Also it will sound ridiculous anyways.)

This is interesting because I somehow managed to not recognize that I was trying to curse. I swear all the time in real life and most places online, but not here. It's not because I'm thinking "I shouldn't swear because it's LW," either; I just don't even think about swearing because it's so dramatically out of place, like using a cell phone during a live theater performance.

I don't understand why NAT is considered bad. Devices on private networks should have private addresses.

Rebuttal: Endpoints that talk to services on public networks are part of the public network, not a private network — even if they are behind middleboxes such as firewalls. Endpoints on the public network should be distinguishable one from another.

Applications should be able to count on an addressing system that distinguishes endpoints, not just networks. That assumption was baked into the design of TCP/IP, allowing the creation of a wide variety of network applications. Many early applications don't work under NAT without application-specific workarounds. NAT has badly encumbered the design of modern applications, to the point where people now assume that there is a hard distinction between "servers" (machines that have public addresses) and "user machines" (that don't).

In the TCP/IP design, hosts are distinguished by addresses, and services on those hosts are distinguished by port numbers. In the NAT non-design, the hosts on a "private" (not really private, that is, not air-gapped) network cannot be distinguished by addresses. As such, application protocols cannot make intelligent use of addresses, and developers of applications intended to run on hosts located in homes and offices are hampered in what they can offer, by having to work around NAT all the time.

NAT conflates several issues, notably security policy and addressing. The ostensible security benefit (disallowing inbound probing of "private" endpoints) can actually be had without losing the benefits of public addressing: it's called a default-deny firewall, it's existed since before NAT, and you can have it even with public addresses behind it. (Though neither NAT nor default-deny firewalls provide general security, especially in the browser era, where endpoints run nearly-arbitrary software they've fetched off the net.)

NAT requires protocol-specific workarounds — either in the middlebox itself, such as port forwarding, or in the application, such as STUN. These deeply encumber application design, in ways that encourage centralization and discourage distributed protocols.
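A minimal sketch of that asymmetry, using plain Python sockets (the port number is an arbitrary example): with a publicly routable address, any peer can reach this listener directly; behind NAT, the identical code is unreachable from outside unless the middlebox forwards the port or the application adds a traversal workaround such as STUN.

    import socket

    # A minimal TCP listener. Reachable by any peer if this host has a public
    # address; behind NAT it is unreachable from outside without a port-forwarding
    # rule on the middlebox or a NAT-traversal workaround (hole punching, STUN).
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", 9000))   # 9000 is an arbitrary example port
    srv.listen(5)
    conn, peer = srv.accept()     # blocks until an inbound connection arrives
    print("connection from", peer)
    conn.close()
    srv.close()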

In gist: https://en.wikipedia.org/wiki/End-to-end_principle

Endpoints that talk to services on public networks are part of the public network, not a private network — even if they are behind middleboxes such as firewalls.

Only if these endpoints service incoming public requests. If my machine, for example, functions solely as an SSH terminal to tunnel into a public server (and has no open ports), I don't see how it can be counted as a "part of the public network" in any meaningful sense.

Endpoints on the public network should be distinguishable one from another.

Yes, but not on internal LANs, which is the whole point of the discussion. From a security point of view, I do NOT want the general public to be able to distinguish and target separate machines on an internal 'net (at least not without putting in some effort for it :-/)

NAT has badly encumbered the design of modern applications

I don't think so. NAT just forced the applications to go up one abstraction layer. That's not necessarily a bad thing.

Besides, in the world of e.g. load balancers and VMs your desire to have a known physical machine sit at a given IP address seems a bit misguided. The endpoints are shifting and fluid nowadays.

Only if these endpoints service incoming public requests.

As I said, NAT puts encumbrances on application design. One of them is "end-user machines only initiate TCP sessions; they don't listen for them." This fits badly to a number of application domains including peer-to-peer protocols generally, games, chat systems, VoIP, and so on. The workarounds have been extensive and expensive. Ever worked with STUN?

Yes, but not on internal LANs which is the whole point of the discussion.

Private networks are IP networks that are air-gapped from the public network. We're talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources. These are not secured from the public network ... especially since current client software (i.e. web browsers) promiscuously makes requests to all sorts of endpoints without checking with the user first.
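(For reference, the RFC 1918 ranges in question, checked with Python's ipaddress module; the address below is just an example:)

    import ipaddress

    # The three RFC 1918 private blocks commonly handed out behind NAT.
    rfc1918 = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    addr = ipaddress.ip_address("192.168.1.25")   # example address
    print(addr.is_private)                        # True
    print(any(addr in net for net in rfc1918))    # True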

A lot of this actually exists for OSI Layer 8 and 9 reasons (the "financial" and "political" layers of the network design). Justifying NAT on the basis of security is a rationalization, since neither does it provide security that couldn't be had without it (via a plain firewall), nor was it actually deployed for security reasons.

From the security point of view, I do NOT want general public to be able to distinguish and target separate machines on an internal 'net (at least without putting in some effort for it :-/)

If security was the only concern, we'd shut the damn thing down and reimplement it in Haskell. It ain't.

But on the other hand, it's a security problem when a security-sensitive service (say, a money-making web server) can't distinguish between an abusive client and an innocent one because they happen to be located behind the same NAT. Denying service to a NAT address that emits abuse allows the abusive client to dictate whether the innocent client gets any service. This is unacceptable to a for-profit service, especially if the two clients and the NAT are not actually under common administration, which they typically aren't today. If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one. IPv6 helps with that, by abolishing address-exhaustion as a justification for NAT.

end-user machines only initiate TCP sessions; they don't listen for them

That's not a misfeature of NAT -- it's adjustable at the router/firewall. Games, chat, etc. work perfectly well given the appropriate configuration of your router.

We're talking about networks which have been assigned private-network (RFC 1918) addresses due to IPv4 address exhaustion and ISP market segmentation — but which gateway onto the public network via NAT and expect to access public-network resources.

Correct, except for the reasons why they were assigned private-network addresses. In the networks I'm familiar with, the machines were assigned RFC 1918 addresses because it's convenient (there's local control over IP assignment), because the network has to deal with machines coming and going (laptops, smartphones), and because many of these machines are not supposed to be accessible from the public internet.

These are not secured from the public network

Sure they are. That is, some of them are, provided the local sysadmin made it so.

To give a trivial example, consider a local database server which does not run any browsers and which responds to (and is supposed to only respond to) just local machines -- easily done if the local machines use private-network IP addresses which are not routable over the general internet.
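A rough sketch of that setup, assuming a hypothetical LAN address: bind the service to an RFC 1918 address, and packets addressed to it simply have no route from the general internet, independent of any firewall rule.

    import socket

    # Hypothetical internal service bound to a private LAN address. Hosts outside
    # the LAN have no route to 192.168.10.5, so only local machines can reach it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("192.168.10.5", 5432))   # example RFC 1918 address and example port
    srv.listen(5)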

If all hosts are distinguishable by address, then the security-sensitive service can accept traffic from a good client and reject traffic from a bad one.

That's a very naive approach. IPv6 is not an immutable GUID given to a piece of hardware once and forever. A MAC address is something close to that and even then it's trivially spoofed.

Consider a scenario where I'm cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your "money-making web server" do about them?

Correct, except for the reasons why they were assigned private-network addresses.

In the case of end-user networks, the reason is simple: end-user ISPs issued only one IPv4 address per customer, under the assumption that the customer would attach only one host to the network. This assumption was sometimes tacit, but sometimes explicit as a matter of contract or support policy. It became increasingly inappropriate for broadband users' actual use.

Customers worked around this by deploying NAT devices. This was sometimes against the ISP's wishes — to the extent that MAC address cloning (where the NAT device takes on the MAC address of the single host formerly attached directly to the public network) remains a common feature of end-user NAT devices; this originated as a way of fooling the ISP's equipment into believing the NAT device was the same machine as the single host it replaced.

It was only subsequent to this that consumer ISPs abandoned the pretense of not supporting multiple hosts in the customer's home — and began selling or leasing NAT devices themselves as a profit center, rather than ignoring or attempting to ban them.

In the case of organizational networks, one typical reason to deploy NAT was (and remains!) address exhaustion: the organization is not able to obtain enough IPv4 addresses from an ISP or registrar to assign a unique public address to each host they wish to attach to the network. Although the ISP doesn't intend to disallow more hosts, it is unwilling or unable to provide the address space for them. In some parts of the world, multiple levels of NAT are deployed to cope with address exhaustion, a situation that cannot be explained as a security measure at all.

Sure they are. That is, some of them are, provided the local sysadmin made it so.

Ah. You're referring to networks that have a "local sysadmin". I'm also considering networks that don't.

(Most don't.)

(But networks with local sysadmins can have default-deny firewalls without needing NAT.)

I'm also considering the situation of developers of end-user networked applications, who have to work with whatever kind of network the user's host happens to be attached to. Those developers have a lot more flexibility — and a lot fewer workarounds to cope with — under publicly-routable v6 than under pervasively-NATted v4.

Consider a scenario where I'm cloning VMs at a rate of, say, one per second and each lives for a couple of minutes. What will your "money-making web server" do about them?

If blocking a single address doesn't work, with v6 the natural next step is to block the /64, the unit of stateless address autoconfiguration — since that's the minimal likely unit of common administration. Yes, that's analogous to blocking a NAT v4 address ... but you don't have to start there.
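A small sketch of that step with Python's ipaddress module (the client address is a documentation example):

    import ipaddress

    # Given one abusive v6 client, the covering /64 -- the usual unit of stateless
    # autoconfiguration, and the minimal likely unit of common administration --
    # is the natural candidate block.
    client = ipaddress.ip_address("2001:db8:1234:5678::42")    # example address
    block = ipaddress.ip_network(f"{client}/64", strict=False)
    print(block)   # 2001:db8:1234:5678::/64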

You're referring to networks that have a "local sysadmin". I'm also considering networks that don't.

You have been making fully general claims about the evilness of NAT, not conditional on whether local networks are (well-)managed or not. I don't think it is as clear-cut as you make it out to be.

The proliferation of behind-the-NAT machines has many reasons -- some historical (as you pointed out, there was/is a shortage of IPv4 addresses and ISPs were stingy with allocating them), but some valid reasons of security, convenience, etc. There are a LOT of internal networks belonging to organizations, most of them should stay behind NAT.

Your basic complaint is that NAT makes life hard for developers of network applications. Yes, it does. Suck it up. Reality is complicated and coding for the real world instead of an abstract model is messy. Yes, it would be nice if everything were simple. No, it's not going to happen.

NAT/PAT as a standard is very limited in its ability to protect against incoming messages. It cannot log errors, it does not process unknown responses, and it does not process unknown 'responses' that are really attacks. There is no verification that incoming messages have valid formats, and minimal or no handling of DoS. Some configurations (NAT without PAT) will intentionally and directly expose all ports on an internal machine to the outside world. And even from an obscurity perspective, it is trivial (and standard!) for servers to distinguish between and identify different machines inside a NAT/PAT network.

In practice, NAT/PAT should provide, and common home implementations will provide, at least some obscurity, but bad firmware and software configuration has left and will continue to leave incoming ports open. Even best-case implementations only provide protection similar to a limited (incoming-only) packet filter, which is really not enough for the average system. You should always run a stateful firewall -- and once you have a stateful firewall, NAT/PAT cannot provide additional security. If you absolutely can't have a firewall, then NAT/PAT is better than nothing... but it's still not very good, or even good enough.

IPv6 does have unique local addresses and a NAT-like feature called NPT, which may be the sort of thing you're thinking about. But NPT avoids many of the worst issues of NAT/PAT configurations, and it exists for ease-of-configuration and ease-of-transfer purposes, with security left to security-focused tools.

NAT/PAT does provide some utility, both in that most (good) NAT/PAT implementations at least give default-deny behavior and in that it makes configuring a network easier. But there are a huge number of resulting issues. The problem is that there isn't a distinction between "internal networks" and "public servers", or between "public servers" and home machines. They're not merely a bad metaphor, but an actively misleading one. The internet /requires/ all devices to be addressable. If your home machine can access any website, it does so by making itself distinguishable from others on the same internal network and leaving itself exposed to return messages.

NAT works well enough in a www-focused environment -- TCP, short connection duration, a limited number of expected simultaneous applications, at least one machine not behind NAT -- but that's not all or even most of the internet. Long-lasting connections, 'connectionless' messages, and any sort of serve-from-home configuration all require fairly complicated workarounds.

It hasn't merely forced folk to go up one abstraction layer; it has pushed them to some vastly suboptimal designs. Connection-oriented protocols like TCP aren't very good solutions when latency matters, or where you're expecting to send only short messages -- but they're essentially required for any 'server'-to-'host' communication. Heartbeat messages should be much more specialized than they currently are: they're common because the typical NAT/PAT will drop a connection mapping if you idle for long at all. Middleware servers and pseudo-VPNs like Hamachi are about the worst possible way to handle secure communications between 'internal networks', but they're an industry because NAT/PAT makes full configuration of sane tools very complicated. And the less said about STUN or UPnP, the better.
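On the heartbeat point, a minimal sketch of the OS-level variant: enable TCP keepalives so idle connections keep refreshing the NAT's mapping. The socket options are Linux-specific, and the timings and host are just examples; many applications instead send their own application-level heartbeats.

    import socket

    # Enable TCP keepalives so the kernel sends periodic probes, keeping a NAT's
    # connection-tracking entry from expiring while the session sits idle.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):                              # Linux-only tuning knobs
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # seconds idle before first probe
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)   # seconds between probes
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)      # failed probes before reset
    s.connect(("example.com", 443))   # example endpoint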

NAT/PAT as a standard is very limited in its ability to protect against incoming messages.

Of course, since it's not its function. Firewalls exist for a reason.

The problem is that there isn't a distinction between "internal networks" and "public servers" or between "public servers" and home machines. They're not merely a bad metaphor, but an actively misleading one.

I disagree. "Home machine" is a silly name which doesn't mean much, but the distinction between internal networks and public servers is rather obvious to me.

The internet /requires/ all devices be addressable.

No, I don't think it does. IP protocol requires an IP address, but that's not the same thing as requiring devices be addressable. Network bridges and intrusion-detection boxes, for example, are devices that are commonly set up as non-addressable.

If your home machine can access any website, it does so by making itself distinguishable from others on the same internal network and leaving itself exposed to return messages.

Let's leave home machines out of it and talk about boxes on an internal LAN. The mapping between IP addresses and machines can be established by middleware and doesn't have to be long-term or permanent. In some cases (e.g. VMs, high availability environments) the end point of a connection can change without the public server being aware of anything at all.

Endpoints not being able to connect to each other makes some functionality costly or impossible. For example, peer to peer distribution systems rely on being able to contact cooperative endpoints. NAT makes that a lot harder, meaning plenty of development and usability costs.

A more mundane example is multiplayer games. When I played warcraft 3, I had lots of issues testing maps I made because no one could connect to games I hosted (I was behind a university NAT, out of my control). I had to rely on sending the map to friends and having them host.

For example, peer to peer distribution systems rely on being able to contact cooperative endpoints.

Unlike what the TCP/IP designers envisioned, the current internet is basically client/server. A client always initiates the exchange and should be isolated from unsolicited access. If necessary, P2P access is a solved problem, and it is properly done by applications at a level higher than TCP/IP anyway.

no one could connect to games I hosted

Arguably the university's NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren't actively against it.

Arguably the university's NAT functioned as intended. They did not provide you with internet access for the purpose of hosting games, even if they weren't actively against it.

The NAT/firewall was there for security reasons, not to police gaming. This was when I lived in residence, so gaming was a legitimate recreational use.

Extremely temporary friendships. I suspect, without demonstrable evidence beyond stories from friends and myself, that location-based networking applications have led to us developing better skills for managing temporary group friendships among travelers and locals. CouchSurfing, AirBnB, Grindr, etc., started out fairly awkward for all involved several years ago, but now it seems to me that people are comfortable and adept with the norms.

Memory has definitely changed with the advent of technology. The ability to acquire information almost instantaneously has reduced the need to remember as much. I also feel that social interaction has changed due to instantaneous communication. People expect others to reply instantly at any time and be available at all times. I know this is a cause of increased stress (http://scholarsarchive.jwu.edu/mba_student/12/) in the workplace and a blurring of traditional work/home life boundaries.

I think the permanence of technological communication is also causing problems and changes in social skills. With the internet, anything that is posted can potentially be recovered at any point in the future. Never before has there been a need for a form of communication that instantly deletes itself (Snapchat, Privnote), at least outside of a military setting.

Technology, particularly the internet and the multitude of television channels, has made it easier and easier for extreme views to flourish. Even if you have views that would be considered immoral, wrong, or evil by most people, it is easy enough to find groups and people who share your ideas and views, as well as to exclude anything that does not fit your view. Take the fact that certain news channels cater to certain ideological standpoints (Fox News, CNN, etc.). No longer do companies need to cater to a middle point or showcase opposing views. The internet even changes based on your beliefs without you knowing (http://www.technologyreview.com/view/522111/how-to-burst-the-filter-bubble-that-protects-us-from-opposing-views/).

Technology has also greatly changed how we speak and talk; for instance, I hear people saying "lol" or Oh-Em-Gee on serious news channels.

A new challenge not present when everyone you knew was from your area: How do you know this person?

With text messages and twitter, conciseness is valued and practiced. Outside of those, typing is faster than handwriting and there is no physical limit on length, just what someone is willing to read.

Due to text communication, personal hygiene and appearance are less important than proper spelling and grammar. It is now possible to have strong connections with people without knowing their real name or what they look like. Avoiding meatspace completely is easy, so skills such as making eye contact, making small talk, and giving a strong handshake decay.

A new challenge not present when everyone you knew was from your area: How do you know this person?

This has become increasingly true for me. There are several people across the world who I regularly converse with, and referring to them in everyday conversation with (meatspace) friends and family tends to be full of qualifiers: e.g. "this friend in the US I met over tumblr", or "that guy from the Magic: The Gathering forum."

When talking about people I know "in real life", I don't feel the need to qualify where I met them.

Recalling a past interaction with a person is about more than "how do I know this person?". It's also about the emotional attachment that you have to the person. It's about whether they did you a favor and you owe them something, or whether you did them a favor and they owe you. Most of the time those relationships are very implicit and not easily captured in notes.

There is also a cost to not paying attention to the other person when you meet them because you're occupied with reading your notes, when the person would expect you to pay attention to them.

Absolutely, but I think that a lot of the time I would only need a small amount of information to trigger better recall. And sometimes there just isn't much to remember. For example, "this is ____, she was in your dance class yesterday" would have saved some momentary embarrassment at a party a month or so back.