
Comment author: Morendil 02 January 2010 06:00:06PM *  3 points [-]

I expect that Brain-Computer Interfaces will make their way into consumer devices within the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions onto these devices.

Call it 75% - I would be more than mildly surprised if it hadn't happened by 2020.

What counts as BCI, for what I have in mind, is the ability to interact with a smartphone-like device inconspicuously, without using your hands.

My reasoning is similar to Michael Vassar's AR prediction, and based on the iPhone's success. That success doesn't seem owed to any particular technological innovation; rather, Apple made usable things that were previously feasible only in the technical sense. A mobile device for searching the Web, finding your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than before has revealed hidden demand for unsuspected uses, often combining old features in unexpected ways.

However, in many ways these interfaces are still primitive and awkward. "Sixth Sense" type interfaces are interesting, but still strike me as overly intrusive on others' personal space.

It would make sense to me to be able, say, to subvocalize a command such as "Show me the way to metro station X", then have my smartphone gently "tug" me in the right direction as I turn left and right, using a combination of compass and vibrations. This is only one scenario that strikes me as already easy to implement, requiring only slightly tighter integration of existing functionality.
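To make the "easy to implement" claim concrete, here is a minimal sketch of the steering logic in Python. The compass_heading(), gps_position(), and vibrate() functions are hypothetical stand-ins for whatever sensor and haptic APIs a real device would expose:

    import math

    # Hypothetical device API -- stubs standing in for real sensor/haptic calls.
    def compass_heading():
        """Current heading in degrees clockwise from north (stub)."""
        return 90.0

    def gps_position():
        """Current (latitude, longitude) in degrees (stub)."""
        return (48.8566, 2.3522)

    def vibrate(side):
        """Pulse the left or right vibration motor (stub)."""
        print("buzz", side)

    def bearing_to(here, there):
        """Initial great-circle bearing from here to there, in degrees."""
        lat1, lon1 = map(math.radians, here)
        lat2, lon2 = map(math.radians, there)
        dlon = lon2 - lon1
        x = math.sin(dlon) * math.cos(lat2)
        y = (math.cos(lat1) * math.sin(lat2)
             - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
        return math.degrees(math.atan2(x, y)) % 360

    def tug_toward(destination, tolerance=15.0):
        """Buzz left or right until the user faces the destination."""
        # Signed heading error in (-180, 180]: positive means turn right.
        error = (bearing_to(gps_position(), destination)
                 - compass_heading() + 180) % 360 - 180
        if error > tolerance:
            vibrate("right")
        elif error < -tolerance:
            vibrate("left")
        # Within tolerance: stay silent; the user is on course.

In practice you would smooth the compass readings and rate-limit the pulses, but nothing here calls for new hardware; it's just integration work.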

I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities and those provided by versatile external devices connected to the global network, the more we will effectively turn into "augmented humans".

When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (even though, some would say, we can already essentially achieve the same results).

I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.

ETA: I don't mind this comment being downvoted, even as shorthand for "I disagree", but I'd be genuinely curious to know what flaws you're seeing in my thinking, or what facts you're aware of that make my degree of confidence seem way off.

Comment author: Morendil 16 September 2017 02:58:34PM 1 point [-]

By now this looks rather unlikely in the original time-frame, even though there are still encouraging hints from time to time.

Comment author: Gram_Stone 11 January 2017 04:41:39AM 0 points [-]

Additional data point: I see [deleted].

Comment author: Morendil 11 January 2017 08:35:59AM *  0 points [-]

Me, as well.

(Edit: looking at Internet Archive's cached snapshots, all of them that I checked look that way to me too.)

(Edit2: it has looked that way to others as well for quite some time. I wouldn't worry about it.)

Comment author: Morendil 30 December 2016 12:36:21PM 2 points [-]

I'm seeing similarities between this and Goldratt's "Evaporating Cloud". You might find it worthwhile to read up on applications of EC in the literature on Theory of Constraints, if you haven't already.

In response to Be secretly wrong
Comment author: Vaniver 10 December 2016 07:06:48AM 5 points [-]

Moved to main and promoted.

In response to comment by Vaniver on Be secretly wrong
Comment author: Morendil 12 December 2016 05:59:39PM 1 point [-]

Does that mean Main is no longer deprecated?

Comment author: Morendil 28 November 2016 06:41:50PM 2 points [-]

I realize I haven't given a direct answer yet, so here it is: I'm in, if I'm wanted, and if some of the changes discussed here take place. (What it would take to get me on board is, at the least, an explicit editorial policy and people in charge of enforcing it.)

[Link] Newcomb's problem divides philosophers. Which side are you on?

-1 Morendil 28 November 2016 06:34PM
Comment author: sarahconstantin 27 November 2016 10:52:41AM 25 points [-]

Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI; Anna started running CFAR; various others went to work for those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.

One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior; I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I think that human group dynamics potentially won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.

Comment author: Morendil 27 November 2016 11:11:26AM 5 points [-]

There was a sudden exodus of talent, which reduced posting frequency, and took the wind out of the sails.

I'd be wary of post hoc ergo propter hoc in this context. You might also have expected that by leaving for other projects these posters would create a vacuum for others to fill. It could be worth looking at why that didn't happen.

Comment author: sarahconstantin 27 November 2016 10:27:46AM 18 points [-]

1: the general move of the internet away from blogs and forums and towards social media.

In particular, there seems to be a mental move that people make, that I've seen people write about quite frequently, of wanting to move away from the more "official"-seeming forms of online discussion and towards more informal places. From blogging to FB, from FB to Tumblr and Twitter, and thence to Snapchat and other stuff I'm too old for. Basically, people say that they're intimidated to talk on the more official, public channels. I get a sense of people feeling hassled by unfriendly commenters, and also a sense of something like "kids wanting to hang out where the grownups aren't", except that the "kids" here are often adults themselves. A sense that you'll be judged if you do your honest best to write what you actually believe, in front of people who might critique it, and that it's safer to do something that leaves you less exposed, like sharing memes.

I think the "hide, go in the darkness, do things that you can't do by daylight" Dionysian kind of impulse is not totally irrational (a lot of people do have judgmental employers or families) but it's really counterproductive to discourse, which is inherently an Apollonian, daylight kind of activity.

Comment author: Morendil 27 November 2016 10:38:52AM 2 points [-]

Yes, and this would be a general trend - affecting all community blogs to some extent. I was looking for an explanation for the downfall of LessWrong specifically, but I suppose it's also interesting to consider general trends.

Would you say that LessWrong is particularly prone to this effect, and if so, because of what properties?

Comment author: SatvikBeri 27 November 2016 09:59:52AM 4 points [-]

My theory is that the main things that matter are content and enforcement of strong intellectual norms, and both degraded around the time a few major high-status members of the community mostly stopped posting (e.g. Eliezer and Yvain).

The problem with lack of content is obvious; the problem with lack of enforcement is that most discussions are not very good, and it takes a significant amount of feedback to make them better. But it's hard for people to get away with giving subtle criticism unless they're already high-status members of a community, and upvotes/downvotes are just not sufficiently granular.

Comment author: Morendil 27 November 2016 10:33:22AM 8 points [-]

This feels like a good start but one that needs significant improvement too.

For instance, I'm wondering how much of the situation Anna laments is a result of LW lacking an explicit editorial policy. I for one never quite felt sure what was or wasn't relevant for LW - what had a shot at being promoted - and the few posts I wrote here had a tentative aspect to them because of this. I can't yet articulate why I stopped posting, but it may have had something to do with my writing a bunch of substantive posts that were never promoted to Main.

If you look at the home page only (recent articles in Main) you could draw the inference that the main topics on LessWrong are MIRI, CFAR, FHI, and "the LessWrong community", with a side dish of AI safety and startup founder psychology. This doesn't feel aligned with "refining the art of human rationality"; it makes LessWrong feel like more of a corporate blog.
