Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ChristianKl 21 August 2017 07:13:41AM 2 points [-]

The ability to express basic nonsurprising facts is useful.

When discussing whether or not to allow abortion of a fetus, it matters whether you believe that real human consciousness needs a certain number of neurons to emerge.

Plenty of people believe in some form of soul, a unit that creates consciousness. Saying that consciousness is emergent means that you disagree.

According to Scott's latest post about EA Global, there are people at the Foundational Research Institute who do ask themselves whether particles can be conscious.

There are plenty of cases where people try to find reductionist ways of thinking about a domain. Calories in, calories out is a common paradigm that drives a lot of thinking about diet. If you instead have a paradigm centered on a cybernetic system with an emergent set point managed by a complex net of neurons, that paradigm gives a different perspective on what to do about weight loss.

Comment author: Viliam 22 August 2017 09:24:59PM *  0 points [-]

Maybe this is just me, but it seems to me like there is a "motte and bailey" game being played with "emergence".

The "motte" is the definition provided here by the defenders of "emergence". An emergent property is any property exhibited by a system composed of pieces, where no individual piece has that property alone. Taking this literally, even "distance between two oranges" is an emergent property of those two oranges. I just somehow do not remember anyone using that word in this sense.

The "bailey" of "emergence" is that it is a mysterious process, which will somehow inevitably happen if you put a lot of pieces together and let them interact randomly. It is somehow important for those pieces to not be arranged in any simple/regular way that would allow us to fully understand their interaction, otherwise the expected effect will not happen. But as long as you close your eyes and arrange those pieces randomly, it is simply a question of having enough pieces in the system for the property to emerge.

For example, the "motte" of "consciousness is an emergent property of neurons" is saying that one neuron is not conscious, but there are some systems of neurons (i.e. brains) which are conscious.

The "bailey" of "consciousness is an emergent property of neurons" is that if you simulate a sufficiently large number of randomly connected neurons on your computer, the system is fated to evolve consciousness. If the consciousness does not appear, it must be because there are not enough neurons, or because the simulation is not fast enough.

In other words, if we consider the space of all possible systems composed of 10^11 neurons, the "motte" version merely says that at least one such system is conscious, while the "bailey" version would predict that actually most of them are conscious, because when you have sufficient complexity, the emergent behavior will appear.

The relevance for LW is that for a believer in "emergence", the problem of creating artificial intelligence (although not necessarily a friendly one) is simply a question of having enough computing power to simulate a sufficiently large number of neurons.

Comment author: Viliam 20 August 2017 09:58:03PM *  2 points [-]

Emergence still feels like a "nonapple". You are right that mass is not an emergent property of quarks, but still, pretty much everything else in this universe is. If I understand it correctly, even "the distance between two specific quarks" is already an emergent property of quarks, because neither of those two quarks contains their distance in itself. So if I say e.g. "consciousness is an emergent property of quarks", I have pretty much said "consciousness is not mass", which is technically true, but still mostly useless. Most of us already expected that.

Similarly, "consciousness is an emergent property of neurons" is only a surprise to those people who expected individual neurons to be conscious. I am sure such people exist. But for the rest of us, what new information does it convey?

The trick is that even if you don't believe that individual neurons are conscious, hearing "consciousness is an emergent property of neurons" still feels like new information. Except there is nothing more there, only the aura of having an explanation.

Comment author: Viliam 19 August 2017 10:40:25PM 1 point [-]

Metadownvoted.

Comment author: username2 14 August 2017 06:25:00PM *  1 point [-]

I'm currently going through a painful divorce so of course I'm starting to look into dating apps as a superficial coping mechanism.

It seems to me that even the modern dating apps like Tinder and Bumble could be made a lot better with a tiny bit of machine learning. After a couple thousand swipes (which doesn't take long), I would think that a machine learning system could get a pretty good sense of my tastes and perhaps some metric of my minimum standards of attractiveness. This is particularly true for a system that has access to all the swiping data across the whole platform.

Since I swipe completely based on superficial appearance without ever reading the bio (like most people), the system wouldn't need to take the biographical information into account, though I suppose it could use that information as well.

The ideal system would quickly learn my preferences in both appearance and personal information and then automatically match me up with the top likely candidates. I know these apps keep track of the response rates of individuals, so matches who tend not to respond often (probably due to being very generally desirable) would be penalized in your personal matchup ranking - again, something machine learning could handle easily.
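A minimal sketch of what that last ranking step could look like. The field names (`p_attraction`, the model's predicted swipe-right probability, and `response_rate`, the fraction of matches a profile replies to) are made up for illustration; no dating app is known to expose them:

```python
# Hypothetical sketch: rank candidates by combining a learned attraction
# score with each candidate's historical response rate, so that highly
# desirable but rarely-responding profiles get pushed down the list.

def rank_candidates(candidates):
    """candidates: list of dicts with hypothetical fields 'name',
    'p_attraction' (predicted swipe-right probability) and
    'response_rate' (fraction of matches they reply to)."""
    # Expected value of a mutual, responsive match:
    # P(I like them) * P(they respond).
    return sorted(candidates,
                  key=lambda c: c["p_attraction"] * c["response_rate"],
                  reverse=True)

profiles = [
    {"name": "A", "p_attraction": 0.9, "response_rate": 0.1},  # desirable, rarely replies
    {"name": "B", "p_attraction": 0.6, "response_rate": 0.8},
    {"name": "C", "p_attraction": 0.3, "response_rate": 0.9},
]
ranking = rank_candidates(profiles)  # B first: likeliest to actually respond
```

The point of the toy numbers: profile A, the most generally desirable one, drops to last place once its low response rate is priced in, which is exactly the penalization described above.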

I find myself wondering why this doesn't already exist.

Comment author: Viliam 16 August 2017 07:16:52PM 1 point [-]

Or maybe some kind of recommendation system: "Users who dated this person also dated these: ..."

Comment author: Lumifer 07 August 2017 03:08:02PM 0 points [-]

Without knowing much physics... "cooling adiabatically" means that both the temperature and the density decrease. If the air inside is not affected by this, it will be warmer, but will also be more dense.

Comment author: Viliam 07 August 2017 08:50:10PM 0 points [-]

I am suspicious about the part where the density decreases. That means, the air expands to a larger volume. But larger in which dimension: horizontally or vertically? I suppose it cannot be horizontally, because next to it there is air in exactly the same situation, which also tries to expand with equal pressure. But if vertically, then what is the difference between being in a (vertical) chimney or being outside of it?

Comment author: morganism 05 August 2017 08:33:11PM 0 points [-]

Geo-engineering with tethers, or tall chimneys. Benefits include electricity production and copious water condensate.

http://www.superchimney.org/

25,000 chimneys will offset Global Warming.
Each chimney will produce electricity.
Each chimney will induce rain generation in surrounding areas.
Each chimney will transform desert into arable land capable of trapping CO2.
Comment author: Viliam 07 August 2017 02:57:50PM 0 points [-]

  1. I don't understand what the difference is between the air inside the chimney and the air outside the chimney. They have the same temperature, so why will the air inside the chimney be rising?

At the bottom of the chimney, the air has the same temperature inside and outside. The inside and outside air will both be rising. However, the air outside will be cooling adiabatically, so its temperature will be dropping. The air inside will not be affected by adiabatic cooling and will maintain its energy, so it will be warmer and less dense than the outside air. And the higher we go, the bigger that difference will be. That difference will be the driving force of the chimney.

I am bad at physics, so I want to ask others whether this part actually makes sense. I strongly suspect it does not.
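For what it's worth, the quoted argument can be put into numbers. A back-of-envelope sketch under the site's own assumptions (outside air cools at the dry adiabatic lapse rate, inside air somehow keeps its ground temperature, and pressure at a given height is the same inside and outside):

```python
import math

# Ideal-gas density comparison at 1 km altitude, under the site's assumptions.
R = 287.0        # specific gas constant of dry air, J/(kg*K)
g = 9.81         # gravitational acceleration, m/s^2
T0 = 288.0       # ground temperature, K (15 degrees C)
p0 = 101325.0    # ground pressure, Pa
lapse = 0.0098   # dry adiabatic lapse rate, K/m (~9.8 K per km)
h = 1000.0       # height, m

T_outside = T0 - lapse * h   # outside air has cooled adiabatically
T_inside = T0                # the site's claim: inside air stays at ground temperature

# Rough barometric pressure at height h (same inside and outside).
p = p0 * math.exp(-g * h / (R * (T0 + T_outside) / 2))

rho_outside = p / (R * T_outside)  # ideal gas law: rho = p / (R * T)
rho_inside = p / (R * T_inside)    # warmer => less dense at equal pressure
```

Under those assumptions the inside air does come out less dense, so the arithmetic of the quote is internally consistent. The suspicious part is the assumption itself: air rising inside the chimney also expands as the pressure drops with height, so it is not obvious why it would escape adiabatic cooling, which is essentially the question raised above.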

Comment author: MrMind 07 August 2017 10:31:39AM 1 point [-]

Which emotions would be easiest?

Sexual attraction...

Comment author: Viliam 07 August 2017 12:52:03PM 1 point [-]

I am imagining how to set up the experiment...

"Sir, I will leave you alone in this room now, with this naked supermodel. She is willing to do anything you want. However, if you can wait for 20 minutes without touching her -- or yourself! -- I will bring you one more."

In response to Inscrutable Ideas
Comment author: Viliam 05 August 2017 02:30:32PM *  5 points [-]

Here is my attempt to summarize "what the meta-rationalists are trying to tell rationalists", as I understood it from the previous discussion, this article, and some articles linked by this article, plus some personal attempts to steelman:

1) Rationalists have a preference for living in far mode, that is studying things instead of experiencing things. They may not endorse this preference explicitly, they may even verbally deny it, but this is what they typically do. It is not a coincidence that so many rationalists complain about akrasia; motivation resides in near mode, which is where rationalists spend very little time. (And the typical reaction of a rationalist facing akrasia is: "I am going to read yet another article or book about 'procrastination equation'; hopefully that will teach me how to become productive!" which is like trying to become fit by reading yet another book on fitness.) At some moment you need to stop learning and start actually doing things, but rationalists usually find yet another excuse for learning a bit more, and there is always something more to learn. They even consider this approach a virtue.

Rationalists are also more likely to listen to people who got their knowledge from studying, as opposed to people who got their knowledge from experience. Incoming information must at least pretend to be scientific, or it will be dismissed without a second thought. In theory, one should update on all available evidence (although not equally strongly), and not double-count any. In practice, one article containing numbers or an equation will always beat unlimited amounts of personal experience.

2) Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don't believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.

Furthermore, meta-rationalists don't really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would not fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose "the current scientific knowledge" as one of your starting maps.)

3) There is an "everything of everythings", exceeding all systems, something like the highest level Tegmark multiverse only much more awesome, which is called "holon", or God, or Buddha. We cannot approach it in far mode, but we can... somehow... fruitfully interact with it in near mode. Rationalists deny it because their preferred far-mode approach is fruitless here. But you can still "get it" without necessarily being able to explain it by words. Maybe it is actually inexplicable by words in principle, because the only sufficiently good explanation for holon/God/Buddha is the holon/God/Buddha itself. If you "get it", you become the Kegan-level-5 meta-rationalist, and everything will start making sense. If you don't "get it", you will probably construct some Kegan-level-4 rationalist verbal argument for why it doesn't make sense at all.

How well did I do here?

Comment author: lifelonglearner 04 August 2017 05:09:55PM 2 points [-]

Hey Viliam! I hope you're not feeling too frustrated by the link + short summary. Some additional context that might help:

I'm currently trying to make a more coherent / readable synthesis of existing work on habits / stuff like this, and that's currently in the editing stage, so hopefully that'll be good for everyone. It just so happened that I bumped into this paper while researching, and I found that it approximated a lot of my thoughts very well, so I thought I'd share it here as an interesting compilation before my content was ready.

Comment author: Viliam 04 August 2017 08:48:37PM 0 points [-]

Thank you, that would be great!

Comment author: Viliam 04 August 2017 01:26:26PM 0 points [-]

I started copying the relevant parts of the article, but... why exactly am I trying to do your homework?

Wouldn't it make more sense for one person who likes the article (and presumably has already read it) to find and copy the good parts, instead of hundred people having to go through dozens of pages, just to see if there is something new and important?

At least in my case, following links and reading long texts just because someone shared them without providing a summary is a huge contributor to akrasia.
