
"Free will" being a illusion fits pretty well with the simulation hypothesis.

-12 dedman 10 May 2016 11:10AM

Similar to a game of The Sims, the characters' actions are chosen in advance.

A string of actions where your last action affects the next one, and where actions can be cancelled out and changed.

Your next action is to prepare a meal. You walk to the kitchen to start preparing the meal, open the fridge, and notice you don't have any food. The action is now cancelled and replaced with "Go to the store to buy food".
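A toy sketch in Python of this cancel-and-replace chain (purely illustrative: the action names and the `fridge_has_food` flag are made up, and this is of course not how The Sims is actually implemented):

```python
# A queued plan of actions; the world state can cancel an action
# and splice in a replacement chain, as in the meal example above.
plan = ["walk to kitchen", "prepare meal", "eat meal"]
world = {"fridge_has_food": False}

def next_action(plan, world):
    """Pop the next queued action; world state may cancel and replace it."""
    action = plan.pop(0)
    if action == "prepare meal" and not world["fridge_has_food"]:
        # Cancel the planned action and substitute a new chain.
        plan[:0] = ["go to the store to buy food", "prepare meal"]
        return "cancelled: prepare meal"
    if action == "go to the store to buy food":
        world["fridge_has_food"] = True
    return action

while plan:
    print(next_action(plan, world))
```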

Thermodynamics of Intelligence and Cognitive Enhancement

8 CasioTheSane 03 April 2014 11:17PM

Introduction

Brain energy is often confused with motivation, but these are two distinct phenomena. Brain energy is the actual metabolic energy available to neurons, in the form of adenosine triphosphate (ATP) molecules. ATP is the "energy currency" of the cell, and is produced primarily by oxidative metabolism of food. High motivation increases the use of this energy, but in the absence of sufficient metabolic capacity it eventually results in stress, depression, and burnout, as seen in manic depression. Most attempts at cognitive enhancement only address the motivation side of the equation.

The “smart drug” culture has generally been thinking pharmaceutically rather than biologically. Behind that pharmaceutical orientation there is sometimes the idea that the individual just isn't trying hard enough, or doesn't have quite the right genes to excel mentally.

-Ray Peat, PhD

Cellular Thermodynamics

Any simple major enhancement to human intelligence is a net evolutionary disadvantage.

-Eliezer Yudkowsky (Algernon’s Law)

I propose that this constraint is imposed by the energy cost of intelligence. The conventional textbook view of neurology suggests that much of the brain's energy is "wasted" in overcoming the constant diffusion of ions across the membranes of neurons that aren't actively in use. This is necessary to keep the neurons in a 'ready state' to fire when called upon.

Why haven't we evolved some mechanism to control this massive waste of energy?

The Association-Induction hypothesis formulated by Gilbert Ling is an alternate view of cell function, which suggests a distinct functional role for energy within the cell. I won't review it in detail here, but you can find an easy-to-understand and comprehensive introduction to this hypothesis in the book "Cells, Gels and the Engines of Life" by Gerald H. Pollack. This idea has a long history and considerable experimental evidence, too extensive to review in this article.

The Association-Induction hypothesis states that ion exclusion in the cell is maintained by the structural ordering of water within the cytoplasm, through an interaction between the cytoskeletal proteins, water molecules, and ATP. Energy (in the form of ATP) is used to unfold proteins, presenting a regular pattern of surface charges to cell water. This orders the cell water into a 'gel-like' phase which excludes specific ions, because their presence within the structure is energetically unfavorable. Other ions are selectively retained, because they are adsorbed to charged sites on protein surfaces. This structured state can be maintained with no additional energy. When a neuron fires, this organization collapses, releasing energy and performing work. The neuron uses significant energy only to restore this structured, low-entropy state after firing.

This figure (borrowed from Gilbert Ling) summarizes this phenomenon, showing a folded protein (on the left) and an unfolded protein creating a low-entropy gel (on the right).


To summarize, maintaining the low-entropy living state in a non-firing neuron requires little energy. This implies that the brain may already be very efficient, with nearly all energy used to function, grow, and adapt rather than to pump the same ions 'uphill' over and over.

Cost of Intelligence

To quote Eliezer Yudkowsky again, "the evolutionary reasons for this are so obvious as to be worth belaboring." Mammalian brains may already be nearly as efficient as their physics and structure allow, and any increase in intelligence comes with a corresponding increase in energy demand. Brain energy consumption appears correlated with intelligence across different mammals, and humans have unusually high energy requirements due to our intelligence and brain size.

Therefore, if an organism is going to compete while having greater intelligence, it must be in a situation where this extra intelligence offers a competitive advantage. Once intelligence is adequate to meet the demands of survival in a given environment, extra intelligence merely imposes unnecessary nutritional requirements.

These thermodynamic realities of intelligence lead to the following corollary to Algernon’s Law:

Any increase in intelligence implies a corresponding increase in brain energy consumption.

Potential Implications

What is called genius is the abundance of life and health.

-Henry David Thoreau

This idea can be applied both to evaluate nootropics and to understand and treat cognitive problems. It's unlikely that any drug will increase intelligence without adverse effects unless it also acts to increase energy availability in the brain. From this perspective, we can categorically exclude any nootropic approach that fails to increase oxidative metabolism in the brain.

This idea shifts the search for nootropics from neurotransmitter-like drugs that improve focus and motivation to compounds that regulate and support oxidative metabolism, such as glucose, thyroid hormones, some steroid hormones, cholesterol, oxygen, carbon dioxide, and enzyme cofactors.

Why haven't we already found that these substances increase intelligence?

Deficiencies in all of these substances do reduce intelligence. Raising brain metabolism above normal healthy levels, however, should be expected to be a complex problem because of the interrelation among the molecules required to support metabolism:

If you increase oxidative metabolism, the demand for all raw materials of metabolism increases correspondingly. Any single deficiency poses a bottleneck, and may produce the opposite of the intended effect.

So this suggests a 'systems biology' approach to cognitive enhancement. It's necessary to consider how metabolism is regulated, and what substrates it requires. To raise intelligence in a safe and effective way, all of these substrates must have increased availability to the neuron, in appropriate ratios.
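As a rough illustration of the bottleneck point, here is a toy Python model (my own sketch, with made-up numbers, in the spirit of Liebig's law of the minimum): the achievable metabolic rate is capped by the scarcest required substrate, so raising demand without raising the limiting supply only widens the gap.

```python
# Toy bottleneck model (illustrative only): treat the achievable
# metabolic rate as limited by the scarcest required substrate.
# Values are made-up fractions of the level needed at the target rate.
substrates = {
    "glucose": 1.2,
    "oxygen": 1.1,
    "thyroid hormone (T3)": 1.3,
    "enzyme cofactors": 0.6,   # a single deficiency
}

def achievable_rate(substrates):
    """The rate is capped by the most deficient substrate."""
    limiting = min(substrates, key=substrates.get)
    return substrates[limiting], limiting

rate, limiting = achievable_rate(substrates)
print(f"capped at {rate:.0%} of target; limited by {limiting}")
```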

I am always leery of drawing analogies between brains and computers, but this approach to cognitive enhancement is very loosely analogous to over-clocking a CPU. Over-clocking requires raising both the clock rate and the energy availability (voltage). In the case of the brain, the effective 'clock rate' is controlled by hormones (primarily triiodothyronine, aka T3), and energy availability is provided by glucose and other nutrients.

It's not clear whether merely raising brain metabolism in this way will actually result in a corresponding increase in intelligence; however, I think it's unlikely that the opposite is possible (increasing intelligence without raising brain metabolism).

A hypothesis testing video game

6 Swimmy 01 April 2013 05:41AM

The Blob Family is a simple game made by Leon Arnott. At heart, it's a game about testing hypotheses and getting the right answer with the least amount of evidence you can.

The mechanics work like so: Balls bounce around the screen randomly and you control a character who needs to avoid them. You can aim the mouse anywhere and activate a sonar. On the right side are rules for how various balls will react to this, and your goal is to figure out which ball is which. As you use the sonar more, the balls speed up, so it becomes more difficult to stay alive, thus giving an incentive to test your hypothesis in as few clicks as possible.

It very nicely illustrates the principle that, to test a hypothesis, you must design tests to falsify your intuitions rather than to confirm them. For example, in one level, when you use the sonar:

  • 1 ball heads toward the center
  • 1 ball heads away from the center
  • 1 ball heads away from the mouse
  • 1 ball heads away from you

I found myself mistakenly clicking in the center of the screen to test hypothesis 1, but this is insufficient. To design the proper tests, you need to keep the mouse out of the center, keep it away from you, and, depending on the positions of the balls, keep it off a straight line between you and them (otherwise "away from the mouse" and "away from you" predict the same motion).
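To see why the center click is uninformative, here is a minimal Python sketch (my own illustration, not Arnott's code; the coordinates are made up) computing the direction each candidate rule predicts for a ball when the sonar fires:

```python
import math

def unit(dx, dy):
    """Normalize a direction vector; a zero vector stays zero."""
    d = math.hypot(dx, dy)
    return (0.0, 0.0) if d == 0 else (dx / d, dy / d)

def predicted_moves(ball, mouse, player, center=(0.5, 0.5)):
    """Direction each candidate rule predicts for the ball."""
    bx, by = ball
    return {
        "toward center":    unit(center[0] - bx, center[1] - by),
        "away from center": unit(bx - center[0], by - center[1]),
        "away from mouse":  unit(bx - mouse[0],  by - mouse[1]),
        "away from player": unit(bx - player[0], by - player[1]),
    }

# Naive test: sonar at the center while standing near it.
# "Away from center", "away from mouse", and "away from player" all
# predict the same direction, so the click only separates rule 1.
print(predicted_moves(ball=(0.8, 0.8), mouse=(0.5, 0.5), player=(0.5, 0.5)))

# Better test: sonar far from both the center and the player, and off
# any line through them -- now each rule predicts a distinct direction.
print(predicted_moves(ball=(0.8, 0.8), mouse=(0.1, 0.9), player=(0.3, 0.2)))
```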

It could also demonstrate the ability of a fast brain to test hypotheses quickly. For many levels, if you could slow time down and set up a very good test, you could solve the problem with a single click. But we humans aren't usually so attentive.

Just thought the LW crowd might enjoy it.

The Logic of the Hypothesis Test: A Steel Man

5 Matt_Simpson 21 February 2013 06:19AM

Related to: Beyond Bayesians and Frequentists

Update: This comment by Cyan clearly explains the mistake I made - I forgot that the ordering of the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.

I'm teaching an econometrics course this semester and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I had with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:

1. Either the null hypothesis or the alternative hypothesis is true.

2. If the null hypothesis is true, then the data has a certain probability distribution.

3. Under this distribution, our sample is extremely unlikely.

4. Therefore under the null hypothesis, our sample is extremely unlikely.

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.

An example looks like this:

Suppose we have a random sample $x_1, \ldots, x_n$ from a population with a normal distribution that has an unknown mean $\mu$ and unknown variance $\sigma^2$. Then:

1. Either $\mu = \mu_0$ or $\mu \neq \mu_0$, where $\mu_0$ is some constant.

2. Construct the test statistic $t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}}$, where $n$ is the sample size, $\bar{x}$ is the sample mean, and $s$ is the sample standard deviation.

3. Under the null hypothesis, $t$ has a Student's $t$ distribution with $n - 1$ degrees of freedom.

4. The p-value $P(|T| \geq |t|)$, where $T$ follows this $t_{n-1}$ distribution, is really small under the null hypothesis (e.g. less than 0.05).

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.
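For concreteness, here is a minimal sketch of this test in Python (assuming NumPy and SciPy; the sample is simulated rather than real data):

```python
import numpy as np
from scipy import stats

def one_sample_t_test(sample, mu0, alpha=0.05):
    """Two-sided one-sample t-test, following steps 1-6 above."""
    n = len(sample)
    xbar = np.mean(sample)
    s = np.std(sample, ddof=1)            # sample standard deviation
    t = (xbar - mu0) / (s / np.sqrt(n))   # step 2: test statistic
    # Steps 3-4: under H0, t ~ Student's t with n-1 degrees of freedom;
    # the p-value is the chance of a statistic at least this extreme.
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p, p < alpha                # steps 5-6: reject H0 if p < alpha

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)  # true mean is 0.5, not 0
t, p, reject = one_sample_t_test(sample, mu0=0.0)
print(f"t = {t:.3f}, p = {p:.4f}, reject H0: {reject}")
```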

What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to 5 seems anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple of premises to quantify just how likely our sample was, and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug and then putting a big heavy deduction table on top so no one notices the lumps underneath.

This sounds like it's a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process which is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible so that you prevent "inductive contamination" spreading to everything you believe. 

That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?

Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.