Reality is arational.

-10 reguru 09 September 2016 04:08PM

Reality is arational. Everything you do is arational. You aren't aware of it because you lack awareness. By becoming aware that you are unaware, you have increased your awareness. Yet still, you will always lack awareness. The same is true of me. My definition of awareness is the subjective experience of separating thoughts from awareness. You can become aware of thoughts, and if an "I" thought appears, that was not you; you simply became aware of it.

My point is that I think you confuse the map for the territory. Now, I made the same mistake, because "the map is not the territory" is itself a map. In all actuality, all types of communication are maps, and equally untrue.

The way I see it is that reality is the way it is and it is arational. Gravity does not exist. We may create a layer on top of arational reality and call it reality, while in all actuality it is a virtual reality.

It is simply a human projection on top of the arational reality. Arationality is completely independent of reasoning, everything rational and irrational exists within a matrix (virtual reality) of the arational.

It's fine to do physics, math or other science but it is still a human projection.

You might think that there is no alternative to using maps (like I do here) but I am simply pointing out that you can discover arational reality without creating another map to point out its existence.
If you want to find out for yourself, what happens when you become silent of all thoughts? Does reality disappear?

The point is that you can sit down, become aware of all the maps, and notice that reality does not disappear when what you call "you" (the "I" thought) loses attachment to maps.

If you investigate for yourself, an empirical investigation, you can find out for yourself too. That's the only way for this to work. You might notice that I make the same mistake here: there is a small inclination to think that some maps get you closer to the truth, even though that is not the case. It is an illusion, the illusion that some maps are better than others, when they are all the same from the perspective of the arational.

What's the point of this post? It's an invitation, you have to figure it out yourself.

People who lie about how much they eat are jerks

-10 Elo 08 August 2016 03:45AM

Originally posted here: http://bearlamp.com.au/people-who-lie-about-how-much-they-eat-are-jerks/


A weight-loss journey is a long and complicated problem-solving adventure.  This is one small factor that adds to the confusion.  You probably have that one friend who appears to eat a whole bunch and yet doesn't put on weight.  If you've ever had that conversation, it goes something like,

"How are you so thin?"
"raah raah metabolism"
"raah raah I dont know why I don't put on weight"
"Take advantage of the habit"

Well, I have had enough.  You're wrong.  You're lying and you probably don't even know it.  It's not possible (within a reasonable scope of human variation).  Calories and energy are a black-box system: calories in, work out, leftovers become weight gain, a deficit is weight loss.  If a human could eat significantly more calories for the same amount of work and not put on weight, we would be prodding them in a lab for breaking the laws of conservation of mass and conservation of energy.

So this is you: you say you gain weight no matter what you eat, and that's scientifically impossible.  Now what?  You probably don't mean to break the laws of physics (and you probably don't actually break them).  You genuinely, absentmindedly don't notice when you scoff down whole plates of food, or when you skip dinner because you didn't feel like it (and so absentmindedly balance the calories automatically).  It's all the same to you because you naturally do that.

This is very likely about habits, the natural habits that people have.  John, for example, has the habit of getting home and going to the fridge to make dinner, because it's usually the evening.  Wendy doesn't have that habit; she eats when she is hungry.  Not having a set mealtime sometimes means that she gets tired-hungry: too exhausted to decide what to eat and too hungry to do anything else that would help solve the problem.  But Wendy doesn't get home and automatically cook dinner.  (Good things and bad things come from habits.)

Wendy and John go to a big lunch together.  They both eat 150% of the calories they should be eating for that meal, and they don't mind - enjoying food is part of enjoying life.  It was a fancy restaurant with good food.  Later that evening when Wendy gets home she doesn't feel hungry and goes off to read a book or talk to friends on the internet.  Eventually she has a light snack (of 10% of her "dinner" calories) and heads off to bed, totalling 160% of the calories for the two meals.  Effectively she is under-eating for the day.  John, on the other hand, has his habit of heading home and making dinner.  Even after the big lunch, his automatic systems take over and he makes an ordinary dinner of 100% of his calories for that meal.  John's total for that day is 250% for two meals - effectively half a meal extra for that day.

If Wendy and John do this every week (assuming the rest of their diets are perfectly balanced), John's weight will have an upwards trajectory and Wendy's a downwards one.  John might ask Wendy how she stays so skinny, and Wendy wouldn't know.  After all, they eat about the same amount when they are together.
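The arithmetic above can be made concrete with a small sketch. The percentages are the ones from the story; treating each meal as a "100% budget" is my own framing for the illustration:

```python
# A sketch of the John/Wendy arithmetic, in percentages of a single
# meal's calorie budget (numbers taken from the story above).
def day_total(lunch_pct, dinner_pct):
    """Total intake over the two meals, as a percentage of one meal's budget."""
    return lunch_pct + dinner_pct

wendy = day_total(150, 10)    # big lunch, then only a light snack
john = day_total(150, 100)    # big lunch, then a habitual full dinner

budget = 200                  # two meals' worth

print(wendy - budget)         # Wendy runs a deficit
print(john - budget)          # John's surplus: about half a meal extra
```

Run weekly, those small signed differences are exactly the opposite trajectories described above, even though both people "eat about the same" whenever they are observed together.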

No one understands this.  


What can we do about it?

1. We can hire scientists to follow both J and W around for a week and write down every time they eat something. (this is impractical - maybe if we are in an isolated environment like a weekend retreat it would be easier to do this)
2. We can get them to self report via an app (but people are usually pretty bad at that)
3. We can try to ask more specifically, "what do you eat in a day?" or "what have you eaten since this time yesterday?", and gather data points to try to build a picture of what a person eats.
4. We can search for people with similar habits around food to us and ask them how they stay healthy.
5. We can look for people with successful habits around food, ask them for advice and then figure out why that advice works, and how to make that advice work for us.

On the noticing level: you should notice that every single thing you eat adds to your caloric intake, and every single piece of work you do adds to your burn.  It's easier to eat another piece of chocolate (5 seconds) than to run another 15 minutes to burn that chocolate off.  If something is not working towards your dieting success, it's probably working against it.


Meta: this took one hour to write.

Anti-reductionism as complementary, rather than contradictory

-2 ImNotAsSmartAsIThinK 27 May 2016 11:17PM

Epistemic Status: confused & unlikely

Author's note: the central claim of this article I now believe is confused, and mostly inaccurate. More precisely (in response to a comment by ChristianKl)

>Whose idea of reductionism are you criticising? I think your post could get more useful by being more clear about the idea you want to challenge.

I think this is the closest I get to having a "Definition 3.4.1" in my post:

"...the other reductionism I mentioned, the 'big thing = small thing + small thing' one..."

Essentially, the claim is that to accurately explain reality, non-reductionist explanations aren't always *wrong*. 

The confusion, however, which I realized elsewhere in the thread, is that I conflate 'historical explanation' with 'predictive explanation'. A good predictive explanation will almost always be reductionist because, as it says on the tin, big things are made of smaller things. Good historical explanations, though, will be contra-reductionist: they'll explain a phenomenon in terms of its relation to the environment. Consider evolution; genes seem to be explained non-reductionistically because their presence or absence is determined by their effect on the environment, i.e. whether they are fit, so the explanation for how they got there necessarily includes complex things, because those complex things cause it.

>Apart from that I don't know what you mean with theory in "Reductionism is a philosophy, not a theory." As a result on using a bunch of terms where I don't know exactly what you mean it's hard to follow your argument.

Artifact of confusion;  if contra-reductionism is a valid platform for explanation, then the value of reductionism isn't constative -- that is, it isn't about whether it's true or false, but something at the meta-level, rather than the object level. The antecedent is no longer believed, so now I do not believe the consequent.

The conceit I had in calling it a philosophy, or more accurately a perspective, is essentially that you have a dataset, and then you can apply a 'reductionist' filter on it to get reductionist explanations and a 'contra-reductionist' filter to get contra explanations. This was a confusion, and it only seemed reasonable because I was treating the two types of explanation - historical and predictive - as somehow equivalent, which I now know to be mistaken.

 

Reductionism is usually thought of as the assertion that the sum of the parts equals the whole. Or, a bit more polemically, that reductionist explanations are more meaningful, proper, or [insert descriptor laced with positive affect]. It's certainly appealing; you could even say it seems reality prefers these types of explanation. The facts of biology can be attributed to the effects of chemistry, the reactions of chemistry can be attributed to the interplay of atoms, and so on.

But this is conflating what is seen with the perspective itself; "I see a jelly donut, therefore I am a jelly donut" is not a valid inference. Reductionism is a way of thinking about facts, but it is not the facts themselves. Reductionism is a philosophy, not a theory. The closest thing to a testable prediction it makes is what could be termed an anti-prediction.

Another confusion concerns the alternatives to reductionism. The salient instance of anti-reduction tends to be some holist quantum spirituality woo, but I contend this is more of a weak man than anything. To alleviate any confusion, I'll just refer to my proposed notion as 'contra-reductionism'.

Earlier, I mentioned that reductionism makes no meaningful predictions. To clarify this, I'll distinguish a kind of diminutive motte of reductionism, which may or may not actually exist outside my own mind (and which truly is just a species of causality, broadly construed). In broad strokes, this reductionism 'reduces' a phenomenon to the sum of its causes, as opposed to its parts. This is the kind of reductionism that treats evolution as a reductionist explanation; indeed, it treats almost any model which isn't strictly random as 'reductionist'. The other referent would be reductionism as the belief that "big things are made of smaller things, and complex things are made of simpler things".

It is the former kind of reductionism that makes what I labeled an anti-prediction. The core of this argument is simply that reductionism is about causality; specifically, it qualifies what types of causes should even be considered meaningful, or well-founded, or simply worth thinking about. If you broaden the net sufficiently, causality is a concept which even makes sense to apply to mathematical abstractions completely unrooted in any kind of time. That is, the interventionist account of causality essentially boils it down to 'what levers could we have pulled to make something not happen', which translates perfectly to maths; see, for instance, reductio ad absurdum arguments.

But I digress. This diminutive reductionism is simply the belief that things can be reduced to their causes, which is on par with defining transhumanism as 'simplified humanism' in the category of useless philosophical mottes. In short, it is quite literally an assertion of no substance, and isn't even worth giving a name.

Now that I've finished attacking straw men, the other reductionism I mentioned, the 'big thing = small thing + small thing' one, is also flawed, albeit useful nonetheless.

This can be illustrated by the example of evolution I mentioned: an evolutionary explanation is actually anti-reductionist; it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology. Put bluntly, there is little object-level difference between explaining gene sequences with evolution and explaining weather with pantheons of gods (there is a meta-level difference, i.e. one is accurate). Put less controversially, this is explicitly non-reductionistic; relatively simple things (the genetic sequence of a creature) are explained in the language of things far more complex (population and environment dynamics over the course of billions of years). If this is your reductionism, all it does is encapsulate the ontology of universe-space; or, more evocatively, it's a logic that doesn't - couldn't - tell you where you live, because it doesn't change wherever you may go.

Another situation where reductionism and contra-reductionism give different answers is an example cribbed from David Deutsch. It's possible to set up dominoes so that they compute an algorithm which decides the primality of 631. How would you explain a positive result?

The reductionist explanation is approximately "that domino remains standing because the one behind it didn't fall over", and so on, with variations such as "that domino didn't fall over because the one behind it was knocked over sideways". The contra-reductionist explanation is "that domino didn't fall over because 631 is prime". Each one is 'useful' depending on whether you are concerned with the mechanics of the domino computer or with the theory.
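The theory-level fact the dominoes are said to compute can be checked directly. Here is a minimal trial-division sketch (my own illustration of the abstract algorithm, not Deutsch's domino construction):

```python
def is_prime(n: int) -> bool:
    """Trial division: the abstract fact the domino computer encodes."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

print(is_prime(631))  # True: 631 is prime
```

The contra-reductionist explanation of the domino result points at this level of description; the reductionist one points at the individual toppling events that physically realize it.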

You might detect something in these passages - that while I slough off any pretense of reductionism, glorious (philosophical) materialism remains a kind of true north in my analysis. This is my thesis. My contra-reductionism isn't non-materialistic; it's merely a perspective inversion of the sort highlighted by a figure/ground illusion. Reductionism defines - reduces - objects by pointing to their constituents. A mechanism functions because its components function. A big thing of small things. Contra-reductionism does the opposite: it defines objects by their impact on other objects. "[A] tree is only a tree in the shade it gives to the ground below, to the relationship of wind to branch and air to leaf." I don't mean this in a spiritual way, naturally (no pun intended). I am merely defining objects externally rather than internally. At the core, the rose is still a rose; the sum is still normality.

If I had to give a short, pithy summation of this post, the core is simply that, like all systematized notions of truth or meaningfulness, reductionism collapses in degenerate cases, where it fails to be useful or give the right answer. Contra-reductionism isn't an improvement or a replacement, but an alternative formulation in a conceptual monoculture, which happens to give the right answer sometimes.

When considering incentives, consider the incentives of all parties

-5 casebash 29 May 2016 01:47PM

Once upon a time the countries of Alpago and Byzantine had a war. Alpago was mostly undamaged during this war. Byzantine was severely damaged; although it has caught up in some metrics, such as education, its economy is still somewhat weaker. Alpago was the clear aggressor, and now, fifty years later, everyone who is reasonable acknowledges that Alpago was in the wrong.

There is a major debate within the countries about how to respond to the past. Many Byzantians argue that the views of the Alpagoans are irrelevant. The Alpagoans are "unbombed", and this provides them with many systematic advantages over the Byzantians, such as career opportunities; indeed, most of the top companies in Byzantine still have Alpagoan CEOs, since many of the senior management were hired before Byzantine had built anywhere near the number of colleges Alpago has.

Many Byzantians argue that the views of the unbombed deserve very little consideration. Of course the unbombed will want to preserve their advantages. How can the Byzantians ever have their voices heard when unbombed members of parliament are giving their opinions in the Alpago parliament on how much compensation is appropriate? Surely if Alpago was truly sorry, they would accept the demands of the Byzantian government without question.

The Byzantians are undoubtedly correct in their assumption that the Alpagoans have a very strong incentive to underestimate what is owed. They are also correct when they say that the Alpagoans are in a position of power that makes it very easy for them to ignore the issue of compensation; after all, it does not affect them very much if their government decides to pay compensation to the Byzantians instead of the alternate plan of wasting it on a fleet of nuclear submarines. However, in other areas, the Alpagoans no longer have a power advantage. Many Alpagoan politicians used to say that the war was justified; if a politician said that these days, even the conservative party would demand that they resign, because no reasonable person could come to such a conclusion.

In contrast, some of the more extreme Byzantians regularly declare the burning of their capital an intentional war crime, while the evidence quite clearly shows that the Alpagoans had not targeted the civilian population, only the military base, which inadvertently led to the fire when it was destroyed. During the war, the intentional-targeting theory was best supported by the evidence available to the Alpagoans, but advances in forensics long ago disproved it. Many Byzantians consider this forensic technique discredited, because it was originally used to blame the war on the Byzantians. The Alpagoans' reason for not burning the city was not altruistic: burning it would have made it impossible for them to loot it. It is politically risky for an Alpagoan to point out that the burning was unintentional, since they might be mistaken for a member of the Alpagoan Pillorying Club. These are legitimately horrible people (even the conservative party considers them bigots).

On the other hand, the Alpagoans almost universally insist that they never executed any Byzantine civilians in the brief period that they occupied the country. There are extensive interviews with numerous witnesses who saw this happen with their own eyes, but no hard evidence. The Alpagoans dismiss these accounts, as it is impossible for them to conceive that criminals might be telling the truth when their own soldiers (whom they consider honorable - they blame politicians for the war) deny this ever happened. Any Byzantine who mentions this is immediately dismissed as a "loony conspiracy nut".

If the Byzantians want to consider the incentives of the Alpagoans, they need to also consider their own incentives, as they would be construed by a hardened cynic. They might argue that their incentives are to fight for justice as this would earn them respect, but the cynic would not accept this. The cynic would argue that their incentives are to fight for the maximal amount of compensation, even if a perfectly impartial judge decided that it should be X, their incentive would be to claim that it should be at least X + 1. These incentives exist, even if the Alpagoan government would never offer even half of X.

Some of the Alpagoans are motivated by conscious self-interest to preserve their advantages, while many more, who are convinced that they support fair compensation, are affected by unconscious self-interest bias. But the cynic will believe that the Byzantians have an incentive to portray the effect of self-interest on the Alpagoans as greater than it is. The cynic will believe that, similarly, some of the Byzantians will be motivated by conscious self-interest, and others by unconscious bias, all while completely convinced that they are being fair.

The Alpagoans are in a position of power when it comes to compensation. The Byzantians lack the ability to force them to pay it, so the resolution will most likely be on the Alpagoans' terms. The cynic will note that the Byzantians have an incentive to position themselves as the weaker party on all issues, even those where they are the ones in the position of power, such as the claim that the Alpagoans intentionally burned their capital. Many Byzantians know that the Alpagoans didn't actually try to burn their capital, but they see this as a technicality (the Alpagoans started an illegitimate war which resulted in the capital burning), and they do not want to get into an argument with their fellow Byzantians who *really* strongly believe it. Further, disagreeing with other Byzantians would undermine their cause, which they see as just. The cynic would note that this is a very easy argument for the Byzantians to make. It does not harm them if the actions of the Alpagoans are misrepresented; in fact, it helps them. Further, there are social incentives to agree with their fellow Byzantians.

Even though the Alpagoans are correct that they didn't intentionally burn the city, many of them have formed their viewpoint out of self-interest. There is convincing historical evidence, but very few of them have actually seen it, nor do most of them have any interest in checking it out, as it might disprove their beliefs. Most Alpagoans would be unwilling to acknowledge this, as it would harm their credibility and be used as ammunition by Byzantian activists who believe that they burned it intentionally.

We can see that considering the incentives of all the parties will help the Byzantians come to a better understanding of the situation. The same will be true for the Alpagoans - the Byzantians are right that the Alpagoans are often unaware of their bias. On the other hand, if either group only considers the incentives of one party, it will most likely come to a more biased conclusion than if it had considered the incentives of neither party. For these purposes, it is very important that the cynic be maximally cynical, without actually being a conspiracy theorist, in order to reduce the room for bias.

Wrong however unnamed

-4 Romashka 24 May 2016 01:55PM

Related to: 37 ways that words can be wrong.

Consider the following sentence (from the Internet, but I have heard it before): 'Lichens consist of fungi and algae, but they are more than the sum of their constituents.'

It is supposed to say something like 'the fungus and the alga don't just live very close to each other; they influence each other's habitats and can be considered, for most purposes, to form a physiologically integrated body'. It never actually says that, although people gradually come to this conclusion if they look at illustrations or read long enough. And I don't think the phrase is sufficiently catchy to explain its popularity; rather, it is a tenuous introduction to the much-later-explained term 'synergism'. A noble (in principle) preparation of the mind.

Yet how is a lichen 'more than the sum of fungus and alga'? I suppose one could speak of a 'sum' if the lichen were pulverized and consumed as medicine, and its effect on the patient compared to that of a mixture of similarly treated fungus (grown how, exactly?) and alga (same here). Such a 'sum' doesn't exist in the wild. It shouldn't exist in the literature.

A child is not bothered by the phrase's lack of sense. When she encounters 'synergism', she'll remember having been told of something like it, and be reassured by the unity of science. The phrase flies under the radar of 'established biological myths' because it doesn't have enough meaning to be one.

I picked up a dictionary of zoological terms and tried to recall how the notions were first put before me, but of course I failed. (I guess it should be high-level things, like 'variability', or colloquial expressions, 'bold as a lion', etc., that distort and get distorted the most.) They seem to 'have always been there'. Then I looked at the definitions and tried to imagine them misapplied (intuitively, a simpler task). No luck. Yet someday, something else truly unknown to me will appear familiar and simple.

We can weed improper concepts out of textbooks, but there are too many sources which are written far more engagingly and 'clearly', and which propagate not-even-wrong ideas. Explained like I'm five.

And never named.

Suppose HBD is True

-12 OrphanWilde 21 April 2016 01:34PM

Suppose, for the purposes of argument, that HBD is true. HBD (human biodiversity) is the claim that distinct populations of humans exist and have substantial genetic variance, which accounts for some difference in average intelligence from population to population (I will be avoiding the word "race" here insomuch as possible). Suppose also that all its proponents are correct in accusing the politicization of science of burying this information.

I seek to ask the more interesting question: Would it matter?

1. Societal Ramifications of HBD: Eugenics

So, we now have some kind of nice, tidy explanation for different characters among different groups of people.  Okay.  We have a theory.  It has explanatory power.  What can we do with it?

Unless you're willing to commit to eugenics of some kind (be it restricting reproduction or genetic alteration), not much of anything.  And even if you're willing to commit to eugenics, HBD doesn't add anything.  HBD doesn't actually change any of the arguments for eugenics - below-average people exist in every population group, and insofar as we regard below-average people as a problem, the genetic population they happen to belong to doesn't matter.  If the point is to raise the average, the population group doesn't matter.  If the point is to reduce the number of socially dependent individuals, the population group doesn't matter.

Worse, insofar as we use HBD as a determinant in eugenics, our eugenics are less effective.  HBD says your population group has a relationship with intelligence; but if we're interested in intelligence, we have no reason to look at your population group, because we can measure intelligence more directly.  There's no reason to use the proxy of population group if we're interested in intelligence, and indeed, every reason not to; it's significantly less accurate and politically and historically problematic.

Yet still worse for our eugenics advocate: insomuch as population groups do have significant genetic diversity, using population groups instead of direct measurements of intelligence is far more likely to increase disease-transmission risks.  (Genetic diversity is very important for population-level disease resistance.  Just look at bananas.)

2. Social Ramifications of HBD: Social Assistance

Let's suppose we're not interested in eugenics.  Let's suppose we're interested in maximizing our societal outcomes.

Well, again, HBD doesn't offer us anything new.  We can already test intelligence, and insofar as HBD is accurate, intelligence tests are more accurate still.  So if we aim to streamline society, we don't need HBD to do so.  HBD might offer an argument against affirmative action, in that we have different base expectations for different populations, but affirmative action already takes different base expectations into account (if you live in a city of 50% black people and 50% white people, but 10% of local lawyers are black, your local law firm isn't required to have 50% black lawyers, but 10%).  We might desire to adjust the way we engage in affirmative action, insofar as it might not lead to the best results, but if you're interested in the best results, you can argue on the basis of best results without needing HBD.

I have yet to encounter someone who argues for HBD who also argues we should do something with regard to HELPING PEOPLE on the basis of it, but that might actually be a more significant argument: if there are populations of people who are going to fall behind, that might be a good argument to provide additional resources to those populations, particularly if there are geographic correspondences.  That is, if HBD is true, and if population groups are geographically segregated, individuals in those population groups will suffer disproportionately relative to their merits, because they don't have the local geographic social capital that equal-advantage people of other population groups would have.  (An average person in a poor region will do worse than an average person in a rich region.)  So HBD provides an argument for desegregation.

Curiously, HBD advocates have a tendency to argue that segregation would lead to the best outcome.  I'd welcome arguments that concentrating an -absence- of social capital is a good idea.

3. Scientific Ramifications of HBD

Well, if HBD were true, it would mean science is politicized.  This might be news to somebody, I guess.

4. Political Ramifications of HBD

We live in a meritocracy.  Contrary to the views of some people, this is not actually an ideal thing, because it results in a systematic merit segregation that has completely deprived the lower classes of intellectual resources.  Talk to older people sometime who remember, from when they worked in the coal mines (or whatever), the one guy you could trust to answer your questions and provide advice.  Our meritocracy has advanced to the point where we are systematically stripping the lower classes of everybody of value and redistributing them to the middle and upper classes.

HBD might be meaningful here.  Insofar as people take HBD to its absurd extremes, it might actually result in an -improvement- for some lower-class groups, because if we stop taking all the intelligent people out of poor areas, there will still be intelligent people in those areas.  But racism as a force of utilitarian good isn't something I care to explore in any great detail, mostly because if I'm wrong it would be a very bad thing, and also because none of its advocates actually suggest anything like this, being more interested in promoting segregation than desegregation.

It doesn't change much else, either.  With HBD we continually run into the same problem - as a theory, it's the product of measuring individual differences, and as a theory, it doesn't add anything to our information that we don't already have with the individual differences.

5. The Big Problem: Individuality

Which is the crucial fault with HBD, iterated multiple times here, in multiple ways: It literally doesn't matter if HBD is true.  All the information it -might- provide us with, we can get with much more accuracy using the same tests we might use to arrive at HBD.  Anything we might want to do with the idea, we can do -better- without it.

HBD might predict we get fewer IQ-115, IQ-130, and IQ-145 people from particular population groups, but it doesn't actually rule them out.  Insofar as this kind of information is useful, it's -more- useful to have more accurate information.  HBD doesn't say "black people are stupid"; instead it says "the average IQ of black people is slightly lower than the average IQ of white people".  But since "black people" isn't a thing that exists, but rather an abstract concept referring to a group of black persons, and HBD doesn't make any predictions at the individual level that we couldn't more accurately obtain by listening to a person speak for five seconds, it doesn't actually make any useful predictions.  It adds literally nothing to our model of the world.
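The averages-versus-individuals point can be made quantitative with a toy sketch: take two normal distributions with a small difference in means (the means, the standard deviation, and the threshold here are all purely illustrative, not real data about any group) and note that both distributions put nonzero mass above any threshold, so a group label alone settles nothing about an individual:

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Two hypothetical groups with a small difference in means
# (illustrative numbers only).
tail_a = 1 - normal_cdf(130, 100, 15)
tail_b = 1 - normal_cdf(130, 95, 15)

# Both tails are small but nonzero: high scorers exist in both groups,
# so measuring the individual directly dominates knowing the group mean.
print(tail_a, tail_b)
```

The sketch is only about overlapping distributions in general; the argument in the text is that whatever a group-level average could tell you, a direct individual measurement tells you more accurately.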

It's not the most important idea of the century.  It's not important at all.

If you think it's true - okay.  What does it -add- to your understanding of the world?  What useful predictions does it make?  How does it permit you to improve society?  I've heard people insist it's this majorly important idea that the scientific and political establishment is suppressing.  I'd like to introduce you to the aether, another idea that had explanatory power but made no useful predictions, and which was abandoned - not because anybody thought it was wrong, but because it didn't even rise to the level of wrong, because it was useless.

And that's what HBD is.  A useless idea.

And even worse, it's a useless idea that's hopelessly politicized.

What can we learn from Microsoft's Tay, its inflammatory tweets, and its shutdown?

1 InquilineKea 26 March 2016 03:41AM

http://www.wired.com/2016/03/fault-microsofts-teen-ai-turned-jerk/

Could this be a lesson for future AIs? The AI control problem?

Two super-intelligences (evolution and science) already exist: what could we learn from them in terms of AI's future and safety?

0 turchin 09 March 2016 11:00AM

There are two things in the past that may be called super-intelligences, judging by the level of the tasks they solved. Studying them is useful as we consider the creation of our own AI.

The first is biological evolution, which managed to give birth to something as sophisticated as man, with his powerful mind and natural languages. The second is the whole of human science considered as a single process, a single hive mind capable of solving problems as complex as sending a man to the Moon.

What can we conclude about future computer super-intelligence from studying the available ones?

Goal system. Both super-intelligences are purposeless. Neither has any final goal directing its course of development; instead, each solves many local goals in order to survive in the moment. This is an amazing fact, of course.

They also lack a central regulating authority. Of course, the goal of evolution is survival at any given moment, but that is a rather technical goal, one needed only for the evolutionary mechanism to operate.

Both complete a great number of tasks, but no unitary final goal exists. It is just like a person over a lifetime: values and tasks change, but the brain remains.

Consciousness. Evolution lacks it; science has it, but to all appearances it is of little significance.

That is, there is no center, neither a perception center nor a purpose center. At the same time, all tasks are completed. The sub-conscious part of the human brain works the same way.

Master algorithm. Both super-intelligences are based on the same principle: the collaboration of numerous smaller intelligences, plus natural selection.

Evolution is impossible without billions of living creatures testing various gene combinations. Each of them solves its own egoistic tasks and does not care about any global purpose. For example, few people think of choosing the best marriage partner as a tool of species evolution (assuming sexual selection is real). Interestingly, the human brain has the same organization: it consists of billions of neurons, but none of them sees the global task.

Roughly several million scientists have worked throughout history. Most of them, too, have been solving unrelated problems, while the least refuted theories survived selection (social mechanisms play a role here).

Safety. Dangerous, but not hostile.

Evolution may produce ecological crises; science creates the atomic bomb. Hostile agents exist within both, but they have no super-intelligence (e.g. a tiger, a nation state).

Within an intelligent environment, however, a dangerous agent may appear which is stronger than the environment and will “eat it up”. This would be difficult to initiate, however: the transition from evolution to science was equally difficult to initiate from evolution’s point of view (if it had one).

How to create our own super-intelligence. Assume we agree that a super-intelligence is an environment containing multiple agents with differing purposes.

Then we could create an “aquarium” and put a million differing agents into it. At the top, however, we set an agent that casts tasks into the aquarium and then retrieves the answers.

The hardware requirements are very high: we would have to simulate millions of human-level agents. A computational environment of about 10^20 FLOPS is required to simulate a million brains. In general, this is close to the total power of the Internet. It could be implemented as a distributed network, where individual agents are owned by individual human programmers and solve different tasks – something like SETI@home or the Bitcoin network.
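As a sanity check on that figure: the 10^20 FLOPS total is consistent with one commonly cited (and contested) assumption of roughly 10^14 FLOPS per human brain, multiplied by a million agents. The per-brain number below is my assumption, not the author's.

```python
# Rough sanity check of the hardware estimate, assuming ~1e14 FLOPS per
# human brain (one commonly cited estimate) and a million simulated agents.
flops_per_brain = 1e14   # assumed per-brain compute requirement
agents = 1e6             # one million human-level agents
total = flops_per_brain * agents
print(f"Total compute required: {total:.0e} FLOPS")
```

This is only an order-of-magnitude sketch; per-brain estimates in the literature span several orders of magnitude.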

Everyone can cast a task into the network, but must provide a share of their own resources in return.

 

Speed of development of superintelligent environment

Hyperbolic law. The super-intelligent environment develops hyperbolically. Korotayev shows that the human population grows according to the law N ~ 1/(t0 - t) (von Foerster's law, which has a singularity at 2026), which is a solution of the following differential equation:

dN/dt = N*N

A solution and a more detailed explanation of the equation can be found in this article by Korotayev (the article is in Russian; see also p. 23 of his English book). Notably, the growth rate depends on the second power of the population size. The second power arises as follows: one factor of N means that a bigger population has more descendants; the other factor of N means that a bigger population contains more inventors, who generate growth in technical progress and resources.
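A quick numerical check, with arbitrary constants chosen only for illustration, confirms that this equation really does produce hyperbolic growth that blows up in finite time, rather than mere exponential growth:

```python
# Euler integration of dN/dt = N^2 with N(0) = 1, whose exact solution is
# N(t) = 1/(1 - t): hyperbolic growth with a finite-time singularity at t = 1.
# All constants are arbitrary, for illustration only.
n = 1.0
t, dt = 0.0, 1e-6

while t < 0.9:            # stop at 90% of the way to the singularity
    n += n * n * dt       # Euler step for dN/dt = N^2
    t += dt

exact = 1.0 / (1.0 - t)   # analytic solution, about 10 at t = 0.9
print(f"numeric N(0.9) = {n:.4f}, analytic N(0.9) = {exact:.4f}")
```

The numeric and analytic values agree closely, and both have already grown tenfold at only 90% of the way to the singularity.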

Evolution and technological progress are also known to develop hyperbolically (see below for how this connects with the exponential nature of Moore’s law; an exact layout of hyperbolic acceleration throughout history may be found in Panov’s article “Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life”). The expected singularity will occur in the 21st century, and now we know why: evolution and technological progress are both controlled by the same development law of the super-intelligent environment. This law states that the intelligence of an intelligent environment depends on the number of nodes and on the intelligence of each node. This is, of course, a very rough estimate, as we should also include the speed of transactions.

However, Korotayev gives an equation for population size only, while it is actually also applicable to evolution – the more individuals, the more often important and interesting mutations occur – and to the number of scientists in the 20th century. (In the 21st century the number of scientists has already reached a plateau, so now we should probably count AI specialists as nodes.)

In short: Korotayev provides a hyperbolic law of acceleration and derives it from plausible assumptions, but it is only applicable to demographics, from the beginning of human history until the middle of the 20th century, when demographics stopped obeying the law. Panov provides data points for all of history, from the beginning of the universe until the end of the 20th century, and showed that these points obey a hyperbolic law, but he wrote the law down in a different form: that of constantly diminishing intervals between biological and (more recently) scientific revolutions. (Each interval is 2.67 times shorter than the previous one, which implies a hyperbolic law.)
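Panov's formulation and the hyperbolic one agree for a simple reason: intervals that each shrink by a factor of 2.67 form a geometric series, whose total is finite, and that accumulation point is precisely a finite-time singularity. A small sketch, with an arbitrary first interval:

```python
# Panov's form of the law: each interval between revolutions is 2.67 times
# shorter than the previous one.  Such intervals form a geometric series
# whose sum converges to a finite accumulation point (the singularity).
# The first interval length is arbitrary, for illustration only.
ratio = 2.67
first_interval = 1000.0   # e.g. years; an arbitrary starting interval

# Sum the first 50 intervals and compare with the infinite-series limit.
intervals = [first_interval / ratio**k for k in range(50)]
finite_sum = sum(intervals)
limit = first_interval * ratio / (ratio - 1.0)   # sum of the infinite series
print(f"sum of 50 intervals: {finite_sum:.2f}, infinite limit: {limit:.2f}")
```

After only 50 revolutions the partial sum is indistinguishable from the limit: all further revolutions crowd into a vanishingly small remaining span of time.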

What I did here: I suggested that Korotayev’s explanation of the hyperbolic law also stands as an explanation of the accelerated evolutionary process in pre-human history, and that in the 21st century it will work as a law describing the evolution of an environment of AI agents. It may need some updates if we also include the speed of transactions, but that would give even quicker results.

Moore's law is only an exponential approximation; seen as the speed of technological development in general, it is hyperbolic over the longer term. Kurzweil wrote: “But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn’t get a straight line. What I got was another exponential curve. In other words, there’s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.”

While we now know that Moore's law in hardware has slowed to a doubling every 2.5 years, we will probably now start to see exponential growth in the capability of programs.

Neural net development has a doubling time of around one year or less. Moore's law is like a spiral that circles around ever more intelligent technologies, and it consists of small S-shaped curves. All this deserves a longer explanation; here I only show that Moore's law as we know it does not contradict the hyperbolic law of acceleration of a super-intelligent environment – it is simply how that law appears on a small scale.
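Kurzweil's observation can be restated in code: when the doubling time itself shrinks, the cumulative number of doublings grows faster than linearly in time, which is what "exponential growth in the rate of exponential growth" means. The period boundaries below are his reported figures; extending the one-year doubling through 2000 is my own assumption, for illustration only.

```python
# Cumulative doublings under a shrinking doubling time, using Kurzweil's
# reported figures: doubling every 3 years (1910-1950), every 2 years
# (1950-1966), then every year (here assumed to continue until 2000).
periods = [(1910, 1950, 3.0), (1950, 1966, 2.0), (1966, 2000, 1.0)]

doublings = sum((end - start) / T for start, end, T in periods)
growth_factor = 2.0 ** doublings
print(f"{doublings:.1f} doublings, total growth factor ~ {growth_factor:.2e}")
```

Under a constant 3-year doubling time the same 90 years would give only 30 doublings; the shrinking doubling time yields over 55, and the gap widens without bound.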

 

Neural network results: perplexity (lower is better)

46.8, "One Billion Word Benchmark" v1, 11 Dec 2013
43.8, "One Billion Word Benchmark" v2, 28 Feb 2014
41.3, "Skip-gram language modeling", 3 Dec 2014
24.2, "Exploring the Limits of Language Modeling", 7 Feb 2016, http://arxiv.org/abs/1602.02410

 

Child-age equivalence in answering questions about a picture:

3 May 2015 — 4.45 years old, http://arxiv.org/abs/1505.00468
7 November 2015 — 5.45 years old (in six months, it "grew up" by a year), http://arxiv.org/abs/1511.02274
4 March 2016 — 6.2 years old, http://arxiv.org/pdf/1603.01417

Material from Sergey Shegurin 


Other considerations

Human-level agents and the Turing test. We know that the brain is very complex, and if the power of individual agents in the AI environment grows this quickly, agents capable of passing a Turing test should appear – and it will happen very soon. But for a long time the nodes of this net will be small companies and personal assistants, which could deliver superhuman results. There is already a marketplace where various projects can exchange results or data through APIs. As a result, the Turing test will become meaningless, because the most powerful agents will be helped by humans.

In any case, some kind of “mind brick”, a universal robotic brain, will also appear.

Physical size of a strong AI: because the speed of light is finite, a super-intelligence must decrease in size rather than increase, in order to keep communication within itself quick. Otherwise information exchange will slow down and the development rate will be lost.

Therefore, the super-intelligence should have a small core – at most the size of the Earth, and smaller still in the future. The periphery can be huge, but it will perform technical functions: defence and nutrition.

Transition to the next super-intelligent environment. It is logical to suggest that the next super-intelligence will also be an environment rather than a small agent. It will be something like a net of neural net-based agents, as well as connected humans. The transition may seem soft on a small time scale, but it will be disruptive in its final results. It is already happening: the Internet, AI agents, open AI, you name it. An important part of such a transition is the changing speed of interaction between agents. In evolution, the transaction time was thousands of years – the time needed to test new mutations. In science it was months – the time needed to publish an article. Now it is limited by the speed of the Internet, which depends not only on the speed of light but also on physical size, bandwidth and so on, and has transaction times on the order of seconds.

So, a new super-intelligence will arise in a rather “ordinary” fashion: the power and number of interacting AI agents will grow, they will become quicker, and they will quickly perform any tasks fed to them. (Elsewhere I discussed this and concluded that such a system may evolve into two very large super-intelligent agents locked in a cold war, and that a hard take-off of any single AI agent against the AI environment is unlikely. But this does not ensure AI safety, since a war between two such agents would be very destructive – consider nanoweapons.)

Super-intelligent agents. As the power of individual agents grows, they will reach human and, later, superhuman levels. They may even invest in self-improvement, but if many agents do this simultaneously, it will not give any one of them a decisive advantage.

Human safety in an environment of super-intelligent agents. There is a well-known strategy for staying safe in an environment of agents more powerful than you, who fight one another: make alliances with some of the agents, or become such an agent yourself.

A fourth super-intelligence? Such a distributed neural net super-intelligence may not be the last, if a quicker way of completing transactions between agents is found. One such way may be an ecosystem in which all agents are miniaturized. (This may also solve the Fermi paradox – any AI evolves to smaller and smaller sizes, and thus performs infinite calculations in finite external time, perhaps using an artificial black hole as an artificial Tipler Omega point, or femtotech, in the final stages.) John Smart's conclusions are similar.

Singularity: it could still happen around 2030, as predicted by von Foerster's law, and the main reason is the nature of the hyperbolic law and its underlying drivers: the growing number of agents and the growing intelligence of each agent.

Oscillation before the singularity: growth may become more and more unstable as we near the singularity, because of the rising probability of global catastrophes and other consequences of disruptive technologies. If so, we may never reach the singularity, dying off shortly before it, or oscillating near its “Schwarzschild sphere” – neither extinct, nor able to create a stable strong AI.

The super-intelligent environment still reaches a singularity point, but a point cannot be an environment, by definition. Oops. Perhaps an artificial black hole as the ultimate computer would help to resolve this paradox.

Ways of enhancing the intelligent environment: growth in the number of agents, in agent performance speed, in the inter-agent data exchange rate, in the intelligence of individual agents, and improvements in the principles by which agents are organized into working structures.

The main problem of an intelligent environment: chicken or egg? Who will win – the super-intelligent environment, or the super-agent? Any environment can be capped by an agent that submits tasks to it and uses its data. On the other hand, if there are at least two super-agents of this kind, they form an environment.

 

Problems with the model:

1)     The model excludes the possibility of black swans and other disruptive events, and assumes continuous and predictable acceleration, even after human level AI is created.

2)     The model is itself disruptive, as it predicts infinity within a very short time frame of 15 years from now, while expert consensus puts human-level AI in the 2060-2090 timeframe.

These two problems may somehow cancel each other out.

The model does include the idea of oscillation before the singularity, which may result in AI being postponed and the infinity being prevented. The singularity point inside the model is itself calculated from points in the remote past; if we take more recent points into account, we get a later date for the singularity, thus saving the model.

If we grant that, because of catastrophes and unpredictable events, the hyperbolic law will slow down, so that strong AI is created before 2100, we get a more plausible picture.

This may be similar to R. Hanson's “ems universe”, but here neural net-based agents are not equal to human emulations, which play only a minor role in this story.

Limitation of the model: it is only a model, so it will stop working at some point. Reality will surprise us, but reality doesn't consist only of black swans; models can work between them.

TL;DR: Science and evolution are super-intelligent environments governed by the same law of hyperbolic acceleration, which will soon give rise to a new super-intelligent environment consisting of neural net-based agents. The singularity will come after this, possibly as soon as 2030.

The Fable of the Burning Branch

-19 EphemeralNight 08 February 2016 03:20PM

 

Once upon a time, in a lonely little village, beneath the boughs of a forest of burning trees, there lived a boy. The branches of the burning trees sometimes fell, and the magic in the wood permitted only girls to carry the fallen branches of the burning trees.

One day, a branch fell, and a boy was pinned beneath. The boy saw other boys pinned by branches, rescued by their girl friends, but he remained trapped beneath his own burning branch.

The fire crept closer, and the boy called out for help.

Finally, a friend of his own came, but she told him that she could not free him from the burning branch, because she had already freed her other friend from beneath a burning branch, and he would be jealous if she did the same deed for anyone else. This friend left him where he lay, but she did promise to return and visit.

The fire crept closer, and the boy called out for help.

A man stopped, and gave the boy the advice that he'd get out from beneath the burning branch eventually if he just had faith in himself. The boy's reply was that he did have faith in himself, yet he remained trapped beneath the burning branch. The man suggested that perhaps he did not have enough faith, and left with nothing more to offer.

The fire crept closer, and the boy cried out for help.

A girl came along, and said she would free the boy from beneath the burning branch.

But no, her friends said, the boy was a stranger to her, was her heroic virtue worth nothing? Heroic deeds ought to be born from the heart, and made beautiful by love, they insisted. Simply hauling the branch off a boy she did not love would be monstrously crass, and they would not want to be friends with a girl so shamed.

So the girl changed her mind and left with her friends.

The fire crept closer. It began to lick at the boy's skin. A soothing warmth became an uncomfortable heat. The boy mustered his courage and chased the fear out of his own voice. He called out, but not for help. He called out for company.

A girl came along, and the boy asked if she would like to be friends. The girl's reply was that she would like to be friends, but that she spent most of her time on the other side of the village, so if they were to be friends, he must be free from beneath the burning branch.

The boy suggested that she free him from beneath the burning branch, so that they could be friends.

The girl replied that she had once freed a boy from beneath a burning branch who had also promised to be her friend, but as soon as he was free he never spoke to her again. So how could she trust the boy's offer of friendship? He would say anything to be free.

The boy tried frantically to convince her that he was sincere, that he would be grateful and try with all his heart to be a good friend to the girl who freed him, but she did not believe him, and turned away from him, and left him there to burn.

The fire crept closer and the boy whimpered in pain and fear as it spread from wood to flesh. He cried out for help. He begged for help. "Somebody, please!"

A man and a woman came along, and the man offered advice: he had once been trapped beneath a burning branch for several years. The fire was magic; the pain was only an illusion. Perhaps it was sad that he was trapped, but even so trapped, the boy might lead a fulfilling life. Why, the man remembered etching pictures into his branch, befriending passers-by, and making up songs.

The woman beside the man agreed, and told the boy that she hoped the right girl would come along and free him, but that he must not presume that he was entitled to any girl's heroic deed merely because he was trapped beneath a burning branch.

"But do I not deserve to be helped?" the boy pleaded, as the flames licked his skin.

"No, how wrong of you to even speak as though you do. My heroic deeds are mine to give, and to you I owe nothing," he was told.

"Perhaps I don't deserve help from you in particular, or from anyone in particular, but is it not so very cruel of you to say I do not deserve any help at all?" the boy pleaded. "Can a girl willing to free me from beneath this burning branch not be found and sent to my aid?"

"Of course not," he was told, "that is utterly unreasonable and you should be ashamed of yourself for asking. It is offensive that you believe such a girl may even exist. You've become burned and ugly, who would want to save you now?"

The fire spread, and the boy cried, screamed, and begged desperately for help from every passer by.

"It hurts it hurts it hurts oh why will no one free me from beneath this burning branch?!" he wailed in despair. "Anything, anyone, please! I don't care who frees me, I only wish for release from this torment!"

Many tried to ignore him, while others scoffed in disgust that he had so little regard for what a heroic deed ought to be. Some pitied him, and wanted to help, but could not bring themselves to bear the social cost, the loss of worth in their friends' and family's eyes, that would come of doing a heroic deed motivated, not by love, but by something lesser.

The boy burned, and wanted to die.

Another boy stepped forward. He went right up to the branch, and tried to lift it. The trapped boy gasped at the small relief from the burning agony, but it was only a small relief, for the burning branches could only be lifted by girls, and the other boy could not budge it. Though the effort was for naught, the first boy thanked him sincerely for trying.

The boy burned, and wanted to die. He asked to be killed.

He was told he had so much to live for, even if he must live beneath a burning branch. None were willing to end him, but perhaps they could do something else to make it easier for him to live beneath the burning branch? The boy could think of nothing. He was consumed by agony, and wanted only to end.

And then, one day, a party of strangers arrived in the village. Heroes from a village afar. Within an hour, one foreign girl came before the boy trapped beneath the burning branch and told him that she would free him if he gave her his largest nugget of gold.

Of course, the local villagers were shocked that this foreigner would sully a heroic deed by trafficking it for mere gold.

But the boy was too desperate to be shocked, and agreed immediately. She freed him from beneath the burning branch, and as the magical fire was drawn from him, he felt his burned flesh become restored and whole. He fell upon the foreign girl and thanked her and thanked her and thanked her, crying and crying tears of relief.

Later, he asked how. He asked why. The foreign girl explained that in her village, heroic virtue was measured by how much joy a hero brought, not by how much she loved the ones she saved.

The locals did not like the implication that their own way might not have been the best way, and complained to the chief of their village. The chief cared only about staying in the good graces of the heroes of his village, and so he outlawed the trading of heroic deeds for other commodities.

The foreign girls were chased out of the village.

And then a local girl spoke up, and spoke loud, to sway her fellow villagers. The boy recognized her. It was his friend. The one who had promised to visit so long ago.

But she shamed the boy, for doing something so crass as trading gold for a heroic deed. She told him he should have waited for a local girl to free him from beneath the burning branch, or else grown old and died beneath it.

To garner sympathy from her audience, she sorrowfully admitted that she was a bad friend for letting the boy be tempted into something so disgusting. She felt responsible, she claimed, and so she would fix her mistake.

The girl picked up a burning branch. Seeing what she was about to do, the boy begged and pleaded for her to reconsider, but she dropped the burning branch upon the boy, trapping him once more.

The boy screamed and begged for help, but the girl told him that he was morally obligated to learn to live with the agony, and never again voice a complaint, never again ask to be freed from beneath the burning branch.

"Banish me from the village, send me away into the cold darkness, please! Anything but this again!" the boy pleaded.

"No," he was told by his former friend, "you are better off where you are, where all is proper."

In the last extreme, the boy made a grab for his former friend's leg, hoping to drag her beneath the burning branch and free himself that way, but she evaded him. In retaliation for the attempt to defy her, she had a wall built around the boy, so that none would be able, even if one should want, to free him from beneath the burning branch.

With all hope gone, the boy broke and became numb to all possible joys. And thus, he died, unmourned.

The correct response to uncertainty is *not* half-speed

77 AnnaSalamon 15 January 2016 10:55PM

Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.

Once upon a time (true story), I was on my way to a hotel in a new city.  I knew the hotel was many miles down this long, branchless road.  So I drove for a long while.

After a while, I began to worry I had passed the hotel.

 

 

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

After a while, I realized: I was being silly!  If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.  

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1]
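The point generalizes beyond this one drive: for any fixed plan of the form "go N more miles, then turn around", the expected travel time is (expected distance)/speed, so it scales as 1/speed, and full speed dominates half speed under any probability assignment over where the hotel is. A minimal sketch, with made-up distances and probabilities:

```python
# For a fixed search plan ("drive N more miles forward, then turn around"),
# expected travel time scales as 1/speed, so full speed always beats half
# speed.  All numbers below are invented for illustration.
def expected_time(speed_mph, n_more_miles, p_ahead, dist_ahead, dist_behind):
    # If the hotel is ahead (within the N miles we plan to check),
    # we reach it directly.
    t_ahead = dist_ahead / speed_mph
    # If it is behind, we waste 2*N miles (out and back) before
    # covering the distance behind us.
    t_behind = (2 * n_more_miles + dist_behind) / speed_mph
    return p_ahead * t_ahead + (1 - p_ahead) * t_behind

full = expected_time(60, n_more_miles=5, p_ahead=0.5, dist_ahead=3, dist_behind=4)
half = expected_time(30, n_more_miles=5, p_ahead=0.5, dist_ahead=3, dist_behind=4)
print(f"full speed: {full:.3f} h, half speed: {half:.3f} h")
```

Halving the speed exactly doubles the expected time for the same plan; uncertainty should change N (the plan), never the speed.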

Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives).  For example:
  • I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it.  So, I sat there kind-of-writing it while also fretting about whether the task was correct.
    • (Solution:  Take a minute out to think through heuristics.  Then, either: (1) write the post at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
  • I wasn't sure (back in early 2012) that CFAR was worthwhile.  So, I kind-of worked on it.
  • An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work.  So I kind-of hung out with her while feeling bad and distracted about my work.
  • A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
  • Duncan reports that novice Parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
  • It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate of half-speed action... even when that action makes no sense under any probabilistic mixture of the two models.



You might try looking out for such examples in your life.


[1] Edited to add: The hotel example has received much nitpicking in the comments.  But: (A) the actual example was legit, I think.  Yes, stopping to think has some legitimacy, but driving slowly for a long time because uncertain does not optimize for thinking.  Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and also I was not squinting carefully at the scenery.  The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full-speed in some direction to full-speed in the other.  Also: (B) consider some of the other examples; "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case.
