
I created an AI Box Experiment Google Group (search for the "aibox" group) to host the discussion about this, including matchmaking and other arrangements.

To host the discussion about this, including matchmaking and other arrangements, I created an AI Box Experiment Google Group. As I said previously, I'm willing to play the AI; if anybody is interested, meet me there for further arrangements.

I offer to play the AI, provided that the gatekeeper honestly engages in the conversation.

You keep coming back to how much information an AI could derive from a very small measurement (the original example was an apple falling), and the last story was supposed to be an analogy to it, but the idea of an entire civilization's worth of physical evidence already being available makes the AI's job much easier. The original assertion of deriving modern physics from a falling apple looks ridiculous because you never specified the prior knowledge the AI had, or the amount of non-redundant information available in the falling-apple scenario.

If we are rigorous with the definitions, we end up with two measures: how efficiently an intelligence can extract new information from a given piece of evidence, and how efficiently it can update its own theories in the face of that evidence. I agree that a self-improving AI could reach the theoretical limits of efficiency in updating its own theories, but the efficiency of extracting information from an experiment depends on what the experiment measures and on the resolution of the measurements. The assertion that an AI could see an apple falling and theorize general relativity is meaningless without saying how much prior knowledge it has; in a tabula rasa state almost nothing could come from that observation, and it would need much more evidence before anything meaningful started to arise.

The resolution of the evidence is also very important: it's absurd to believe there are no local maxima in the theory-space search that the AI would favor simply because the resolution isn't sufficient to show those theories are dead wrong. The AI would have no way to accurately assess this (assuming it's unable to improve the resolution or manipulate the environment).

That's the essence of what I think is wrong with your belief about how much an AI could learn from certain forms of evidence: I agree with the idea, but your reasoning is much less formal than it should be, and it ends up looking like magical thinking. With sufficient resolution and a good set of sensors there's enough evidence today to believe that an AI could use a very small number of orthogonal experiments to derive all of modern science (I would bet the actual number is smaller than one hundred), but if the resolution is insufficient, no number of experiments will do.
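
A toy sketch of the resolution point, with made-up numbers: two theories that disagree only in the fourth decimal place of a predicted reading are indistinguishable to a Bayesian observer whose instrument resolves tenths, start to separate around the thousandths, and are settled decisively once the resolution passes the size of the disagreement.

```python
import math
import random

random.seed(0)

def posterior_theory_a(pred_a, pred_b, truth, resolution, n):
    """Start from even odds on theories A and B, update on n noisy,
    quantized readings, and return P(theory A | data)."""
    log_odds = 0.0  # log of P(A)/P(B)
    for _ in range(n):
        # Instrument noise and quantization both live at the resolution scale.
        y = truth + random.gauss(0.0, resolution)
        y = round(y / resolution) * resolution
        # Gaussian log-likelihood of the reading under each theory.
        ll_a = -((y - pred_a) ** 2) / (2 * resolution ** 2)
        ll_b = -((y - pred_b) ** 2) / (2 * resolution ** 2)
        log_odds += ll_a - ll_b
    return 1.0 / (1.0 + math.exp(-log_odds))

# The theories disagree only in the 4th decimal; the world follows theory A.
for res in (1e-1, 1e-3, 1e-5):
    p = posterior_theory_a(9.8001, 9.8004, 9.8001, res, n=100)
    print(f"resolution={res:g}: P(theory A | 100 readings) = {p:.3f}")
```

At the coarse resolution the posterior stays pinned near 0.5 no matter how many readings accumulate, which is the local-maximum trap in miniature.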

I've been reading these last posts on Science vs. Bayes and I really don't get the conflict. Obviously Bayesian reasoning supersedes falsifiability as a way to analyze evidence, but there's no contradiction. As with relativity vs. Newtonian mechanics, there are thresholds we have to cross before the failures of Science show up, but in many situations plain Science works effectively.

The New Scientist take is even worse: the idea that we need to ditch falsifiability and use Bayes is idiotic. It's like saying that binary logic should be discarded because we can use probabilities instead of zero and one. Falsifiability is a special case of Bayes; we can't have Bayes without falsifiability (just as we can't have addition on the naturals while ruling out 2+2=4). The people who argue this don't understand the scope of Bayes.
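
To spell out the "special case" claim: in the Bayes update below, a hypothesis that assigns zero probability to an observed outcome gets posterior zero regardless of its prior, which is exactly falsification recovered as a limiting case.

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}
           \;=\; \frac{0 \cdot P(H)}{P(E)} \;=\; 0
\qquad \text{whenever } P(E \mid H) = 0 \text{ and } E \text{ is observed.}
```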

WRT the multiverse, IMHO we have to separate the interpretation of a theory from the theory itself. If the theory (which is testable, falsifiable, etc.) holds against the evidence and one of its results is the existence of a multiverse, then we have to accept the existence of the multiverse. If it isn't one of the results, but is one possible interpretation of how the theory "really works", then we are in the realm of philosophy and we can spend thousands of years arguing either way without going forward. In most QM theories there's no clear separation of the two, so people attach themselves to the interpretations instead of using the results. If we have two hypotheses that explain the same phenomena, we have three possible cases:

  1. they're equal up to isomorphism (which means it doesn't matter which one we choose, other than for convenience);
  2. one is simpler than the other (by whatever criterion of complexity we want to use);
  3. both explain more than the phenomena at hand.

Number 1 is a no-brainer. Number 3 is the most common situation: the evidence points either way, and new evidence is necessary to discriminate between the two. We can use Bayes to assess the probability of each one being "the right one", but if the two theories don't contradict each other, then there's a smaller theory inside each that falls into case number 1. Number 2 is the most problematic, because a plain complexity assessment doesn't guarantee that we are picking the right one. The problem lies in the evidence available: there's no way to know if we have enough evidence to rule either out. Just because an equation is simpler doesn't mean it's correct; perhaps our data set simply doesn't cover the cases where the theories diverge. Again, it may be the case that the simpler theory is isomorphic to a subset of the larger theory.
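
A toy illustration of the case-2 trap, with synthetic data and made-up numbers: the true law is cubic, but the observations only cover a window where it is nearly linear, so a complexity-penalized comparison (BIC here) picks the line, and nothing in the data reveals the mistake until you extrapolate.

```python
# Synthetic sketch of case 2: the true law is cubic, but we only sample a
# narrow window where it is nearly linear, so BIC favors the line.
import numpy as np

rng = np.random.default_rng(42)
true = lambda x: 0.02 * x**3 + x            # the "larger" theory
x = np.linspace(-1, 1, 40)                  # narrow observation window
y = true(x) + rng.normal(0, 0.05, x.size)   # noisy measurements

def bic_fit(degree):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((np.polyval(coeffs, x) - y) ** 2)
    k = degree + 1                           # fitted parameters
    return x.size * np.log(rss / x.size) + k * np.log(x.size), coeffs

bic_line, line = bic_fit(1)
bic_cubic, cubic = bic_fit(3)
print(f"BIC line: {bic_line:.1f}, BIC cubic: {bic_cubic:.1f}")  # line wins

# Outside the sampled window, the simpler theory is dead wrong:
print(f"at x=10: line predicts {np.polyval(line, 10):.1f}, "
      f"truth is {true(10):.1f}")
```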

The only argument worth having is whether the multiverse is a result or an interpretation, in the strictest sense of the word: we can't say it's an interpretation assuming that X and Y hold, because then it's an interpretation of QM + X + Y. AFAIK every "interpretation" of QM extends the assumptions in a particular direction. Personally I find the multiverse interpretation cleaner and mathematically simpler, and I would bet my money on it.

On your points of departure: (1) shows how problematic academia is. I think the academic model is a dead end; we should value rationality more than the quantity of papers published, and the politics of the whole thing is far too inefficient. (2) It won't be enough, because our culture values rationality much less than almost anything else. Even without Bayesian reasoning, plain old Science rules out the Bible: you can either believe in logic or in the Bible. One of the best calculus professors I had was a fervent Adventist. IMO our best strategy is simply to outsmart the irrationalists; our method is proven and yields much better results, and we just need to keep compounding it until the Singularity ;) (3) You're dead wrong (in the example). Many experiments besides seeing an apple fall are necessary to arrive at special relativity. Actually, a Bayesian superintelligence could stay trapped in a local maximum for a long time until the "right" set of experiments came along. We have a history of successes in science, but there's also a long list of known failures, let alone the unknown ones.

TGGP: Eliezer referenced the book (the Wikipedia URL behind the "real" link; search for the phrase "Is Idang Alibi about to take a position on the real heart of the uproar?"). I thought everybody followed the links before commenting ;). Anyway, I assume that if something is referenced, its discussion is on topic.

Regarding their data, we can't just remove the data they fudged; we need to redo the analysis with the original data. We can't just discard data because it doesn't fit our conclusions. Using their raw data without the fudging, we are left with a low correlation and many data points off the curve.
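
A synthetic sketch of why the fudging matters (the data below are randomly generated, not Lynn and Vanhanen's): if missing values are filled in from the very trend being tested, the measured correlation inflates automatically.

```python
# Synthetic demo: imputing missing values from the fitted trend inflates r.
import numpy as np

rng = np.random.default_rng(0)
n = 80
x = rng.normal(100, 10, n)                  # predictor (e.g. a test score)
y = 0.2 * x + rng.normal(0, 15, n)          # weak true relationship

missing = rng.random(n) < 0.4               # 40% of y-values "unavailable"
r_true = np.corrcoef(x[~missing], y[~missing])[0, 1]

# The "fudge": estimate missing y from the regression on the observed points.
slope, intercept = np.polyfit(x[~missing], y[~missing], 1)
y_fudged = y.copy()
y_fudged[missing] = slope * x[missing] + intercept
r_fudged = np.corrcoef(x, y_fudged)[0, 1]

print(f"correlation, observed data only:        {r_true:.2f}")
print(f"correlation after imputing from trend:  {r_fudged:.2f}")  # higher
```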

Ditto for any other studies. I'm highly skeptical of sociology and psychology papers because (again, IME) they always use very bad statistics. Most assume a Gaussian or Poisson distribution without even showing that the process generating the data has the right properties. The measurement process is highly subjective, and there's no analysis of the deviation of individual measurements, so they never properly find the actual standard deviation of their data. If one wants to aggregate studies, one must first show that the measurement process of each study is the same (in the studies mentioned in your "predictive power" link this is false: at least two Lynn studies use population samples with different properties, and another couple use different IQ tests); otherwise we are mixing unrelated hypotheses.
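
A small sketch of the aggregation problem (synthetic numbers): pool two studies that measure the same latent trait with differently calibrated tests, and the combined sample acquires a mean shift and extra spread that exist in neither study.

```python
# Synthetic demo: pooling studies with differently calibrated instruments
# manufactures differences that describe the instruments, not the trait.
import numpy as np

rng = np.random.default_rng(1)
latent_a = rng.normal(100, 5, 500)   # study A: underlying trait
latent_b = rng.normal(100, 5, 500)   # study B: same underlying trait

score_a = latent_a                   # study A's test is well calibrated
score_b = 0.8 * latent_b + 30       # study B's test: other scale and offset

pooled = np.concatenate([score_a, score_b])
print(f"study A: mean {score_a.mean():.1f}, sd {score_a.std():.1f}")
print(f"study B: mean {score_b.mean():.1f}, sd {score_b.std():.1f}")
print(f"pooled:  mean {pooled.mean():.1f}, sd {pooled.std():.1f}")
```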

I'm highly skeptical of IQ measurement because it's too subjective. Measuring the same individual over and over at long intervals, we get different results, but we shouldn't. A physicist wouldn't use a mass-measurement process that depended on subjective factors (e.g. whether the measured object is pretty or the time of measurement is jinxed); in the same way, we shouldn't use a measure of mental capacity that is highly dependent on stress (which has no objective measurement process) or emotional state. Here one of the better approaches would be to take many different measurements of each individual and aggregate the data with Monte Carlo analysis to find the probability of each result. We can't just fudge the data, discard samples we don't like, and use a subjective methodology; otherwise it isn't science. When a physicist does an experiment he has a theory in mind, so he either already has an equation or ends up discovering one. The equation must account for all relevant variables, and the theory must explain why the other variables (e.g. the wind speed in Peking) don't matter. "IQ and the Wealth of Nations" fails to show that the other factors influencing GDP are irrelevant to the IQ correlation; that alone discredits the results.
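
A minimal sketch of the Monte Carlo suggestion (the scores are hypothetical): given several noisy test sittings per individual, resample them to get a distribution over the underlying score instead of trusting a single point value.

```python
# Sketch: bootstrap repeated test scores to get a distribution over an
# individual's underlying score, rather than trusting a single sitting.
import numpy as np

rng = np.random.default_rng(7)
scores = np.array([112, 103, 118, 99, 108])  # hypothetical repeat sittings

boot = rng.choice(scores, size=(10_000, scores.size), replace=True)
means = boot.mean(axis=1)                    # Monte Carlo over resamples

lo, hi = np.percentile(means, [2.5, 97.5])
print(f"point estimate: {scores.mean():.1f}")
print(f"95% interval for the underlying score: [{lo:.1f}, {hi:.1f}]")
# The interval spans roughly +/- 6 points: single-sitting comparisons
# finer than that are noise.
```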

Correlation is the most overused statistical tool. It's useful for spotting patterns, but unless you have a theory to explain the results and make actual predictions, it's irrelevant as far as the scientific method is concerned. If we ignore this, anything can be "proven".
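
A classic demonstration with purely random data: two independent random walks, which by construction have nothing to do with each other, routinely produce impressive in-sample correlations, which is exactly why a correlation without a predictive theory proves nothing.

```python
# Two independent random walks often show large spurious correlations.
import numpy as np

for seed in range(5):
    rng = np.random.default_rng(seed)
    walk_a = np.cumsum(rng.normal(size=500))  # independent by construction
    walk_b = np.cumsum(rng.normal(size=500))
    r = np.corrcoef(walk_a, walk_b)[0, 1]
    print(f"seed {seed}: r = {r:+.2f}")  # often far from zero, yet meaningless
```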

About the commenting program:

  1. Why require both javascript and a captcha to prevent spamming? Both are very bad for accessibility.
  2. Moving to another URL is quite bizarre. Additional negative points if, after submitting, it doesn't show any result at all.
  3. Combining 1 & 2 into a javascript requirement spanning two (seemingly) unrelated URLs makes the whole process much more complicated than it needs to be.

I browse with javascript disabled (for security reasons) and can usually post on most blogs. I also write software for a living, so I know that none of this is required. Please consider improving your blog software to something simpler and less restrictive.

IME most people only think individual IQ differences are OK because they believe other qualities compensate for the difference. If they say some person has a higher IQ, they usually (at least implicitly) question his social skills, financial success, physical prowess, etc. Also, they always talk about the much smarter people, not about the 50% below average, conveying the idea that the difference is due to the genius's unusually high IQ, not because most people are stupid in comparison. OTOH, group comparisons usually imply that one group is smarter and the other dumber, by comparing the groups' average values. While race is a sensitive issue, if we swap race for gender, economic status, birthplace, weight, etc., the controversy is pretty much the same.

About the IQ vs. GDP "controversy": both Lynn and Vanhanen should be ashamed. They're not even decent scientists; their methodology is flawed, and they manipulated the data to fit their conclusions! You can't say "I don't have the real data, so I'll just put a number here and argue that it's true because I say so" and expect it to be taken at face value. It's not an experiment if it isn't reproducible (which rules out almost everything except biology, physics and chemistry ;), and you can't reproduce it if you force the data to fit your pattern.

Now, speaking about IQ itself: does it make sense to talk about it? Is there (at least) a significant correlation between IQ and any useful metric? Can we say that IQ improves our utility, for example? Are we (as a scientific community) sure that IQ measurement isn't just self-fulfilling (i.e. it measures what high-IQ people have, but not much more)? I know of the (methodologically valid) studies showing that people with higher IQs earn more, but those studies don't show whether this is a direct result of IQ (i.e. they're more effective) or an indirect result of employers favoring people with high IQs (or SATs). Other (methodologically valid) studies show that IQ doesn't correlate with financial growth (i.e. becoming richer), because people's investment and saving habits don't correlate with IQ.

IMO IQ is a poor metric: it can't give reliable predictions about the things that really matter (e.g. GDP, personal finance, scientific achievement). I fail to see how it's better than measuring how fast people can divide long numbers; it may be impressive and have a couple of use cases, but mostly it doesn't matter. IMNSHO it's telling that the people trying to correlate IQ with other values always use bad methodology and end up trying to convince the reader that correlation (i.e. their results) equals causation (i.e. their hypothesis).
