Latex support?
Apologies, since I am almost sure this has been brought up before. Are there any plans for some sort of LaTeX or MathML functionality on the site?
Naive Decision Theory
I am posting this because I'm interested in decision theory for self-modifying agents, but I'm too lazy to read up on the existing posts. I want to see a concise justification for why a sophisticated decision theory would be needed to implement an AGI. So I'll present a 'naive' decision theory, and I want to know why it is unsatisfactory.
The one condition of the naive decision theory is that the decision-maker is the only agent in the universe capable of self-modification. This will probably suffice for the production of the first Artificial General Intelligence (since humans aren't actually all that good at self-modification).
Suppose that our AGI has a probability model for predicting the state of the universe at time T (e.g. T = 10 billion years), conditional on what it knows and on the one decision it has to make: how it should rewrite its code at time zero. We suppose it can rewrite its code instantly, and that the code is limited to X bytes. So the AGI has to maximize utility at time T over all programs of X bytes. Supposing it can simulate its utility at the 'end state of the universe' conditional on which program it chooses, why can't it just choose the program with the highest utility? Implicit in our set-up is that the program it chooses may (and very likely will) have the capacity to self-modify again, but we're assuming that our AGI's probability model accounts for when and how it is likely to self-modify. Difficulties with infinite recursion loops should be avoidable if our AGI backtracks from the end of time.
Of course our AGI will need a probability model for predicting what a program for its behavior will do without having to simulate or even completely specify the program. To me, that seems like the hard part. If this is possible, I don't see why it's necessary to develop a specific theory for dealing with convoluted Newcomb-like problems, since the above seems to take care of those issues automatically.
Colonization models: a tutorial on computational Bayesian inference (part 2/2)
Recap
Part 1 was a tutorial on programming a simulation of the emergence and development of intelligent species in a universe 'similar to ours.' In part 2, we will use the model developed in part 1 to evaluate different explanations of the Fermi paradox. Keep in mind, however, that the purpose of this two-part series is to showcase useful methods, not to obtain serious answers.
We summarize the model given in part 1:
SIMPLE MODEL FOR THE UNIVERSE
- The universe is represented by the set of all points in Cartesian 4-space at Euclidean distance 1 from the origin (that is, the 3-sphere). The distance between two points is taken to be the Euclidean distance (an approximation to the spherical distance which is accurate at small scales).
- The lifespan of the universe consists of 1000 time steps.
- A photon travels s=0.0004 units in a time step.
- At the end of each time step, there is a chance that a Type 0 civilization will spontaneously emerge in an uninhabited region of space. The base rate for civilization birth is controlled by the parameter a, but this base rate is multiplied by the proportion of the universe which remains uncolonized by Type III civilizations.
- In each time step, a Type 0 civilization has a probability b of self-destructing, a probability c of transitioning to a non-expansionist Type IIa civilization, and a probability d of transitioning to a Type IIb civilization.
- Observers can detect all Type II and Type III civilizations within their past light cones.
- In each time step, a Type IIb civilization has a probability e of transitioning to an expansionist Type III civilization.
- In each time step, all Type III civilizations colonize space in all directions, expanding their sphere of colonization by k * s units per time step.
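For concreteness, one time step of the dynamics above can be sketched in code. This is a minimal sketch assuming the parameter names from the bullet list (a, b, c, d, e, k, s); the data structures and function names are my own choices, not the actual code from Part 1:

```python
import random

S = 0.0004      # distance a photon travels per time step
T_STEPS = 1000  # lifespan of the universe in time steps

def random_point_on_3_sphere():
    """Uniform point on the unit 3-sphere: normalize a 4D Gaussian draw."""
    v = [random.gauss(0, 1) for _ in range(4)]
    norm = sum(x * x for x in v) ** 0.5
    return tuple(x / norm for x in v)

def step(civs, a, b, c, d, e, k, colonized_fraction):
    """Advance every civilization by one time step and maybe spawn a new one."""
    survivors = []
    for civ in civs:
        r = random.random()
        if civ["type"] == 0:
            if r < b:
                continue                    # self-destructs
            elif r < b + c:
                civ["type"] = "IIa"         # non-expansionist Type II
            elif r < b + c + d:
                civ["type"] = "IIb"         # may later turn expansionist
        elif civ["type"] == "IIb" and r < e:
            civ["type"] = "III"             # becomes expansionist
        elif civ["type"] == "III":
            civ["radius"] += k * S          # expand the colonization sphere
        survivors.append(civ)
    # Spontaneous birth, damped by the colonized fraction of the universe.
    if random.random() < a * (1 - colonized_fraction):
        survivors.append({"type": 0, "pos": random_point_on_3_sphere(),
                          "radius": 0.0})
    return survivors
```

A full simulation would loop `step` for T_STEPS iterations and recompute `colonized_fraction` from the Type III spheres each step; that bookkeeping is omitted here.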
Section III. Inferential Methodology
This section assumes, without apology, that the reader has a solid grasp of the principles of Bayesian reasoning. Readers following the tutorial from Part 1 may prefer to skip ahead to Section IV first.
To dodge the philosophical controversies surrounding anthropic reasoning, we will employ an impartial observer model. Like Jaynes, we introduce a robot capable of Bayesian reasoning; here, however, we imagine that the robot is instantaneously created and injected into the universe at a random point in space and at a time step chosen uniformly from 1 to 1000 (and the robot is aware that it was created via this mechanism). We limit ourselves to asking what inferences this robot would make in a given situation. Interestingly, the inferences made by this robot will turn out to be quite similar to those that would be made under the self-indication assumption.
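The robot's creation mechanism is easy to sample directly. A minimal sketch (function and variable names are my own): a uniform point on the 3-sphere can be obtained by normalizing a 4-dimensional Gaussian draw, and the time step is uniform on 1 to 1000:

```python
import random

def inject_robot(t_max=1000):
    """Sample the impartial observer's creation: a uniform random point
    on the unit 3-sphere (a normalized 4D Gaussian draw) and a time step
    chosen uniformly from 1 to t_max."""
    v = [random.gauss(0, 1) for _ in range(4)]
    norm = sum(x * x for x in v) ** 0.5
    position = tuple(x / norm for x in v)
    time_step = random.randint(1, t_max)
    return position, time_step
```

Normalizing a spherically symmetric Gaussian is the standard trick for uniform sampling on a sphere in any dimension; rejection sampling from the enclosing cube would also work but wastes more draws in 4-space.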
Colonization models: a programming tutorial (Part 1/2)
Introduction
Are we alone in the universe? How likely is our species to survive the transition from a Type 0 to a Type II civilization? The answers to these questions would be of immense interest to our race; however, we have few tools to reason about these questions. This does not stop us from wanting to find answers to these questions, often by employing controversial principles of inference such as 'anthropic reasoning.' The reader can find a wealth of stimulating discussion about anthropic reasoning at Katja Grace's blog, the site from which this post takes its inspiration. The purpose of this post is to give a quantitatively oriented approach to anthropic reasoning, demonstrating how computer simulations and Bayesian inference can be used as tools for exploration.
The central mystery we want to examine is the Fermi paradox: the fact that
- we are an intelligent civilization
- we cannot observe any signs that other intelligent civilizations ever existed in the universe
One explanation for the Fermi paradox is that we are the only intelligent civilization in the universe. A far more chilling explanation is that intelligent civilizations emerge quite frequently, but that all other intelligent civilizations that have come before us ended up destroying themselves before they could manage to make their mark on their universe.
We can reason about which of the above two explanations is more likely if we have the audacity to assume a model for the emergence and development of civilizations in universes 'similar to ours.' In such a model, it is usually useful to distinguish different 'types' of civilizations. Type 0 civilizations have levels of technology similar to our own. If a Type 0 civilization survives long enough and accumulates enough scientific knowledge, it can transition to a Type I civilization--one which has attained mastery of its home planet. A Type I civilization, over time, can transition to a Type II civilization by colonizing its solar system. We suppose that a nearby civilization would have to have reached Type II for its activities to be prominent enough for us to detect them. In the original terminology, a Type III civilization is one which has mastery of its galaxy, but in this post we take it to mean something else.
The simplest model for the emergence and development of civilizations would have to specify the following:
- the rate at which intelligent life appears in universes similar to ours;
- the rate at which these intelligent species transition from Type 0 to Type II, Type III civilizations--or self-destruct in the process;
- the visibility of Type II and Type III civilizations to Type 0 civilizations elsewhere;
- the proportion of advanced civilizations which ultimately adopt expansionist policies;
- the speed at which those Type III civilizations can expand and colonize the universe.
In the model we propose in this post, the above parameters are held constant throughout the entire history of the universe. The importance of the model is that, given a particular specification of the parameters, we can apply Bayesian inference to see how well the model explains the Fermi paradox. The idea is to simulate many different histories of the universe for a given set of parameters, so as to find the expected number of observers who observe the Fermi paradox under that specification. More details about Bayesian inference are given in Part 2 of this tutorial.
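One way to organize such a specification is as a single parameter object. The field names below are my own labels for the quantities listed above, not names from the tutorial's code:

```python
from dataclasses import dataclass

@dataclass
class ModelParams:
    # My own labels for the model's constant parameters.
    birth_rate: float             # rate at which intelligent life appears
    self_destruct_rate: float     # chance a Type 0 civ destroys itself
    transition_rate: float        # rate of Type 0 -> Type II/III transitions
    visibility_range: float       # how far Type II/III civs can be detected
    expansionist_fraction: float  # share of advanced civs that expand
    expansion_speed: float        # colonization speed of Type III civs
```

Bundling the parameters this way makes it easy to sweep over many specifications when estimating how often simulated observers see the Fermi paradox.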
This post is targeted at readers who are interested in simulating the emergence and expansion of intelligent civilizations in 'universes similar to ours' but who lack the programming knowledge to code these simulations. In this post we will guide the reader through the design and production of a relatively simple universe model and the methodology for doing 'anthropic' Bayesian inference using the model.
Future Filters [draft]
See Katja Grace's article: http://hplusmagazine.com/2011/05/13/anthropic-principles-and-existential-risks/
There are two comments I want to make about the above article.
First: the resolution to God's Coin Toss seems fairly straightforward. I argue that the following scenario is formally equivalent to 'God's Coin Toss':
"Dr. Evil's Machine"
Dr. Evil has a factory for making clones. The factory has 1000 separate, identical rooms. Every day, a clone is produced in each room at 9:00 AM. However, there is a 50% chance of a malfunction, in which case 900 of the clones suddenly die by 9:30 AM; the remaining 100 are healthy and notice nothing. At the end of the day, Dr. Evil ships off all the clones that were produced and restores the rooms to their original state.
You wake up at 10:00 AM and learn that you are one of the clones produced in Dr. Evil's factory, and you learn all of the information above. What is the probability that the machine malfunctioned today?
In this reformulation, the answer is clear from Bayes' rule. Let M be the event that the machine malfunctioned, and S the event that you (a given clone) are alive at 10:00 AM. From the information given, we have
P(M) = 1/2
P(~M) = 1/2
P(S|M) = 1/10
P(S|~M) = 1
Therefore,
P(S) = P(S|M) P(M) + P(S|~M)P(~M) = (1/2)(1/10) + (1/2)(1) = 11/20
P(M|S) = P(S|M) P(M)/P(S) = (1/20)/(11/20) = 1/11
That is, given the information you have, you should conclude that the probability that the machine malfunctioned is 1/11.
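The arithmetic above can be checked mechanically with exact rational arithmetic:

```python
from fractions import Fraction

# M = the machine malfunctioned; S = you (a given clone) are alive at 10:00 AM.
p_m = Fraction(1, 2)
p_s_given_m = Fraction(100, 1000)  # only 100 of 1000 clones survive a malfunction
p_s_given_not_m = Fraction(1)

p_s = p_s_given_m * p_m + p_s_given_not_m * (1 - p_m)
p_m_given_s = p_s_given_m * p_m / p_s

print(p_s)          # 11/20
print(p_m_given_s)  # 1/11
```

Using `Fraction` rather than floats keeps the result exact, so the 1/11 falls out with no rounding.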
The second comment concerns Grace's reasoning about future filters.
I will assume that the following model is a fair representation of Grace's argument about relative probabilities for the first and second filters.
Future Filter Model I
Given: universe with N planets, T time steps. Intelligent life can arise on a planet at most once.
At each time step:
- each surviving intelligent species becomes permanently visible to all other species with probability c (the third filter probability)
- each surviving intelligent species self-destructs with probability b (the second filter probability)
- each virgin planet produces an intelligent species with probability a (the first filter probability)
Suppose N=one billion, T=one million. Put uniform priors on a, b, c, and the current time t (an integer between 1 and T).
Your species appeared on your planet at an unknown time step t_0. The current time t is also unknown. At the current time, no species has become permanently visible anywhere in the universe. Conditioned on this information, what is the posterior density of the first filter parameter a?
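One way to approximate this posterior is rejection sampling: draw (a, b, c, t) from the priors, simulate the filter model forward, and keep only draws consistent with the observation (your species exists; no species is visible). The sketch below uses drastically smaller N and T than the stated one billion and one million, purely to keep the illustration fast; all names are my own:

```python
import random

def sample_posterior_a(n_samples=1000, n_planets=100, t_max=30):
    """Rejection-sampling approximation to the posterior of the first
    filter parameter a, under uniform [0,1] priors on a, b, c and a
    uniform prior on the current time t."""
    accepted = []
    for _ in range(n_samples):
        a, b, c = random.random(), random.random(), random.random()
        t = random.randint(1, t_max)
        virgin, species, visible = n_planets, 0, False
        for _ in range(t):
            # Each surviving species may become permanently visible ...
            for _ in range(species):
                if random.random() < c:
                    visible = True
            if visible:
                break
            # ... or self-destruct ...
            species = sum(1 for _ in range(species) if random.random() >= b)
            # ... and each virgin planet may produce a new species.
            births = sum(1 for _ in range(virgin) if random.random() < a)
            virgin -= births
            species += births
        # Keep draws matching the observation: we exist, no one is visible.
        if not visible and species >= 1:
            accepted.append(a)
    return accepted
```

The accepted values of a form a sample from the (approximate) posterior; a histogram of them estimates the posterior density. At the paper's actual scales a naive loop like this is hopeless, and one would vectorize or work with expected counts instead.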
Talk about your research
There must be quite a few undergrad/graduate/post-doc/???-level researchers on LessWrong. I'm interested in hearing about your work. I'll post about myself in the comments.
FAI vs network security
All plausible scenarios of AGI disaster involve the AGI gaining access to resources "outside the box." Therefore there are two ways of preventing AGI disaster: one is preventing unfriendly AGI, which is the "FAI route," and the other is preventing the possibility of a rogue AGI gaining control of too many external resources--the "network security route." It seems to me that the network security route--an international initiative to secure networks and computing resources against cyber attacks--is the more realistic solution for preventing AGI disaster. Network security protects against intentional human-devised attacks as well as against rogue AGI, so such measures are easier to motivate and therefore more likely to be implemented successfully. Moreover, the development of FAI theory does not prevent the creation of unfriendly AIs. This is not to say that FAI should not be pursued at all, but it can hardly be claimed that the development of FAI is the top priority (as has been stated a few times by users of this site).
Hive mind scenario
In a conceivable future, humans gain the technology to eliminate physical suffering and to create interfaces between their own brains and computing devices--interfaces sufficiently advanced that the border between brain and computer practically vanishes. Humans are able to access all public knowledge as if they 'knew' it themselves, and they can also upload their own experiences to this 'web' in real time. The members of this network would lose part of their individuality, since an individual's unique set of skills and experiences is a foundational component of identity.
However, although knowledge can be shared at low cost, computing power will remain bounded and valuable. Even if all other psychological needs are pacified, humans will probably still compete for access to computing power.
But what other elements of identity might still remain? Is it reasonable to say that individuality in such a hive mind would reduce to differing preferences for the use of computational power?
Learning through exercises
One of the best aspects of mathematics is that a student can reconstruct much of it on their own, given the relevant axioms, definitions, and some hints. Indeed, this style of education is usually encouraged for training mathematicians. Relatedly, a mathematician can give a quick impression of the relevance of their particular field by choosing an interesting problem which can be solved efficiently using the methods of that specialty.
To what extent do other academic fields share this property? How well can physics, chemistry, biology, etc. be taught "through exercises"?
EDIT: Note that the "exercises" I am referring to are not just matters of applying learned principles for solving random problems but rather are devices to lead the student to "rediscover" important principles in the field.