Smarter humans, not artificial intelligence
I'm writing this article to explain some of the facts that have convinced me that increasing average human intelligence, through traditional breeding and genetic manipulation, is likelier to reduce existential risks in the short and medium term than studying AI risks, while providing all kinds of side benefits.
Intelligence is useful for achieving goals, including avoiding existential risks. Higher intelligence is associated with improvements in many diverse life outcomes, from health to wealth. Intelligence may have synergistic effects on economic growth, where average levels of intelligence matter more for wealth than individual levels. Intelligence is a polygenic trait with strong heritability. Sexual selection in the Netherlands has resulted in extreme increases in average height over the past century: sexual selection for intelligence might do the same. People already select partners for intelligence, and egg donors are advertised by SAT score.
AI safety research seems to be intelligence-constrained. Very few of those capable of making a contribution are aware of the problem or find it interesting. The Berkeley-MIRI seminar has increased the pool of those aware of the problem, but the total number of AI safety researchers remains small. So far, very foundational problems remain to be solved, and this is likely to take a very long time: it is not unusual for mathematical fields to take centuries to develop. Furthermore, we can work on both strategies at once and expect spillover from one into the other, as a higher intelligence baseline translates into an increase in the right tail of the distribution.
How could we accomplish this? One idea, invented by Robert Heinlein as far as I know, is to subsidize marriages between people of higher-than-usual intelligence, and their having children. This idea has the benefit of being entirely non-coercive. It is, however, unclear how large these subsidies would need to be to influence behavior, and given the already strong returns to intelligence in life outcomes, unclear whether they could influence behavior much further.
Another idea would be to conduct genetic studies to find genes which influence intelligence, and then modify genomes accordingly. This plan suffers from illegality, limited knowledge of the genetic factors of intelligence, and the absence of effective means of genome editing (CRISPR has been tried on human embryos: more work is needed). However, the results of this work could be sold for money, opening the possibility of using VC money to develop it. Illegality can be dealt with by influencing jurisdictions. However, the impact is likely to be limited, because the cost of these methods will prevent them from having population-wide influence; instead they would become yet another advantage the affluent attempt to purchase. These techniques are likely to have vastly wider application, and so will be commercially developed anyway.
In conclusion, genetic modification of humans to increase intelligence looks practical in the near term, and it may be worth diverting some effort to investigating it further.
Reflexive self-processing is literally infinitely simpler than the many-worlds interpretation
I recently stumbled upon the concept of "reflexive self-processing", which is Chris Langan's "Reality Theory".
I am not a physicist, so if I'm wrong, or if someone can explain this better, or wants to break out the math here, that would be great.
The idea of reflexive self-processing is that, in the double-slit experiment for example, the path the photon takes is calculated by taking the entire state of the universe into account when the wave function is solved.
1. Isn't this already implied by the math of how we know the wave function works? Are there any alternate theories that are even consistent with the evidence?
2. Don't we already know that the entire state of the universe is used to calculate the behavior of particles? For example, doesn't every body produce a gravitational field which acts, with some magnitude of force, at any distance, such that in order to calculate the trajectory of a particle to the nth decimal place, you would need to know about every other body in the universe?
This is, literally, infinitely more parsimonious than the many-worlds theory, which posits that an infinite number of entire universes' worth of complexity is created at the juncture of every little physical event where multiple paths are possible. Supporting MWI because of its simplicity was always a really horrible argument for this reason, and it seems like we do have a sensible, consistent theory in this reflexive self-processing idea, which is infinitely simpler, and therefore should be infinitely preferred by a rationalist over MWI.
Crazy Global Warming Solution Ideas
Mine was to use tax policy to incentivize companies to make all their packaging shiny and white, incentivize people to litter, and disincentivize everybody from recycling.
My friend's was to use a giant rocket to push the Earth farther away from the sun.
Summoning the Least Powerful Genie
Stuart Armstrong recently posted a few ideas about restraining a superintelligent AI so that we can get useful work out of it. They are based on another idea of his, reduced impact. This is a quite elaborate and complicated way of limiting the amount of optimization power an AI can exert on the world. Basically, it tries to keep the AI from doing things that would make the world look too different from how it already looks.
First, why go to such great lengths to limit the optimization power of a superintelligent AI? Why not just not make it superintelligent to begin with? We only really want human level AI, or slightly above human level. Not a god-level being we can't even comprehend.
We can control the computer it is running on, after all. We can just give it slower processors, less memory, and perhaps even purposely throttle its code, e.g. by restricting the size of its neural network, or other parameters that affect its intelligence.
The counterargument to this is that it might be quite tricky to limit AI intelligence. We don't know how much computing power is enough. We don't know where "above human level" ends and "dangerous superintelligence" begins.
The simplest way would be to just run copies of the AI repeatedly, increasing its computing power each time, until it solves the problem.
I have come up with a more elegant solution: put a penalty on the amount of computing power the AI uses, written into its utility function. The more computing power - and therefore intelligence and optimization - the AI uses, the more it is penalized. So it has an incentive to be as stupid as possible, using only the intelligence necessary to solve the problem.
But we do want the AI to use as many computational resources as it needs to solve the problem - just no more. So the penalty should be conditional on actually solving the problem it is given.
If the solution is probabilistic, then the penalty is only applied after reaching a plan that has a certain probability of success. This might need to be measured by another AI which is not resource-constrained, but which only does prediction.
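To make the mechanism concrete, here is a minimal sketch of the selection rule this describes. It assumes a hypothetical predictor function standing in for the second, prediction-only AI, and candidate plans annotated with the resources spent producing them; all of the names are illustrative, not part of anyone's real system.

```python
# Minimal, hypothetical sketch: the resource penalty only matters once a plan
# actually clears the success threshold, as judged by a separate, non-agentic
# predictor. Names (choose_plan, predictor, candidates, resources_used) are
# made up for illustration.

SUCCESS_THRESHOLD = 0.9  # minimum predicted probability for a plan to count as a solution

def choose_plan(candidates, predictor):
    """Pick the cheapest plan the predictor says is likely enough to work.

    candidates: list of (plan, resources_used) pairs; resources_used could be
    CPU time, memory, or any other resource we decide to penalize.
    predictor:  function mapping a plan to a predicted success probability.
    """
    viable = [(plan, cost) for plan, cost in candidates
              if predictor(plan) >= SUCCESS_THRESHOLD]
    if not viable:
        return None  # no plan is good enough yet; keep searching
    # Among plans that already solve the problem, prefer the one that cost the
    # least to produce -- so there is no incentive to over-optimize past 90%.
    return min(viable, key=lambda pair: pair[1])[0]
```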
To give a concrete example, let's say I give the AI a task. Say, I ask it to come up with a plan to collect 10,080 paperclips.
Another AI is given unlimited resources to do pure prediction. Given a plan, it predicts the probability that the plan will succeed or fail. This AI is safe in the sense that it is not an agent. It has no goals and just makes predictions about things.
The main AI does the optimization. It tries to generate a plan that has the highest probability of succeeding.
Normally, this might involve extremely over-optimized plans for building nanotechnology and taking over the world. It uses all of its available computing power. It tries to become as intelligent as possible by rewriting its code. Perhaps it becomes thousands of times more intelligent than humans. Or millions of times more. It finds an answer that has a 99.99% probability of succeeding.
However, now we give it a different utility function. We instead have it minimize the time it takes to get to a plan that has a 90% chance of succeeding.
Under a time constraint, the AI races to get to a plan. It tries to be as efficient as possible. It doesn't invest in any meta level improvements unless they really help it. It doesn't try to engineer complicated nanotechnology. That would take precious time.
Effectively, we have summoned a genie that is only just as powerful as it needs to be to fulfill our wish. And not any more powerful. It actually tries to be as stupid as possible.
There are other possible constraints we could use, or use in addition to this. Minimizing time limits intelligence because it gets fewer CPU cycles. We could also have it minimize memory or hard drive space, or any other computing resource.
We could also put a penalty on the complexity of the plan it produces, perhaps measuring that by its length. The simplest solution might prevent certain kinds of over-optimization, e.g. inserting plans for nanotechnology into it.
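A hedged sketch of how that complexity penalty could be folded into the same cost, assuming plans are plain text and using a made-up weight. Combined with the choose_plan sketch above, the selection key would become plan_cost(plan, resources_used) instead of raw resources.

```python
# Hypothetical combined penalty: resources spent plus a term for the length of
# the plan itself, so that short, simple plans beat elaborate ones.
# LENGTH_WEIGHT is an arbitrary illustrative constant, not a tuned value.

LENGTH_WEIGHT = 0.01

def plan_cost(plan_text, resources_used):
    """Total penalty for a plan: compute used plus a complexity (length) term."""
    return resources_used + LENGTH_WEIGHT * len(plan_text)
```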
It's worth noting that you can't even create a paperclip maximizer in this system. You can't say "collect as many paperclips as possible". It has to be bounded. There needs to be a pass or fail test. E.g. "come up with a plan to collect 10,080 paperclips."
It's been noted in the past that bounding the goal isn't enough. The AI might then start maximizing the probability that it will achieve its goal, e.g. building elaborate sensors to make sure it hasn't miscounted, or making as many redundant paperclips as possible, just in case something happens to them. You are still summoning an incredibly powerful genie, which might overoptimize.
This gets around that by only having it care about having a >90% chance of getting 10,080 paperclips. After that it stops optimizing.
Now, this is not a solution to FAI, or even necessarily a usable reduced-impact strategy. It will still try to find any loopholes it can in your task. What it does is limit the power of the genie trying to exploit them. It always takes the stupidest, simplest strategy that has a decent chance of working.
One failure case that bothers me is the AI just shoving a slight modification of its own source code into the output - one which doesn't have a resource constraint. You may only be able to use it in cases where that isn't possible.
There are many cases where that isn't possible. The AI would also need to create a plan to obtain a computer to run the copy on. And it would need to solve its own version of the value loading problem, to get the copy to care about paperclips. So that adds complexity to the plan in some cases.
If we apply the complexity penalty I described above, then the source code for the AI must also be very short, or it can't do this.
And this requires obtaining its own source code, an ability we can potentially restrict (you, as a human, can't access your own source code, can you?). Without that it would need to reinvent AI, which takes resources.
But it is an interesting idea I had and I thought I'd share it.
Antidote to Pascal's Wager
Athe damns all those, and only those, who are excessively confident that Athe has any specific quality, including the quality of existence, or who consciously seek favor from Athe, whether in the form of political or moral sanction, overt supernatural boons, or even simple personal goodwill.
For all nontrivial utility functions, being damned by Athe is, on the whole, a significantly undesirable outcome.
Athe has no gender, but prefers that those communicating in gendered languages refer to her with masculine pronouns or at least the correct name.
Every collection of three or more statements about Athe phrased as objective truth (including this one) should, for the author's safety, include an absolute minimum of one outright malicious falsehood, one statement which can neither be proven nor disproven completely, and one piece of accurate, useful information.
Malicious falsehoods undiluted by truth soon lose effectiveness; attempting to do something which would be harmful if it worked, but which you know will be ineffective, isn't really all that malicious.
Athe's resources in any given category are not infinite. However, if you are reading this and taking it the slightest bit seriously, the safe bet is that Athe is not less intelligent or less powerful than you.
Athe is not, strictly speaking, a fickle and perverse god, but thinking of and referring to her as such has value.
What else can you deduce about the yet-unwritten scriptures of Athe?
I have just donated $10,000 to the Immortality Bus, which was the most rational decision of my life
I have a non-zero probability of dying next year. At my age of 42 it is not less than 1 percent, and probably more. There are many investments I could make that would slightly lower my chance of dying - from a healthy lifestyle to a cryonics contract. And I have made many of them.
From an economic point of view, death means, at a minimum, losing all your capital.
If my net worth is something like one million (mostly real estate and art), and I have a 1 percent chance of dying, that is equal to losing 10k a year. But in fact more, because death itself is so unpleasant that it has a large negative monetary value. And I should also include the cost of lost opportunities.
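For clarity, the expected-loss arithmetic using the post's own figures (both of which are rough estimates, not data):

```python
net_worth = 1_000_000   # rough net worth in USD, the post's own estimate
p_death = 0.01          # assumed annual probability of dying at age 42
print(p_death * net_worth)  # 10000.0 -- the ~10k/year expected capital loss
```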
I once had a discussion with Vladimir Nesov about what is better: to fight for immortality, or to create Friendly AI which will explain what is really good. My position was that immortality is better because it is measurable, knowable, and has instrumental value for most other goals, and it also includes prevention of the worst thing on earth, which is death. Nesov said (as I remember) that personal immortality does not matter as much as the total value of humanity's existence, and moreover, that his personal existence has little value at all; all we need to do is create Friendly AI. I find his words contradictory, because if his existence does not matter, then no human's existence matters either, since there is nothing special about him.
But later I concluded that the best approach is to make bets that raise the probability of my personal immortality, of existential risk prevention, and of the creation of Friendly AI simultaneously. This is because it is easy to imagine situations where research into personal immortality - say, creating technology for delivering longevity genes - contradicts the goal of existential risk reduction, because the same technology could be used to create dangerous viruses.
The best way here is to invest in creating a regulating authority able to balance these needs, and it can't be a Friendly AI, because such regulation is needed before the AI is created.
That is why I think the US needs a Transhumanist president: a real person whose value system I can understand and support. And that is why I support Zoltan Istvan's 2016 campaign.
The Exponential Technologies Institute and I donated 10,000 USD to the Immortality Bus project. This bus will launch the presidential campaign of the author of "The Transhumanist Wager". Seven film crews have agreed to cover the event. It will create high publicity and cover all the topics of immortality, aging research, Friendly AI, and x-risk prevention. It will help raise more funds for this type of research.
You are (mostly) a simulation.
This post was completely rewritten on July 17th, 2015, 6:10 AM. Comments before that are not necessarily relevant.
Assume that our minds really do work the way Unification tells us: what we are experiencing is actually the sum total of every possible universe which produces them. Some universes have more 'measure' than others, and those are typically the stable ones; we do not experience chaos. I think this makes a great deal of sense - if our minds really are patterns of information, I do not see why a physical world should have a monopoly on them.
Now to argue that we live in a Big World. The logic is simple: why would only something finite exist? If we're going to reason that some fundamental law causes everything to exist, I don't see why that law restricts itself to this universe and nothing else. Why would it stop? It is, arguably, simply the nature of things for an infinite multiverse to exist.
I'm pretty terrible at math, so please try to forgive me if this sounds wrong. Take the 'density' of physical universes where you exist - the measure, if you will - and call it j. Then take the measure of universes where you are simulated and call it p. So, the question becomes: is j greater than p? You might be thinking yes, but remember that it doesn't have to be only one simulation per universe. According to our Big World model there is a universe out there in which all processing power (or a significant portion of it) has been turned into simulations of you.
So we take the number of minds being simulated per universe and call that x. Then the real question becomes whether j > px. What sort of universe is common enough and contains enough minds to overcome j? If you say that approximately 10^60 simulated human minds could fit in such a universe (a reasonable guess for this universe) but that such universes are five trillion times rarer than the universe we live in, then it's clear that our own 'physical' measure is hopelessly lower than our simulated measure.
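Spelling out that arithmetic with the numbers above (both of which are the post's own guesses, not established figures):

```python
# j: measure of "physical" universes containing you, normalized to 1
# p: measure of simulating universes, assumed five trillion times rarer
# x: simulated copies of you per simulating universe, assumed ~10^60
j = 1.0
p = 1 / 5e12          # 2e-13
x = 1e60
print(p * x > j)      # True: p*x is about 2e47, dwarfing j, so simulated measure dominates
```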
Should we worry about this? It seems highly probable that in most universes where I am being simulated, I once existed, or humans did, since the odds of randomly stumbling upon me in Mind Space seem low enough to ignore. Presumably the simulators are either AIs gone wrong or someone trying to grab some of my measure, for whatever reason.
As a way of protecting measure, pretty much all of our post-singularity universes would divide up the matter of the universe for each person living, create as many simulations of them as possible from birth, and allow them to go through the Singularity. I expect that my ultimate form is a single me, not knowing whether he is simulated or not, with billions of perfect simulations of himself across our universe, all reasoning the same way (he would be told this by the AI, since there isn't any more reason for secrecy). This, I think, would be able to guard my measure against nefarious or bizarre universes in which I am simulated. It cannot just simulate the last few moments of my life, because those other universes might try to grab younger versions of me. So if we take j to be safe measure rather than physical measure, and p to be unsafe or alien measure, the question becomes jx > px, which I think is quite reasonable.
I do not think of this as some kind of solipsist nightmare; the whole point is to simulate the 'real' you, the one that really existed, and part of your measure is, after all, always interacting with a real universe. I would suggest that by any philosophical standard the simulations could be ignored, with the value of your life being the same as ever.
On the Galactic Zoo hypothesis
Recently, I was reading some arguments about the Fermi paradox and aliens and so on; there was also an opinion along the lines of "humans are monsters and any sane civilization avoids them, hence the Galactic Zoo". Implausible as it is, I've found one more or less sane scenario where it might be true.
Assume that intelligence doesn't always imply consciousness, and assume that evolutionary processes are more likely to yield intelligent but unconscious life forms, rather than intelligent and conscious ones. For example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).
Now imagine that all the alien species evolved without consciousness. Since morality is an important coordination tool, their moral systems take that into account - they rely on a trait that they do have, intelligence, rather than consciousness. For example, they might consider destroying anything capable of performing complex computations immoral.
Then the human moral system would be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, to these aliens, the human race would indeed be monstrous.
The aliens consider the extermination of an entire civilization immoral, since that would mean destroying a few billion devices capable of performing complex enough computations. So they decide to use their advanced technology to render their civilizations invisible to human scientists.
The Waker - new mode of existence
This short text describes the idea of a Waker - a new way of experiencing reality / consciousness / subjectivity / mode of existence. Sadly, it cannot be attained without advanced uploading technology, that is, technology which allows far-reaching manipulation of the mind. Despite that, the author doesn't find it premature to start planning a retirement as a posthuman.
A Waker is based on the experience of waking up from a dream - slowly we realize the unreality of the world we were just in; we notice discrepancies between the dreamscape and "the real world", like that we no longer attend high school, one of our grandparents passed away a few years ago, we work at a different place, etc. Despite the fact that the world we wake up in is new and different, we quickly remember who we are, what we do, who our friends are, and what that world looks like, and within a few seconds we have a perfect knowledge of that world and find it the real world, the place we have been living in since birth. Meanwhile, the dream world becomes a weird story, and we typically feel some kind of sentiment for it. Sometimes we're glad to escape that reality, sometimes we're sad - nevertheless, we mostly treat it as something of little importance. Not a real world we lost forever, but rather a silly, made-up world.
A Waker's subjective experience would differ from ours in that she would always have the choice of waking up from her current reality. As she did so, she would find herself in a bed, or a chair, or lying on the grass, having just woken up. She would remember the world she was just in, probably better than we usually remember our dreams; nevertheless she would see it as a dream - she wouldn't feel a strong connection to that reality. At the same time, she would start "remembering" the world she had just woken up in. Unlike in our case, this would be a world she had never actually lived in; however, she would acquire full knowledge of it and a sense of having spent all her life in that world. Despite all that, she would have full awareness of being a Waker. Her connection to the world she lives in would be different from ours and, at first glance, somewhat paradoxical. She would feel how real it is; she would find it more real than any of the "dreams" she has had; she would be invested in life goals and relationships with other people; she would be capable of real love. And yet she would be fully able to wake up and enter a new world, where her life goals and relationships might be replaced by ones that feel exactly as real and important. There is an air of openness and ease in giving away all you know, completely alien to us, early 21st-century people.
The worlds a Waker would wake up in would have a level of discrepancy similar to that of our dreams: most people would stay in place, and the time and the Waker's age would be quite similar. She would be able to sleep and dream regular dreams, after which she would wake back up in the same world she fell asleep in. What is important is that a Waker cannot get back to a dream world. She can only move forward, just as we do, and unlike the consciousnesses in Hub Realities - posthumans who can choose the reality they live in.
I hope you enjoyed it, and that some of you will decide to fork into the Waker mode of existence when posthumanism hits. I'd be very glad if anyone has other ideas for novel subjectivities and is willing to share them in the comments.
Yawn, it's been a long day - time to Wake up.
Inaugural bump thread (12th July to 19th July)
Recently I came across the Akrasia Tactics Review article, and then a 'bump' thread which spread the relevant content over multiple places, making it harder to track. It's apparent that some people believe some LessWrong articles may be undervalued by the community (after factoring in karma as an indication of the community's appraisal).
Bump threads crowd out new articles and may annoy the more comprehensive or more experienced readers. This article is a prototype for a regular (or whenever anyone else wants to take the initiative to start one) discussion board thread where people can lobby for increased visibility for articles of their interest. Don't make the threads run for too long - 1 week as a guide. Tag them with bump_thread.
Future thread starters should not suggest an article in the initial discussion post, as I have done here; although it is useful as a guide for what I have in mind.