Making Less Wrong Great Again
Please post other Making Less Wrong Great Again memes in the comments
Wrong however unnamed
Related to: 37 ways that words can be wrong.
Consider the following sentence (found on the Internet, though I have heard it before): 'Lichens consist of fungi and algae, but they are more than the sum of their constituents.'
It is supposed to say something like 'the fungus and the alga don't just live very close to each other; they influence each other's habitat(s) and can be considered, for most purposes, to form a physiologically integrated body'. It never actually says that, although people gradually come to this conclusion if they look at illustrations or read long enough. And I don't think the phrase is sufficiently catchy to explain its popularity; rather, it is a tenuous introduction to the much-later-explained term 'synergism'. A noble (in principle) preparation of the mind.
Yet how is a lichen 'more than the sum of fungus and alga'? I suppose one could speak of a 'sum' if the lichen were pulverized and consumed as medicine, and then its effect on the patient were compared to that of a mixture of similarly treated fungus (grown how, exactly?) and alga (same question). Such a 'sum' doesn't exist in the wild. It shouldn't exist in the literature.
A child is not bothered by its lack of sense. When she encounters 'synergism', she'll remember having been told of something like it, and be reassured by the unity of science. It flies under the radar of 'established biological myths', because it doesn't have enough meaning to be one.
I picked up a dictionary of zoological terms and tried to recall how the notions were first put before me, but of course I failed. (I guess it should be the high-level things, like 'variability', or colloquial expressions - 'bold as a lion', etc. - that distort and get distorted the most.) They seem to 'have always been there'. Then I looked at the definitions and tried to imagine them misapplied (intuitively, a simpler task). No luck. Yet someday, something else truly unknown to me will appear familiar and simple.
We can weed improper concepts out of textbooks, but there are too many sources which are written far more engagingly and 'clearly', and which propagate not-even-wrong ideas. Explained like I'm five.
And never named.
Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited
After a 6+ month hiatus driven by grad school and personal projects, I am finally able to continue my sequence on astrobiology. I was flabbergasted by the positive response my last post got, and despite my status as a biologist with a hobby rather than an astronomer, I decided to take a more rigorously mathematical approach to figuring out our biosphere's position in space and time, rather than talking in generalizations and impressions.
Post is here: http://thegreatatuin.blogspot.com/2016/03/space-and-time-revisited.html. Seeing as this post is an elaboration on the last one, I am posting a link rather than reproducing the text.
To summarize, I found some actual rigorous observational fits to the star formation rate in the universe over time and projected them into the future. These fits show the Sun as forming after 79% of all stars that will ever exist, and that 90% of all stars that will ever exist already exist. This makes sense in the light of recent work on 'galaxy quenching' - a process by which galaxies more or less completely shut off star formation through a number of processes - indicating that the majority of gas in the universe probably won't form stars if trends that have held for most of the history of the universe continue to hold. It relies heavily on analysis I began in comments on this site a few months ago.
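The shape of this calculation can be sketched in a few lines. Below is a minimal sketch in Python using the well-known Madau & Dickinson (2014) star-formation-rate fit and astropy's Planck15 cosmology, not the post's own fits, and counting only stars formed to date (the post additionally extrapolates star formation into the future), so the number will differ from the post's figures:

```python
# Sketch: fraction of stars formed so far that predate the Sun (~4.57 Gyr ago).
# Uses the Madau & Dickinson (2014) SFR fit, NOT the post's fits.
import numpy as np
from astropy.cosmology import Planck15 as cosmo

def sfr(z):
    """Comoving SFR density, Msun/yr/Mpc^3 (Madau & Dickinson 2014)."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

z = np.linspace(0, 10, 2000)
t = cosmo.age(z).value                    # age of the universe at each z, Gyr
order = np.argsort(t)                     # reorder so time increases
t, rate = t[order], sfr(z)[order]

# Cumulative stellar mass formed up to each time (trapezoid rule); the
# yr-vs-Gyr unit mismatch cancels because we only take a ratio.
formed = np.concatenate([[0.0], np.cumsum(np.diff(t) * (rate[1:] + rate[:-1]) / 2)])

t_sun = cosmo.age(0).value - 4.57         # cosmic time of the Sun's formation
frac = np.interp(t_sun, t, formed) / formed[-1]
print(f"Fraction of stars formed to date that predate the Sun: {frac:.0%}")
```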
I then lift two distinct metallicity normalizations from a paper that was making the rounds here a while back ("On The History and Future of Cosmic Planet Formation"), in an attempt to deal with the fact that mine is a measurement of STAR formation, not terrestrial-planet-with-a-biosphere formation. Depending on which metallicity normalization you use (and how willing you are to accept a couple of naive assumptions I make in order to slot that paper's math, which is too complicated for me to comment on, on top of my star formation numbers), the Earth shows up as forming after either 72% or 51% of all terrestrial planets.
These numbers are remarkable in how boring they are. We find ourselves in an utterly typical position in planet-order, even if I am wrong by quite a bit. We are not early. Of interest to many here: explanations of the so-called Fermi paradox must look elsewhere, to the genesis of intelligent systems being exceedingly rare, or to the genesis of intelligent systems not implying interstellar spread.
Now that I seem to have a life again, I will be getting back to my original plan next, talking about our own solar system.
Two super-intelligences (evolution and science) already exist: what could we learn from them in terms of AI's future and safety?
There are two things in the past that may be named super-intelligences, if we consider the level of tasks they have solved. Studying them is useful when we consider creating our own AI.
The first one is biological evolution, which managed to give birth to such a sophisticated thing as man, with its powerful mind and natural languages. The second one is all of human science when considered as a single process, a single hive mind capable of solving such complex problems as sending man to the Moon.
What can we conclude about future computer super-intelligence from studying the available ones?
Goal system. Both super-intelligences are purposeless. They don't have any final goal that directs their course of development; instead they solve many goals in order to survive in the moment. This is an amazing fact, of course.
They also lack a central regulating authority. Of course, the goal of evolution is survival at any given moment, but that is a rather technical goal, needed for the evolutionary mechanism's realization.
Both complete a great number of tasks, but no unitary final goal exists. It's just like a person over the course of their life: values and tasks change, but the brain remains.
Consciousness. Evolution lacks it, science has it, but to all appearances, it is of little significance.
That is, there is no center to it - neither a perception center nor a purpose center. At the same time, all tasks get completed. The sub-conscious part of the human brain works the same way.
Master algorithm. Both super-intelligences are based on the principle: collaboration of numerous smaller intelligences plus natural selection.
Evolution is impossible without billions of living creatures testing various gene combinations. Each of them solves its own egoistic tasks and does not care about any global purpose. For example, few people think of selecting the best marriage partner as a tool of species evolution (assuming that sexual selection is true). Interestingly, the human brain has the same organization: it consists of billions of neurons, but none of them sees the global task.
Roughly several million scientists have worked throughout history. Most of them, too, have been solving unrelated problems, while the least refutable theories passed through selection (social mechanisms included).
Safety. Dangerous, but not hostile.
Evolution experiences ecological crises; science created the atomic bomb. Both contain hostile agents which have no super-intelligence (e.g. a tiger, a nation state).
Within an intelligent environment, however, a dangerous agent may appear which is stronger than the environment and will "eat it up". Such a takeover is difficult to initiate: the transition from evolution to science was exactly this kind of difficult event from evolution's point of view (if it had one).
How to create our super-intelligence. Assume we agree that a super-intelligence is an environment possessing multiple agents with differing purposes.
Then we could create an "aquarium" and put a million differing agents into it. At the top, however, we set an agent that casts tasks into it and then retrieves answers.
Hardware requirements are very high: we would have to simulate millions of human-level agents. A computational environment of about 10^20 FLOPS is required to simulate a million brains. This is close to the total power of the Internet. It could be implemented as a distributed network, where individual agents are owned by individual human programmers and solve different tasks – something like SETI@home or the Bitcoin network.
Everyone can cast a task into the network, but provides a part of their own resources in return.
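As a back-of-envelope check on those numbers (a sketch using the post's own assumptions; the ~10^14 FLOPS per agent is simply what the 10^20 total implies for a million agents, not an established figure):

```python
# Back-of-envelope sketch of the post's hardware estimate. All figures are
# the post's assumptions, not established values.
flops_per_agent = 1e14            # implied per-brain compute (10^20 / 10^6)
n_agents = 1_000_000
total = flops_per_agent * n_agents
print(f"Total compute: {total:.0e} FLOPS")          # 1e+20, the post's figure

# If each volunteer node contributed ~1e13 FLOPS (a hypothetical GPU-class
# node), a SETI@home-style network would need:
print(f"Nodes needed: {total / 1e13:.0e}")          # 1e+07 nodes
```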
Speed of development of superintelligent environment
Hyperbolic law. The super-intelligent environment develops hyperbolically. Korotayev shows that the human population grows according to the law N = 1/(t0 - t) (von Foerster's law, with a singularity at t0 ≈ 2026), which is a solution of the following differential equation:
dN/dt = N^2
A solution and a more detailed explanation of the equation can be found in this article by Korotayev (the article is in Russian; see also p. 23 of his English book). Notably, the growth rate depends on the second power of the population size. The second power arises as follows: one factor of N means that a bigger population has more descendants; the other factor of N means that a bigger population provides more inventors, who generate growth in technical progress and resources.
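A quick numerical check that the differential equation and the hyperbolic law agree: integrate dN/dt = N^2 and compare with the closed-form solution N = 1/(t0 - t) (population in arbitrary normalized units; t0 = 2026 is von Foerster's fitted singularity year):

```python
# Verify that dN/dt = N^2 is solved by the hyperbola N(t) = 1/(t0 - t).
import numpy as np
from scipy.integrate import solve_ivp

t0 = 2026.0                        # singularity year from von Foerster's fit
t_span = (1000.0, 2020.0)          # integrate from year 1000 to year 2020
N_start = 1.0 / (t0 - t_span[0])   # start exactly on the analytic curve

sol = solve_ivp(lambda t, N: N**2, t_span, [N_start],
                dense_output=True, rtol=1e-8, atol=1e-12)

for year in np.linspace(*t_span, 5):
    analytic = 1.0 / (t0 - year)
    numeric = sol.sol(year)[0]
    print(f"{year:7.1f}: numeric {numeric:.6f}, analytic {analytic:.6f}")
```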
Evolution and technological progress are also known to develop hyperbolically (see below for how this connects with the exponential nature of Moore's law; an exact layout of hyperbolic acceleration throughout history may be found in Panov's article "Scaling law of the biological evolution and the hypothesis of the self-consistent Galaxy origin of life"). The expected singularity will occur in the 21st century, and now we know why: evolution and technological progress are both controlled by the same development law of the super-intelligent environment. This law states that the intelligence of an intelligent environment depends on the number of nodes and on the intelligence of each node. This is of course a very rough estimate, as we should also include the speed of transactions.
However, Korotayev gives an equation for population size only, while it is actually also applicable to evolution – the more individuals, the more often important and interesting mutations occur – and to the number of scientists in the 20th century. (In the 21st century the number of scientists has already reached a plateau, so now we should probably count AI specialists as nodes.)
In short: Korotayev provides a hyperbolic law of acceleration and derives it from plausible assumptions, but it is only applicable to demographics in human history from its beginning until the middle of the 20th century, when demographics stopped obeying this law. Panov provides data points for all of history, from the beginning of the universe until the end of the 20th century, and showed that these data points are governed by a hyperbolic law; but he wrote the law in a different form, that of constantly diminishing intervals between biological and (lately) scientific revolutions. (Each interval is 2.67 times shorter than the previous one, which implies a hyperbolic law.)
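Why constantly diminishing intervals imply a singularity: the interval lengths form a geometric series, so the revolution dates pile up at a finite accumulation point. A toy illustration (the length of the first interval is made up for illustration, not Panov's data):

```python
# Intervals shrinking by a constant factor accumulate at a finite date.
first_interval = 100.0          # years; hypothetical, for illustration only
ratio = 1 / 2.67                # Panov's shrinkage factor between intervals

t, interval = 0.0, first_interval
for n in range(1, 11):
    t += interval
    print(f"revolution {n:2d} at t = {t:8.3f} (interval {interval:8.3f})")
    interval *= ratio

# Geometric series: the sum of infinitely many intervals is still finite.
print(f"accumulation point (singularity): {first_interval / (1 - ratio):.3f}")
```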
What I did here: I suggested that Korotayev's explanation of the hyperbolic law also works as a pre-human-history explanation of the accelerating evolutionary process, and that in the 21st century it will work as a law describing the evolution of an environment of AI agents. It may need some updates if we also include the speed of transactions, which would give even quicker growth.
Moore's law is only an exponential approximation; seen as the speed of technological development in general, it is hyperbolic in the longer term. Kurzweil wrote: “But I noticed something else surprising. When I plotted the 49 machines on an exponential graph (where a straight line means exponential growth), I didn’t get a straight line. What I got was another exponential curve. In other words, there’s exponential growth in the rate of exponential growth. Computer speed (per unit cost) doubled every three years between 1910 and 1950, doubled every two years between 1950 and 1966, and is now doubling every year.”
While we now know that Moore's law in hardware has slowed to a 2.5-year doubling time, we will probably now start to see exponential growth in the capability of programs.
Neural net development has a doubling time of around one year or less. Moore's law is like a spiral which circles around more and more intelligent technologies, and it consists of small S-shaped curves. All this deserves a longer explanation; here I only note that Moore's law as we know it does not contradict the hyperbolic law of acceleration of a super-intelligent environment - it is how that law looks on a small scale.
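Kurzweil's observation above can be sketched directly: growth whose doubling time itself shrinks is super-exponential, so it curves upward even on a log plot (dates and doubling periods are taken from the quote; the 1910 starting speed is an arbitrary normalization):

```python
# Piecewise-exponential growth with a shrinking doubling time is
# super-exponential overall: the slope on a log plot keeps increasing.
import math

def doubling_time(year):
    if year < 1950: return 3.0     # doubling every 3 years, 1910-1950
    if year < 1966: return 2.0     # every 2 years, 1950-1966
    return 1.0                     # every year thereafter

speed, points = 1.0, []            # speed normalized to 1.0 in 1910
for year in range(1910, 2001):
    points.append((year, math.log10(speed)))
    speed *= 2 ** (1 / doubling_time(year))   # one year of growth

for year, log_speed in points[::15]:
    print(f"{year}: log10(speed) = {log_speed:5.1f}")
```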
Neural network results: perplexity on language modeling benchmarks
- 46.8 - "One billion word benchmark", v1, 11 Dec 2013
- 43.8 - "One billion word benchmark", v2, 28 Feb 2014
- 41.3 - "Skip-gram language modeling", 3 Dec 2014
- 24.2 - "Exploring the limits of language modeling", 7 Feb 2016, http://arxiv.org/abs/1602.02410
Child-age equivalence in answering questions about a picture:
- 3 May 2015 — 4.45 years old: http://arxiv.org/abs/1505.00468
- 7 November 2015 — 5.45 y.o. (in 6 months it "grew up" by a year): http://arxiv.org/abs/1511.02274
- 4 March 2016 — 6.2 y.o.: http://arxiv.org/pdf/1603.01417
Material from Sergey Shegurin
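The rates these data points imply are easy to compute (dates and values from the lists above; this is just the arithmetic, not a forecast):

```python
# Improvement rates implied by the data points listed above.
from datetime import date

# VQA "child-age equivalence": 4.45 yr (3 May 2015) -> 6.2 yr (4 Mar 2016).
span = (date(2016, 3, 4) - date(2015, 5, 3)).days / 365.25
print(f"VQA systems 'aged' ~{(6.2 - 4.45) / span:.1f} child-years per year")

# Language-model perplexity: 46.8 (11 Dec 2013) -> 24.2 (7 Feb 2016).
span = (date(2016, 2, 7) - date(2013, 12, 11)).days / 365.25
yearly = (24.2 / 46.8) ** (1 / span)
print(f"Perplexity shrank ~{1 - yearly:.0%} per year")
```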
Other considerations
Human-level agents and the Turing test. OK, we know that the brain is very complex, and if the power of individual agents in the AI environment grows this quickly, agents capable of passing a Turing test should appear - and it will happen very soon. But for a long time the nodes of this net will be small companies and personal assistants, which could provide superhuman results. There is already a marketplace where various projects can exchange results or data using APIs. As a result, the Turing test will become meaningless, because the most powerful agents will be helped by humans.
In any case, some kind of “mind brick”, or universal robotic brain will also appear.
Physical size of strong AI: since the velocity of light is limited, a super-intelligence must decrease in size rather than increase in order to keep communication within itself quick. Otherwise, information exchange will slow down and the development rate will be lost.
Therefore, the super-intelligence should have a small core - up to the size of the Earth at most, and even smaller in the future. The periphery can be huge, but it will perform technical functions: defence and nutrition.
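The constraint is easy to quantify: round-trip signal delay grows linearly with the size of the core. A quick sketch (the core sizes are chosen purely for illustration):

```python
# Round-trip light delay across a computing core of a given diameter.
C = 299_792_458.0  # speed of light, m/s

for name, diameter_m in [("Earth-sized core", 1.274e7),
                         ("city-sized core", 1.0e5),
                         ("1 m core", 1.0)]:
    delay = 2 * diameter_m / C
    print(f"{name}: round-trip delay ~ {delay:.2e} s")
```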
Transition to the next super-intelligent environment. It is logical to suggest that the next super-intelligence will also be an environment rather than a single small agent. It will be something like a net of neural net-based agents, as well as connected humans. The transition may seem soft on a small time scale, but it will be disruptive in its final results. It is already happening: the Internet, AI agents, open AI, you name it. An important part of such a transition is the changing speed of interaction between agents. In evolution the transaction time was thousands of years - the time needed to test new mutations. In science it was months - the time needed to publish an article. Now it is limited by the speed of the Internet, which depends not only on the speed of light but also on the network's physical size, bandwidth and so on, and has a transaction time on the order of seconds.
So, a new super-intelligence will rise in a rather “ordinary” fashion: the power and number of interacting AI agents will grow, they will become quicker, and they will quickly perform any tasks fed to them. (Elsewhere I discussed this and concluded that such a system may evolve into two very large super-intelligent agents locked in a cold war, and that a hard take-off of any single AI agent against an AI environment is unlikely. But this does not ensure AI safety, since a war between two such agents would be very destructive – consider nanoweapons.)
Super-intelligent agents. As the power of individual agents grows, they will reach human and later superhuman levels. They may even invest in self-improvement, but if many agents do this simultaneously, it will not give any of them a decisive advantage.
Human safety in the super-intelligent agent environment. There is a well-known strategy for staying safe in an environment where agents more powerful than you fight each other: make alliances with some of the agents, or become such an agent yourself.
Fourth super-intelligence? Such an AI neural net-distributed super-intelligence may not be the last, if a quicker way of completing transactions between agents is found. One such way may be an ecosystem based on the miniaturization of all agents. (And this may solve the Fermi paradox – any AI evolves to smaller and smaller sizes, and thus performs infinite calculations in finite external time, perhaps using an artificial black hole as an artificial Tipler Omega Point, or femtotech, in the final stages.) John Smart's conclusions are similar.
Singularity: It could still happen around 2030, as predicted by von Foerster's law, and the main reason for this is the nature of the hyperbolic law and its underlying drivers: the growing number of agents and the IQ of each agent.
Oscillation before the singularity: Growth may become more and more unstable as we near the singularity, because of the rising probability of global catastrophes and other consequences of disruptive technologies. If true, we will never reach the singularity: we will either die off shortly before it, or oscillate near its “Schwarzschild sphere”, neither extinct nor able to create a stable strong AI.
The super-intelligent environment still reaches a singularity point, but a point cannot be an environment by definition. Oops. Perhaps an artificial black hole as the ultimate computer would help resolve this paradox.
Ways of enhancing the intelligent environment: growth in the number of agents, in agent performance speed, in the inter-agent data exchange rate, in individual agent intelligence, and in the principles by which working organizations of agents are built.
The main problem of an intelligent environment: chicken or egg? Who will win: the super-intelligent environment or the super-agent? Any environment can be capped by an agent that submits tasks to it and uses its data. On the other hand, if there are at least two super-agents of this kind, they form an environment.
Problems with the model:
1) The model excludes the possibility of black swans and other disruptive events, and assumes continuous and predictable acceleration, even after human level AI is created.
2) The model is disruptive itself, as it predicts infinity, and within a very short time frame - 15 years from now - whereas expert consensus puts AI in the 2060-2090 timeframe.
These two problems may somehow cancel each other out.
The model does include the idea of oscillation before the singularity, which may postpone AI and prevent the infinity. The singularity point inside the model is calculated from points in the remote past; if we take more recent points into account, we get a later date for the singularity, thus saving the model.
If we say that, because of catastrophes and unpredictable events, the hyperbolic law will slow down and strong AI will be created before 2100, we get a more plausible picture.
This may be similar to R. Hanson's “ems universe”, but here neural net-based agents are not equal to human emulations, which play only a minor role in this story.
Limitation of the model: It is only a model, so it will stop working at some point. Reality will surprise us at some point, but reality doesn’t consist only of black swans. Models may work between them.
TL;DR: Science and evolution are super-intelligent environments governed by the same hyperbolic acceleration law, which will soon result in a new super-intelligent environment consisting of neural net-based agents. The singularity will come after this, possibly as soon as 2030.
The Brain Preservation Foundation's Small Mammalian Brain Prize won
The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.
- BPF announcement (21CM’s announcement)
- evaluation
The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)
(They had problems with 2 pigs and got 1 pig brain successfully cryopreserved but it wasn’t part of the entry. I’m not sure why: is that because the Large Mammalian Brain Prize is not yet set up?)

We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.
- previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
- commentary: Alcor, Robin Hanson, John Smart, Evidence-Based Cryonics, Vice, Pop Sci
To summarize it, you might say that this is a hybrid of current plastination and vitrification methods: instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken - a fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then the brain is leisurely cooled down to be vitrified.
This is exciting progress not only because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end results: aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification, where the scans leave opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well ALCOR or CI’s vitrification works - because if it didn’t, we would have little way of knowing.
EDIT: BPF’s founder Ken Hayworth (Reddit account) has posted a piece, arguing that ALCOR & CI cannot be trusted to do procedures well and that future work should be done via rigorous clinical trials and only then rolled out. “Opinion: The prize win is a vindication of the idea of cryonics, not of unaccountable cryonics service organizations”
…“Should cryonics service organizations immediately start offering this new ASC procedure to their ‘patients’?” My personal answer (speaking for myself, not on behalf of the BPF) has been a steadfast NO. It should be remembered that these same cryonics service organizations have been offering a different procedure for years. A procedure that was not able to demonstrate, to even my minimal expectations, preservation of the brain’s neural circuitry. This result, I must say, surprised and disappointed me personally, leading me to give up my membership in one such organization and to become extremely skeptical of all since. Again, I stress, current cryonics procedures were NOT able to meet our challenge EVEN UNDER IDEAL LABORATORY CONDITIONS despite being offered to paying customers for years[1]. Should we really expect that these same organizations can now be trusted to further develop and properly implement such a new, independently-invented technique for use under non-ideal conditions?
Let’s step back for a moment. A single, independently-researched, scientific publication has come out that demonstrates a method of structural brain preservation (ASC) compatible with long-term cryogenic storage in animal models (rabbit and pig) under ideal laboratory conditions (i.e. a healthy living animal immediately being perfused with fixative). Should this one paper instantly open the floodgates to human application? Under untested real-world conditions where the ‘patient’ is either terminally ill or already declared legally dead? Should it be performed by unlicensed persons, in unaccountable organizations, operating outside of the traditional medical establishment with its checks and balances designed to ensure high standards of quality and ethics? To me, the clear answer is NO. If this was a new drug for cancer therapy, or a new type of heart surgery, many additional steps would be expected before even clinical trials could start. Why should our expectations be any lower for this?
The fact that the ASC procedure has won the brain preservation prize should rightly be seen as a vindication of the central idea of cryonics –the brain’s delicate circuitry underlying memory and personality CAN in fact be preserved indefinitely, potentially serving as a lifesaving bridge to future revival technologies. But, this milestone should certainly not be interpreted as a vindication of the very different cryonics procedures that are practiced on human patients today. And it should not be seen as a mandate for more of the same but with an aldehyde stabilization step casually tacked on. …
The correct response to uncertainty is *not* half-speed
Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.
Once upon a time (true story), I was on my way to a hotel in a new city. I knew the hotel was many miles down this long, branchless road. So I drove for a long while.

After a while, I began to worry I had passed the hotel.

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.
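The point of the story can be made with a toy calculation: for any probability that I had already passed the hotel, some decisive option (keep going at full speed, or pull over and think) does at least as well as the 30 mph compromise. A sketch with made-up distances and a made-up deliberation time:

```python
# Toy model of the hotel story (all numbers assumed). For the next T minutes
# you either continue at 60 mph, compromise at 30 mph, or stop to think;
# afterwards you learn the truth and drive optimally at 60 mph (1 mile/min).
AHEAD_MILES = 15    # distance to the hotel if I have NOT passed it
BEHIND_MILES = 5    # distance back to the hotel if I HAVE passed it
T = 10              # minutes spent deciding

def total_minutes(speed_mph, passed):
    drift = speed_mph / 60 * T                  # miles covered while deciding
    to_go = BEHIND_MILES + drift if passed else abs(AHEAD_MILES - drift)
    return T + to_go                            # then finish at 1 mile/min

for p in (0.2, 0.5, 0.8):                       # P(already passed it)
    cost = {s: p * total_minutes(s, True) + (1 - p) * total_minutes(s, False)
            for s in (60, 30, 0)}
    print(f"p={p}: 60mph {cost[60]:.0f} min, 30mph {cost[30]:.0f} min, "
          f"stop {cost[0]:.0f} min; decisive option <= 30mph: "
          f"{min(cost[60], cost[0]) <= cost[30]}")
```

Whatever p is, either full speed or stopping to decide matches or beats half-speed; the compromise is never the uniquely best response to any belief.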

- I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it. So, I sat there kind-of-writing it while also fretting about whether the task was correct.
- (Solution: Take a minute out to think through heuristics. Then, either: (1) write the post at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
- I wasn't sure (back in early 2012) that CFAR was worthwhile. So, I kind-of worked on it.
- An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work. So I kind-of hung out with her while feeling bad and distracted about my work.
- A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
- Duncan reports that novice parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point. (Apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them.)
- It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

Voiceofra is banned
I've gotten sufficient evidence from support that voiceofra has been doing retributive downvoting. I've banned them without prior notice because I'm not giving them more chances to downvote.
I'm thinking of something like not letting anyone give more than 5 downvotes/week for content which is more than a month old. The numbers and the time period are tentative-- this isn't my ideal rule. This is probably technically possible. However, my impression is that highly specific rules like that are an invitation to gaming the rules.
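For concreteness, the tentative rule might look something like this sketch (the function and data shapes are hypothetical, mine rather than any actual LessWrong mechanism; the parameter values are the tentative ones above):

```python
# Hypothetical sketch of the tentative rule: at most 5 downvotes per week,
# per voter, on content more than a month old.
from datetime import datetime, timedelta

MAX_OLD_DOWNVOTES_PER_WEEK = 5
OLD_CONTENT_AGE = timedelta(days=30)
WINDOW = timedelta(weeks=1)

def may_downvote(past_old_downvotes, content_posted_at, now=None):
    """past_old_downvotes: datetimes of this voter's downvotes on old content."""
    now = now or datetime.now()
    if now - content_posted_at < OLD_CONTENT_AGE:
        return True                      # recent content: no limit applies
    recent = [t for t in past_old_downvotes if now - t < WINDOW]
    return len(recent) < MAX_OLD_DOWNVOTES_PER_WEEK
```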
I would rather make spiteful down-voting impossible (or at least expensive) than try to find out who's doing it. Admittedly, putting up barriers to downvoting past comments doesn't solve the problem of people who down-vote everything, but at least people who downvote current material are easier to notice.
Any thoughts about technical solutions to excessive down-voting of past material?
The Trolley Problem and Reversibility
The most famous problem used when discussing consequentialism is the trolley problem. A tram is hurtling towards five people on the track, but if you flick a switch it will change tracks and kill only one person instead. Utilitarians would say that you should flick the switch, as it is better for there to be a single death than five. Some deontologists might agree, but many more would object and argue that you don't have the right to make that decision. The problem has different variations, such as one where you push someone in front of the train instead of them being on the track, but we'll consider this one: if it is accepted, it moves you a large way towards utilitarianism.
Let's suppose that someone flicks the switch, but then realises the other side was actually correct and that they shouldn't have flicked it. Do they now have an obligation to flick the switch back? What is interesting is that if they had just walked into the room while the train was heading towards the one person, they would have had an obligation *not* to flick the switch; but, having flicked it, it seems that they have an obligation to flick it back the other way.
It gets more puzzling when we imagine that Bob observed Aaron flicking the switch. Arguably, if Aaron had no right to flick the switch, then Bob has an obligation to flick it back (or, if not an obligation, doing so would surely count as a moral good?). It is hard to argue against this conclusion if we assume there is a strong moral obligation for Aaron not to flick the switch, along the lines of "Do not kill". This logic seems consistent with how we act in other situations: if someone had tried to kill someone or steal something important from them, most people would reverse or prevent the action if they could.
But what if Aaron reveals that he only flicked the switch because Cameron had flicked it first? Then Bob would be obligated to leave it alone, as Aaron would be doing what Bob was planning to do: preventing interference. We can complicate it further by imagining that a strong gust of wind was about to flick the switch, but Bob flicked it first. Is there now a duty to undo Bob's flick, or does the fact that the switch was going to flip anyway abrogate that duty? This obligation to trace back the history seems very strange indeed. I can't find a pathway to a logical contradiction, but I can't imagine that many people would defend this state of affairs.
But perhaps the key principle here is non-interference. When Aaron flicks the switch, he has interfered and so he arguably has the limited right to undo his interference. But when Bob decides to reverse this, perhaps this counts as interference also. So while Bob receives credit for preventing Aaron’s interference, this is outweighed by committing interference himself - acts are generally considered more important than omissions. This would lead to Bob being required to take no action, as there wouldn’t be any morally acceptable pathway with which to take action.
I'm not sure I find this line of thought convincing. If we don't want anyone interfering with the situation, couldn't we glue the switch in place before anyone (including Aaron) gets the chance or even the notion to interfere? It would seem rather strange to argue that we have to leave the door open to interference before we know that anyone is planning any. Next, suppose that we don't have glue, but we can install a mechanism that will flick the switch back if anyone flicks it. In principle, this doesn't seem any different from installing the glue.
Next, suppose we don't have a machine to flick it back, so instead we install Bob. It seems that installing Bob is just as moral as installing an actual mechanism, and it would seem rather strange to argue that "installing" Bob is moral but any action he takes is immoral. There might be cases where "installing" someone is moral while certain actions they take are immoral. One example would be "installing" a policeman to enforce an imperfect law: we can expect the decision to hire the policeman to be moral if the law is generally good, but, in certain circumstances, flaws in the law might make enforcement immoral. Here, though, we are imagining that *any* action Bob takes is immoral interference. It therefore seems strange to suggest that installing him could somehow be moral, and so this line of thought seems to lead to a contradiction.
We consider one last situation: that we aren't allowed to interfere, and that setting up a mechanism to stop interference also counts as interference. Imagine that Obama has ordered a drone attack that is going to kill a (robot - just go with it) terrorist. He knows the drone attack will cause collateral damage, but it will also prevent the terrorist from killing many more people on American soil. He wakes up the next morning and realises that he was wrong to violate deontological principles, so he calls off the attack. Are there any deontologists who would argue that he doesn't have the right to rescind his order? Rescinding the order does not seem to count as "further interference"; instead it seems to count as "preventing his interference from occurring". Flicking the switch back seems functionally identical to rescinding the order: the train hasn't reached the intersection, so there isn't any causal entanglement yet, and flicking the switch back is best characterised as preventing the interference from occurring. If we want to make the scenarios even more similar, we can imagine that flicking the switch doesn't force the train down one track or another, but instead orders the driver to take a particular track. Changing this aspect of the problem shouldn't alter the morality at all.
This post has shown that deontological objections to the trolley problem tend to lead to non-obvious philosophical commitments that are not very well known. I didn't write this post to show that deontology is wrong, so much as to start a conversation and help deontologists understand and refine their commitments.
I also wanted to include one paragraph I wrote in the comments: Let's assume that the train will arrive at the intersection in five minutes. If you pull the lever one way, then pull it back the other, you'll save someone from losing their job. There is no chance that the lever will get stuck or that you won't be able to complete the operation once you try. Clearly pulling the lever and then pulling it back is superior to not touching it. This seems to indicate that the sin isn't pulling the lever, but pulling it without the intent to pull it back. And if the sin is pulling it without intent to pull it back, then it would seem very strange for gaining the intent to pull it back, and then pulling it back, to be a sin.
Examples of growth mindset or practice in fiction
For people who care about rationality and winning, it's pretty important to care about training. Repeated practice is how humans acquire skills, and skills are what we use for winning.
Unfortunately, it's sometimes hard to get System 1 fully on board with the fact that repeated, difficult, sometimes tedious practice is how we become awesome. I find fiction to be one of the most useful ways of communicating things like this to my S1. It would be great to have a repository of fiction that shows characters practicing skills, mastering them, and becoming awesome, to help this really sink in.
However, in fiction the following tropes are a lot more common:
- hero is born to greatness and only needs to discover that greatness to win [I don't think I actually need to give examples of this?]
- like (1), only the author talks about the skill development or the work in passing… but in a way that leaves the reader's attention (and System 1 reinforcement?) on the "already be awesome" part, rather than the "practice to become awesome" part [HPMOR; the Dresden Files, where most of the implied practice takes place between books.]
- training montage, where again the reader's attention isn't on the training long enough to reinforce the "practice to become awesome" part, but skips to the "wouldn't it be great to already be awesome" part [TVtropes examples].
- The hero starts out ineffectual and becomes great over the course of the book, but this comes from personal revelations and insights, rather than sitting down and practicing [Nice Dragons Finish Last is an example of this].
Example of exactly the wrong thing:
The Hunger Games - Katniss is explicitly up against the Careers, who have trained their whole lives for this one thing, but she has … something special that causes her to win. Also, archery is her greatest skill, and she's already awesome at it from the beginning of the story and never spends time practicing.
Close-but-not-perfect examples of the right thing:
The Pillars of the Earth - Jack pretty explicitly has to travel around Europe to acquire the skills he needs to become great. Much of the practice is off-screen, but it's at least a pretty significant part of the journey.
The Honor Harrington series: the books depict Honor, as well as the people around her, rising through the ranks of the military and gradually levelling up, with emphasis on dedication to training, and that training is often depicted onscreen – but the skills she's training in herself and her subordinates aren't nearly as relevant as the "tactical genius" that she seems to have been born with.
I'd like to put out a request for fiction that has this quality. I'll also take examples of fiction that fails badly at this quality, to add to the list of examples, or of TVTropes keywords that would be useful to mine. Internet hivemind, help?
The Temptation to Bubble
"Never discuss religion or politics."
I was raised in a large family of fundamentalist Christians. Growing up in my house, where discussing politics and religion was the main course of life, the above proverb was said often -- as an expression of regret, shock, or self-flagellation. Later, the experience impressed on me a deep lesson about the bubbling-up that even intelligent and rational people fall into. And I ... I am often tempted, so tempted, to give in.
Religion and political identity were the languages of love in my house. Affirming the finer points of a friend's identical values was a natural ritual, like sharing coffee or a meal together, and so soothing we attributed the afterglow to God himself. We can use some religious nonsense to illustrate, but please keep in mind, there's a much more interesting point here than "certain religious views are wrong".
A point of controversy was an especially excellent topic of mutual comfort. How could anyone else be *so* stupid as to believe we came from monkeys, and monkeys came from *nothing* that exploded a gazillion years ago, especially given all the young-earth creation evidence that they stubbornly ignored! They obviously just wanted to sin and needed an excuse. Agreeing about something like this, you both felt smarter than the hostile world, and you had someone to help defend you against that hostility. We invented byzantine scaffolding for our shared delusions to keep the conversation interesting and to agree with each other in ever more creative ways. We esteemed each other, and ourselves, much more.
This safety bubble from the real world would allow denial of anything too painful. Losing a loved one to cancer? God will heal them. God mysteriously decided not this time? They're in Heaven. Did your incredible stupidity lose you your job, your wife, your reputation? God would forgive you and rescue you from the consequences. You could probably find a Bible verse to justify anything you're doing. Ironically, this artificial shell of safety, which kept us from ever facing the pain and finality reality often has, made us all the more fragile inside. The bubble became necessary to psychologically survive.
In this flow of happy mirror neuron dances, minor disagreements felt like a slap on the face. The shock afterward burned harder than a hand-print across the face.
Twenty-five years and what seems like 86 billion light-years of questioning, testing, and learning later, I can see that even beyond that world-view and beyond religion, people fall into bubbles easily. The political conservatives only post articles from conservative blogs. The liberals post from liberal news sources. Neither has ever gone hunting on the opposing side for ways to test their own beliefs, even once. Ever debated someone over a bill that they haven't even read? All their info comes from the pravda wing of their preferred political party / street gang; none of it is first-hand knowledge. They're in a bubble.
Three of the most popular religions that worship the same God will each tell you the others are counterfeits, despite the shared moral codes, values, rituals and traditions. Apple fanboys who wholesale swallowed the lies about their OS / machines being immune from viruses, without ever having read one article of an IT security blog. It's not just confirmation bias at work, people live in an artificial information bubble of information sources that affirm their identity, soothe their egos, and never test any idea that they have. Scientific controversies create bubbles no less. But it doesn't even take a controversy, just a preferred source of information -- news, blogs, books, authors. Even if such sources attempt to present an idea or argument from the others who disagree, they do not present it with sufficient force.
Even Google will gladly do this for you by customizing your search results by location, demographic, past searches, etc, to filter out things you may not want to see, providing a convenient invisible bubble for you even if you don't want it!
If you're rational, there's daily work to break the bubbles by actually looking for ways to test the beliefs you care about. The more you care about them, the more they should be tested.
Problem is, the bigger our information sharing capabilities are, the harder it is to find quality information. Facebook propaganda posts get repeated over and over. Re-tweets. Blog reposts. Academic "science" papers that have never been replicated, but are in the news headlines everywhere. The more you actually dig into the agitprop looking for a few gems, or at least sources of interesting information, the more you realize even the questions have been framed wrongly, especially over controversial things. Without searching for high quality evidence about a thing, I resign myself to "no opinion" until I care enough to do the work.
And now you don't fit in anyone's bubble. Not in politics, not in religion, not even in technical arenas where people bubble up also. Take politics ... it's not that I'm a liberal and I miss the company of my conservative friends, or the other way around. Like the "underground man" I feel I actually understand the values and arguments from both sides, leading to wanting to tear the whole system apart and invent new ways or angles of addressing the problems.
But try to have a conversation, for example, about the trade-offs of the huge military superiority the US has created: costs and murder versus eventually conceding dominance to who knows whom - as they say, you either wear the merciless boot or live with it on your neck. Approach the topic this way, and you may be seen as a weak peacenik who dishonors our hero troops, or as a monster who gladly trades blood for oil; you're not even understood as having no firm conclusion.
Okay, so don't throw your pearls before swine you say. But you know, you're going to have to do it quite a few times just to find out where the pig-pen ends and information close to the raw sources and unbiased data begin. If you want to hear interesting new ideas from other minds, you're going to have to accept that they are biased and often come from inside their bubble. If you want to test your own beliefs, actively seek to disprove what you think, you will have to wade through oceans of bullshit and agitprop to find the one pearl that shifts your awareness. There is no getting around the work.
Then there are these kinds of situations: my father has also left the fundamentalist fold, but he has gone deep into New Age mysticism instead of the more skeptical path I've taken. I really want to preserve our closeness and friendship. I know I can't change his mind, but he really likes to talk about this stuff, so to stay close I should try hard to understand his perspective and ideas. But when I ask him to define terms like "higher consciousness", or to explain experiences of "higher awareness", or try to understand the predictions about the coming human "evolutionary" steps, he falls back to "it can't be described", or "it's beyond our present intelligence to grasp", or even "beyond rational thought" to understand. So I can artificially nod along, not understanding a damn word of it, or I can try to get some kind of hook into his ideas and totally burst his bubble without even trying. Bursting someone's bubble is not cool. If you burst their bubble, they will cry, if only inwardly. Burst their bubble, and they will try to burst yours - not to help you, but from pain.
Problem is, trying to burst your own bubble, you're breaking everyone else's bubbles left and right.
There is the temptation to seek out your own bubble just for temporary comfort ... just how many skeptical videos about SpiritScience or creationism or religion am I going to watch? The scale of evidence is already tipped so far, investing more time to learn more details that nudge it 0.0001% toward 100% isn't about anything other than emotional soothing. Emotional soothing is dangerous; it's reinforcing my bubbles that I will now have to work all the harder to burst, to test, and to train myself to have no emotional investment in any provisional belief.
But it is so, so tempting, when you see yet another propaganda post for the republicrips or bloodocrat gang, vast scientific conspiracy posts, watch your friends and family shut down mid-conversation, so tempting to go read another Sagan book that teaches me nothing new but makes me feel good about my current provisional beliefs. It's tempting to think about blocking friends who run a pravda outlet over facebook, or even shut down your facebook account. It's tempting to give up on family in their own bubble and artificially nod along to concepts that have no meaning.
To some extent, I am even giving in by writing this ... I would like many other rationalists to feel the same way, to affirm my perspective and my struggle with this, and that reinforces my bubble, doesn't it? There are probably psychological limits and needs that make some minimal degree of this necessary. We're compelled to eat, but if we give ourselves over to that instinct without regard or care, it will eventually kill us.
Don't bubble, don't give into the temptation, keep working to burst the bubbles that accrete around you. It's exhausting, it's painful, and it's the only thing keeping your eyes open to reality.
And friend, as you need it here and there, come here and I'll agree with you about something we both already have mountains of evidence for and almost none against. ;)