A similar phenomenon arises when one tries to bound the error of a numerical computation by running it in interval arithmetic. The result is conservative, but sometimes useful. Once in a while, however, one applies it to a slowly converging iterative process that produces an accurate answer: lots of arithmetic leads to large intervals even though the error to be bounded is small.
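Here is a minimal sketch of the phenomenon (a hypothetical Python toy of my own devising; the iteration and the naive interval rules are chosen purely for illustration):

```python
import math

def step(x):
    """One step of a slowly converging iteration, x -> x - 0.1*(x*x - 2).
    The fixed point is sqrt(2); the true error shrinks geometrically."""
    return x - 0.1 * (x * x - 2.0)

def isquare(lo, hi):
    """Interval square, valid even when [lo, hi] straddles zero."""
    a, b = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(a, b)), max(a, b)

def istep(lo, hi):
    """The same step evaluated naively in interval arithmetic. The two
    occurrences of x are treated as independent, so the guaranteed bound
    widens at every step even though the iteration itself is converging."""
    sq_lo, sq_hi = isquare(lo, hi)
    d_lo, d_hi = 0.1 * (sq_lo - 2.0), 0.1 * (sq_hi - 2.0)
    return lo - d_hi, hi - d_lo

x, (lo, hi) = 1.5, (1.0, 2.0)
for n in range(15):
    x, (lo, hi) = step(x), istep(lo, hi)
    print(f"n={n:2d}  true error={abs(x - math.sqrt(2)):8.1e}  "
          f"interval width={hi - lo:8.1e}")
```

The true error falls below 10⁻³ while the guaranteed bound explodes: the bound is sound, just uselessly conservative.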
I think I can answer the question about Ayn Rand. Turn away from her heroes and look at her villains. They seem realistic and scary. How did she manage it? Well, she left Russia in 1926, having seen the aftermath of the revolution up close and personal. So I guessed that she attained realism by drawing on her own real-life experiences. I wasn't happy with this answer, because her villains were too redolent of the corporatist Newspeak of the Heath-Wilson years, too Westernised for Bolsheviks. I knew little of the Lenin years, so I left this little puzzle on the back burner.
Recent financial turmoil has sparked interest in the causes of the Great Depression. The basic tale is of a monetary contraction of one third. Trade barriers made a small contribution. I was not happy with the basic tale because the quantity theory of money suggests that a monetary contraction should lead to deflation. After a dreadful couple of years, in which both prices and wages fall by a third, the economy should start working as before, running on a third less money. Why did the Great Depression last so long?
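To spell out the quantity-theory arithmetic (this is the textbook equation of exchange, not anything specific to the 1930s data):

\[ MV = PQ \]

with money supply $M$, velocity $V$, price level $P$ and real output $Q$. Hold $V$ and $Q$ fixed: if $M$ falls by a third, then $P$ must fall by a third to balance, leaving real activity untouched once prices and wages have adjusted.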
Enter the National Recovery Administration. Its job was to introduce price and wage controls to stop deflation. This leads to a situation in which Roosevelt doesn't understand why unemployment is 25%, but he does know that he has done something right, because the real wages of the still-employed have risen. Whoops! The NRA stopped the economy working around the failure of the Federal Reserve, and introduced lots of police-state-style snooping to enforce the price and wage controls, plus the inevitable corruption and arbitrage opportunities for those in the know, with the pull to take advantage of the situation.
Ayn Rand comes to the US in 1926, aged 21, glad to escape the Soviet system. Three years later, the Crash. That is followed by Americans making extraordinary policy errors and the American economy going down the tubes, as though John Galt were seducing the competent people away. Plus she gets to learn the Newspeak used to justify those policy errors and defend them from criticism. She goes on to write her wish-fulfilment fantasies, in which these events don't just happen; there is a grand historical design behind them. Her villains are skillfully written. It is worth reading Atlas Shrugged, all 3^^^3 pages of it, to meet them.
Her personal history is fleeing halfway round the world to escape the bad guys, only to have "them" catch up with her seven years later.
This seems closely related to inside view versus outside view. The think-lobe of the brain comes up with a cunning plan. The plan breaks an ethical rule, but calculation shows it is for the greater good. The executive-lobe of the brain then ponders the outside view. Everyone who has executed an evil cunning plan has run a calculation of the greater good and had their plan endorsed. So the calculation lacks outside-view credibility.
What kind of evidence could give outside-view credibility? Consider a plan with lots of traceability to previous events. If it goes badly, past events will have to be re-interpreted, and much learning will take place. Well, people generally don't learn from the past. If the think-lobe's cunning plan retains enough debugging information to avoid going wrong and then going wrong again in the same way, that distinguishes it from what people usually do and gives it outside-view credibility.
Randomised controlled trials of medical treatments can be attacked on ethical grounds from both sides. They deny some patients a treatment that is quite likely beneficial. They inflict an unproven and potentially dangerous treatment on others. Both attacks lack outside-view credibility: we always think we know. The randomised trial itself has outside-view credibility. It places us in a position where we can do the right thing without having to use our judgement or be clever.
On the question of blocking thoughts, may I offer a personal anecdote, conscious that readers of Overcoming Bias will read it heterophenomenologically?
Years ago, when my health was good, I had a Buddhist meditation practice of great vigour and depth. Sitting on my cushion, noticing my train of thought pull into the station of consciousness, refusing to board the train and watching the thoughts leave, I would become more and more aware that it was the same old crap coming round again and again.
Forcibly stopping my thoughts had always worked badly. I coined a meditation slogan encapsulating what I had learned: when thoughts spin round in your head, like the wheels on a bicycle, don't apply the brakes, just stop pedalling.
There was little pleasure to be had, pedalling away, only to see the same old crap coming back into view yet again. No pedalling. No thought.
That was bloody scary. I was an intellectual. All these clever thoughts? They were me, my identity, my core. Without them, who was I? Did I still like cats? Did I still like music?
I needn't have worried. After a few days Mara noticed that my mind was calm and free from distraction. Did He concede defeat, admitting that another human had gained enlightenment and slipped from his grasp? No, of course not. I had seen through the old familiar crap, but it was crap, and there was nothing to stop Him improving the quality. I had learned to resist the temptation of low-quality distracting thoughts, but all that happened was that my mind came up with more creative, more clever, more insightful, and more distracting thoughts.
Soon I was caught up in them, back to business as usual.
I see a secular moral to this tale. If you want more insightful and creative thoughts, all you have to do is stop recycling the usual crud. You would guess that withdrawing your mental energy from the pumps that circulate the usual shit round your head would leave an empty silence, but the mind doesn't work like that.
Study this deranged rant. Its ardent theism is expressed in its praise of the miracles God can do, if He chooses.
And yet... there is something not quite right here. Isn't it merely cloakatively theistic? Isn't the ringing denunciation of "Crimes against silence" militant atheism at its most strident?
So here is my idea: don't try to doubt a whole core belief. That is too hard. Probe instead for the boundary. Write a little fiction, perhaps a science-fiction story of first contact, in which you encounter a curious character from a different culture. Write him a borderline belief, troublingly odd to both sides in a dispute about which your own mind is made up. He sits on one of our culture's fences. What is his view like from up there?
Is he "really" on your side, or "really" on the other side? Now there is a doubt you can actually be curious about. You have a thread to pull on; what unravels if you tug?
Wearing my mechanical engineer's hat, I say "Don't be heavy-handed." Set your over-force trips low. When the switch is hard to flip or the mechanism is reluctant to operate, fail and signal the default over-force exception.
You can always wiggle it, or lubricate it and try again, provided you haven't forced it and broken it. For me, trying is about running the compiler with the switches set to retain debugging information and running the code in verbose mode. It is about setting up a receiver down-range. Maybe the second rocket will blow up, just like the first did, but at least I will still be recording the telemetry.
I think that Plan A will be stymied by Problem Y, but I try it anyway, before I try to solve Problem Y. My optimistic side is hoping Problem Y might not actually matter, while my pessimistic side thinks Problem X is lurking in the shadows, ready to emerge and kill Plan A whether I solve Problem Y or not.
I try in order to gain information.
It is usually important to proceed with confidence. When things go wrong they throw off fragments of broken machinery and fragments of information. Surprised, we fail to catch the flying fragments of information, and must try again, forewarned.
Two meanings of the word "try" fight for mind share.
To try: to position oneself in the right spot to catch the flying fragments of information flung out from failure.
To try: the psychological mechanism that lets us fail through faint-heartedness, again and again, but never quite understand why.
Two meanings sharing a word is a common problem with natural language. The particular danger I see for Eliezer is when the second meaning hides the first.
He says he isn't ready to write code. If you don't try to code up a general artificial intelligence, you don't succeed, but you don't fail either. So you can't fail earlier and harder than you ever expected, and cannot come to suspect that the Singularity is far. If you won't try, you'll never know.
The interesting question is "What would it be like if we lived for 700 years instead of 70 years?". This is more interesting than contemplating immortality, because it pries open the issue of scaling. What changes by a factor of 10? What changes by a factor of 100? What changes by a factor of √10 ≈ 3.162?
Presumably ancient towers become a straight ten times less impressive.
Speed limits would be set much lower. You lose ten times as much when you die in a car accident, so you would be willing to spend more time on your journey to avoid that.
An academic could go into his subject ten times as deeply, or study ten different subjects. A longer life raises interesting questions about the depth-versus-breadth trade-off.
Proportional scaling suggests that we would study history ten times as long, but would we? We might no longer feel the need, because we would have so much more personal experience. Oppositely, we might feel that we would have to live with the consequences of avoidable error for so much longer that we would be willing to spend a greater proportion of our time learning the lessons of history.
When someone sets out to write an atheistic hymn - "Hail, oh unintelligent universe," blah, blah, blah - the result will, without exception, suck.
I've just submitted my Militant Atheists' Marching Song to Reddit. Does it suck? Since I wrote the words, it is not my place to judge.
Whether or not there is a good definition of intelligence depends on whether there is a sufficiently unitary concept there to be defined. That is crucial, because it also determines whether AI is seedable or not.
Think about a clever optimising compiler that runs a big search, looking for clever ways of coding the source that it is compiling. Perhaps it has been honed in competitions based on compiling a variety of programs, running them, and measuring their performance. Now use it to compile itself. It runs faster, so it can search more deeply and produce cleverer, faster code. So use it to compile itself again!
One hopes that the speed-ups from successive self-compilations keep adding a little: 1, 1+r, 1+r+r², 1+r+r²+r³, … If it works like that, then the limiting speed-up is 1/(1−r), with a singularity at r=1, when the software wakes up. So far, software disappoints these hopes. The tricks work once, add a tiny improvement the second time around, and make things worse on the third go, for complicated and impenetrable reasons. It appears very different from the example of a nuclear reactor, in which each round of neutron multiplication is like the previous round and runaway is a real possibility.
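As a toy model of the hoped-for compounding (an illustrative Python sketch; the rate r is a made-up parameter, not a measurement of any real compiler):

```python
def total_speedup(r, n):
    """Cumulative speed-up after n self-compilations, if each round
    multiplies the previous round's gain by r: 1 + r + ... + r^n."""
    return sum(r ** k for k in range(n + 1))

for r in (0.5, 0.9, 0.99):
    print(f"r={r}: after 10 rounds {total_speedup(r, 10):7.3f}, "
          f"limit {1 / (1 - r):6.1f}")
# For r < 1 the gains level off at 1/(1 - r); only r >= 1 runs away.
```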
The core issue is the precise sense in which intelligence is real. If it is real in the sense of there being a unifying, codifiable theme, then we can define it and write a seed AI. But maybe it is real only in the "I know it when I see it" sense: each increment is unique and never comes as "more of the same".