The more privileged lover
David is an atheist. He is dating Jane, who is a devout Christian. They have a fairly good relationship, except in the sex department: David thinks that having regular sex is important in a relationship, whereas Jane would like to remain a virgin until marriage for religious reasons. Before they became a couple, David assumed that not having sex was something he could tolerate, since he liked Jane very much and was eager to be with her. However, as the months have gone by, David has become increasingly frustrated with the lack of physical intimacy, and is beginning to consider breaking up with Jane, even though he is still very fond of her.
What would you advise David to do? In my experience, the most common response would be to advise David to leave Jane. Some people might even say that David shouldn't have started the relationship with Jane in the first place, since he has known all along that she intends to remain a virgin until marriage. They say that, if he really loves her and respects her religious beliefs, he should not ask her to have sex before marriage. Instead, he should break up with her so that they may both go on to look for more suitable partners.
Why is it that nobody says that Jane shouldn't have started the relationship with David in the first place, since she has known all along that he thinks that sexual compatibility/activity is very important in a relationship? Why is it that nobody says that if she really loves him and respects his values, she should not make him abstain, and should instead engage in sex with him? Why do her religious beliefs render her position more privileged?
Perhaps the response would be this: Well, the criticism is mostly directed at David because he is the one who went into the relationship with unrealistic views of what he can or cannot do. Besides, since Jane laid out the terms clearly before they became a couple, she can hardly be faulted.
That is a reasonable response. But imagine if the situation were reversed: What if, while they were still discussing whether to commit to each other, David had laid out the terms, namely that Jane would be expected to have sex regularly with him? Even if she agreed, chances are that people would say that he should have respected her religious convictions. Those who criticise David might point out that perhaps Jane was very reluctant when agreeing to it, but thought that it was something on which she could compromise, and that David should not have put her in such a difficult position in the first place. Well, then, perhaps David was very reluctant when agreeing to not have sex as well, but thought that it was something on which he could compromise, and Jane should not have put him in such a difficult position in the first place.
The emotional harm done to Jane by making her engage in pre-marital sexual activity could be as severe as the emotional harm done to David by making him agree to abstain from pre-marital sexual activity, and yet few people acknowledge it, at least in my experience. Or maybe many people do acknowledge it, but nevertheless there are few of them who would admit it openly and defend David. Why is wanting sex worse than not wanting sex?
What is it about being religious that gives one the more privileged position in love?
Seize the Maximal Probability Moment
Try to remember three or four things that you think would be effective hacks for your life but that you have not so far implemented. Really, find three.
Probably that was not so hard.
Now think of the moment in time at which you had the maximal probability of implementing each of those hacks. Sometimes you had no idea that was the moment. But sometimes you did, like when a friend tells you "I just read this great paper on how people report cartoons being funnier when their face is shaped in a more smiling fashion." and you thought "Great! I may one day implement the algorithm: if studying, force a smile".
You knew you didn't plan to read the article, you knew you trusted that friend, and you knew you'd either forget it later, or in any case that from that moment on, the likelihood of your implementing the algorithm would only fall.
So my hack of the day is: If you feel you are likely at the maximal probability moment to start a new policy, start immediately.
My friend was telling me about how he went abroad to do research: "...so at this place people used very strong lights as cognitive enhancement and yadda yadda yadda... (stopped listening for 40s) yadda yadda yadda... and I wrote a paper on..." By that time my room had an extra 110W light working.
Just now I thought: It was good I installed that light. Why didn't I do the same when I felt like finding a personalized-shirt website where the front would read "I don't want to talk about: [list]" and the back "Pick your topic: [list]", to once and for all stop the gossip and sports ice-breakers?
I didn't seize the maximal probability moment. That's what happened.
Then I noticed that that was the maximal probability moment to install in my mind the maximal-probability-moment algorithm; I did, and that was the maximal probability moment of writing this post.
Now if you'll excuse me, I have a shirt to buy.
Case Study: the Death Note Script and Bayes
"Who wrote the Death Note script?"
I give a history of the 2009 leaked script, discuss internal and external evidence for its authenticity (including stylometrics), and then give a simple step-by-step Bayesian analysis of each point. We finish with high confidence in the script's authenticity, discussion of how this analysis was surprisingly enlightening, and what followup work the analysis suggests would be most valuable.
[Link] Hey Extraverts: Enough is Enough
A fun article by Alan Jacobs. Check out the paper he cites; if anyone finds a non-paywalled version, I'll edit in the link here. HT to Michael Bloom for the link.
So in 2005 a very thoroughly researched and well-argued scholarly article was published that demonstrates, quite clearly, that group productivity is an illusion. All those brainstorming sessions and group projects you’ve been made to do at school and work? Useless. Everybody would have been better off working on their own. Here’s the abstract of the article:
"It has consistently been found that people produce more ideas when working alone as compared to when working in a group. Yet, people generally believe that group brainstorming is more effective than individual brainstorming. Further, group members are more satisfied with their performance than individuals, whereas they have generated fewer ideas. We argue that this ‘illusion of group productivity’ is partly due to a reduction of cognitive failures (instances in which someone is unable to generate ideas) in a group setting. Three studies support that explanation, showing that: (1) group interaction leads to a reduction of experienced failures and that failures mediate the effect of setting on satisfaction; and (2) manipulations that affect failures also affect satisfaction ratings. Implications for group work are discussed."
Has the puncturing of that “illusion of group productivity” had any effect? Of course not. Groupthink is as powerful as ever. Why is that?
I’ll tell you. It’s because the world is run by extraverts. (And FYI, that’s the proper spelling: extrovert is common but wrong, because extra- is the proper Latin prefix.) Extraverts love meetings — any possible excuse for a meeting, they’ll seize on it. They might hear others complain about meetings, but the complaints never sink in: extraverts can’t seem to imagine that the people who say they hate meetings really mean it. “Maybe they hate other meetings, but I know they’ll enjoy mine, because I make them fun! Besides, we’ll get so much done!” (Let me pause here to acknowledge that the meeting-caller is only one brand of extravert: some of the most pronouncedly outgoing people I know hate meetings as much as I do.)
The problem with extraverts — not all of them, I grant you, but many, so many — is a lack of imagination. They simply assume that everyone will feel about things as they do. “The more the merrier, right? It’s a proverb, you know.” Yes it is: a proverb coined by an extravert. So people I do not know will regularly send me emails: “Hey, I’ll be in your town soon and I’d love to have lunch or coffee. Just let me know which you’d prefer!” Notice the missing option: not being forced to have a meal and make conversation with a stranger. (Once a highly extraverted friend of mine was trying to get me involved in some project and said, cheerily, “You’ll get to meet lots of new people!” I turned to him and replied, “You realize, don’t you, that you’ve just ensured my refusal to participate?”)
I really do need to find more written by this author. But while I certainly share this sentiment, I have a hard time figuring out how common it is. After all, people don't look good saying they "don't like meeting new people".
Though my introversion has grown deeper in recent years, it’s always been there. When I was a kid I’d read about people who got the chance to meet their favorite musician or sports hero or whatever, and I’d think: No way. I would have preferred then, and still prefer now, to write a letter to whomever I deeply admire and hope for a response. I even deliberately lost the school-wide spelling bee in fifth grade so I wouldn’t have to participate in the city-wide competition: it would have meant meeting so many strange kids!
Spelling bees are, of course, organized by extraverts — indeed, pretty much everything that is organized is organized by extraverts, which in turn is their justification for their ruling of the world. “See? If we didn’t organize things they wouldn’t get organized at all!” Precisely, mutters the introvert, under his breath, to avoid confrontation.
So, extraverts of the world, I invite you to make a New Year’s resolution: Refrain from organizing stuff. Don’t plan parties or outings or, God forbid, “team-building exercises.” Just don’t call meetings. (I would ask you to refrain from calling unnecessary meetings, but so many of you think almost all meetings necessary that it’s best you not call them at all.) Leave people alone and let them get their work done. Those who want to socialize can do it after work. I’ll not tell you you’ll enjoy it: you won’t. You’ll be miserable, at least at first, because you won’t be pulling others’ puppet-strings. But everyone will be more productive, and many people will be happier. Give it a try. Let go for a year. Just leave us alone.
A cure for akrasia
Some of you guys have been a little down on philosophy articles lately. This article by Roy Sorensen appeared in Mind in 1997, and it is awesome; therefore all philosophy papers are awesome.
Published in Mind 106/424 (October 1997) 743
A CURE FOR INCONTINENCE!
Tired of being weak-willed? Do you want to end procrastination and back-sliding? Are you envious of those paragons of self-control who always do what they consider best?
Thanks to a breakthrough in therapeutic philosophy, you too can now close the gap between what you think you ought to do and what you actually do. Just send $1000 to the address below and you will never again succumb to temptation. This is a MONEY-BACK GUARANTEE. The first time you do something that you know to be irrational, your money will be refunded, no questions asked. Of course, you might nevertheless have some questions. How can you act incontinently when you know that the "irrational" act will earn you a $1000 refund? Well, that's what's revolutionary in this new cure for incontinence.
Old approaches focus on punishing the weak willed. This follows the antiquated behaviorist principle that negative reinforcement extinguishes bad behavior. The new humanitarian approach rewards incontinence -- and lavishly at that. The key is to make the reward so strongly motivating that an otherwise irrational act becomes rational.
Some may seek a refund on the grounds that the reward for incontinence played no role in their (apparently) incontinent act; although aware of the reward, they would have performed the act anyway. These folks should distinguish between actual and hypothetical incontinence. If you act in accordance with your judgement as to what is best overall, then you did nothing irrational.
True, the hypothetical incontinent act is a sign that you have a weak will. But the presence of this disposition gives you all the more reason to block its manifestation -- by sending $1000. Granted, there are people who cannot be swayed from temptation by a mere $1000. These recalcitrant individuals are advised to send in more than $1000. Give until it hurts.
Rush your cheque to:
Dr. Roy Sorensen
Department of Philosophy
New York University
503 Main Building
100 Washington Square East
New York, New York 10003-6688
(Note, address is not current)
Modafinil now covered by insurance
Modafinil is now covered by at least one insurance company in Massachusetts, under which it costs less than $1 for a 200 mg pill. I predict a huge college black-market trade in the drug.
Dragon Ball's Hyperbolic Time Chamber
A time dilation tool from an anime is discussed for its practical use on Earth; there seem to be surprisingly few uses, and none that would change the world, due to the severe penalties humans would incur while using it, and basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except that most of the time dilation disadvantages do not apply or can be ameliorated, and hence any speedups could be quite effectively exploited. I suggest that skeptics of the idea that speedups give advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies.
Master version on gwern.net
An Anthropic Principle Fairy Tale
A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. The anomalies are of a sort that, when similar anomalies have been observed in other engines, 25% of the time they indicated a fatal problem, such that the engine would explode virtually every time it tried to jump. 25% of the time it was a false positive, and the engine exploded only at its normal negligible rate. 50% of the time they indicated a serious problem, such that each jump had about a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump: if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program your robot to believe that, since the engine worked 10 times, the anomaly was probably a false positive, and so it should make the jump? Or did you program your robot to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives are in the balance here. A little girl is too sick to leave her bed, she doesn't have much time left, you can hear the fluid in her lungs as she asks you "are you aware of the anthropic principle?" Well? Are you?
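For the first robot's reasoning, the Bayesian update is mechanical enough to write down. The sketch below is my own illustration, not from the story: three hypotheses about the engine, each with a prior from the accident statistics and a per-jump survival probability, updated on the evidence of ten safe jumps. ("Virtually every time" for the fatal case is modeled as a tiny survival probability rather than exactly zero.)

```python
# Hypotheses: (prior probability, chance of surviving one jump).
hypotheses = {
    "fatal":          (0.25, 1e-6),  # explodes virtually every jump
    "false positive": (0.25, 1.0),   # normal negligible explosion rate
    "serious":        (0.50, 0.5),   # ~50/50 chance of exploding per jump
}

jumps = 10

# Unnormalized posterior: prior times likelihood of surviving all ten jumps.
posterior = {h: prior * p_survive ** jumps
             for h, (prior, p_survive) in hypotheses.items()}

# Normalize so the posteriors sum to 1.
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({h} | 10 safe jumps) = {p:.4f}")
```

Ten survivals multiply the "serious" hypothesis's likelihood by 2^-10, so nearly all the posterior mass ends up on the false-positive hypothesis, which is exactly the conclusion the "Androidic Principle" robot refuses to draw.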
The Fallacy of Large Numbers
I've been seeing this a lot lately, and I don't think it's been written about here before.
Let's start with a motivating example. Suppose you have a fleet of 100 cars (or horses, or people, or whatever). For any given car, on any given day, there's a 3% chance that it'll be out for repairs (or sick, or attending grandmothers' funerals, or whatever). For simplicity's sake, assume all failures are uncorrelated. How many cars can you afford to offer to customers each day? Take a moment to think of a number.
Well, 3% failure means 97% success. So we expect 97 to be available and can afford to offer 97. Does that sound good? Take a moment to answer.
Well, maybe not so good. Sometimes we'll get unlucky. And not being able to deliver on a contract is painful. Maybe we should reserve 4 and only offer 96. Or maybe we'll play it very safe and reserve twice the needed number. 6 in reserve, 94 for customers. But is that overkill? Take note of what you're thinking now.
The likelihood of having more than 4 unavailable is 18%. The likelihood of having more than 6 unavailable is 3.1%. About once a month. Even reserving 8, requiring 9 failures to get you in trouble, gets you in trouble 0.3% of the time. More than once a year. Reserving 9 -- three times the expected number -- gets the risk down to 0.087%, or a little less than once every three years. A number we can finally feel safe with.
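The spot-check above is a few lines of exact binomial arithmetic; here is one way to reproduce it (the reserve sizes and printout are mine, the numbers are the post's):

```python
from math import comb

n, p = 100, 0.03  # 100 cars, each independently out 3% of days

def p_more_than(k):
    """P(more than k cars unavailable on a given day), exact binomial tail."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

for reserve in (3, 4, 6, 8, 9):
    print(f"reserve {reserve}: trouble on {p_more_than(reserve):.3%} of days")
```

Running it recovers the figures in the text: about 18% for a reserve of 4, 3.1% for 6, 0.3% for 8, and under 0.1% for 9.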
So much for expected values. What happened to the Law of Large Numbers? Short answer: 100 isn't large.
The Law of Large Numbers states that for sufficiently large samples, the results look like the expected value (for any reasonable definition of like).
The Fallacy of Large Numbers states that your numbers are sufficiently large.
This doesn't just apply to expected values. It also applies to looking at a noisy signal and handwaving that the noise will average away with repeated measurements. Before you can say something like that, you need to look at how many measurements, and how much noise, and crank out a lot of calculations. This variant is particularly tricky because you often don't have numbers on how much noise there is, making it hard to do the calculation. When the calculation is hard, the handwave is more tempting. That doesn't make it more accurate.
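To make "look at how many measurements, and how much noise" concrete: for i.i.d. noise with standard deviation sigma, the standard error of the mean of n measurements is sigma / sqrt(n), so hitting a target precision eps at roughly 95% confidence (z = 1.96) needs n >= (z * sigma / eps)^2. The function below is my illustrative sketch under that simple i.i.d. assumption, not something from the post:

```python
from math import ceil

def measurements_needed(sigma, eps, z=1.96):
    """Smallest n such that z * sigma / sqrt(n) <= eps."""
    return ceil((z * sigma / eps) ** 2)

# Noise of sigma = 10 units, target precision of 1 unit:
print(measurements_needed(10, 1))  # 385 measurements, not a handful
```

The point is that the required n grows with the square of the noise-to-precision ratio, which is exactly why the handwave "it'll average away" so often fails when you actually run the numbers.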
I don't know of any general tools for saying when statistical approximations become safe. The best thing I know is to spot-check like I did above. Brute-forcing combinatorics sounds scary, but Wolfram Alpha can be your friend (as above). So can Python, which has native bignum support. Python has a reputation for being slow at number crunching, but with n < 1000 and a modern CPU it usually doesn't matter.
One warning sign is if your tools were developed in a very different context than where you're using them. Some approximations were invented for dealing with radioactive decay, where n resembles Avogadro's Number. Applying these tools to the American population is risky. Some were developed for the American population. Applying them to students in your classroom is risky.
Another danger is that your dataset can shrink. If you've validated your tools for your entire dataset, and then thrown out some datapoints and divided the rest along several axes, don't be surprised if some of your data subsets are now too small for your tools.
This fallacy is related to "assuming events are uncorrelated" and "assuming distributions are normal". It's a special case of "choosing statistical tools based on how easy they are to use whether they're applicable to your use-case or not".
Friendly AI and the limits of computational epistemology
Very soon, Eliezer is supposed to start posting a new sequence, on "Open Problems in Friendly AI". After several years in which the Singularity Institute's activities were dominated by the topic of human rationality, this ought to mark the beginning of a new phase, one in which it is visibly working on artificial intelligence once again. If everything comes together, then it will now be a straight line from here to the end.
I foresee that, once the new sequence gets going, it won't be that easy to question the framework in terms of which the problems are posed. So I consider this my last opportunity for some time, to set out an alternative big picture. It's a framework in which all those rigorous mathematical and computational issues still need to be investigated, so a lot of "orthodox" ideas about Friendly AI should carry across. But the context is different, and it makes a difference.
Begin with the really big picture. What would it take to produce a friendly singularity? You need to find the true ontology, find the true morality, and win the intelligence race. For example, if your Friendly AI was to be an expected utility maximizer, it would need to model the world correctly ("true ontology"), value the world correctly ("true morality"), and it would need to outsmart its opponents ("win the intelligence race").
Now let's consider how SI will approach these goals.
The evidence says that the working ontological hypothesis of SI-associated researchers will be timeless many-worlds quantum mechanics, possibly embedded in a "Tegmark Level IV multiverse", with the auxiliary hypothesis that algorithms can "feel like something from inside" and that this is what conscious experience is.
The true morality is to be found by understanding the true decision procedure employed by human beings, and idealizing it according to criteria implicit in that procedure. That is, one would seek to understand conceptually the physical and cognitive causation at work in concrete human choices, both conscious and unconscious, with the expectation that there will be a crisp, complex, and specific answer to the question "why and how do humans make the choices that they do?" Undoubtedly there would be some biological variation, and there would also be significant elements of the "human decision procedure", as instantiated in any specific individual, which are set by experience and by culture, rather than by genetics. Nonetheless one expects that there is something like a specific algorithm or algorithm-template here, which is part of the standard Homo sapiens cognitive package and biological design; just another anatomical feature, particular to our species.
Having reconstructed this algorithm via scientific analysis of the human genome, brain, and behavior, one would then idealize it using its own criteria. This algorithm defines the de-facto value system that human beings employ, but that is not necessarily the value system they would wish to employ; nonetheless, human self-dissatisfaction also arises from the use of this algorithm to judge ourselves. So it contains the seeds of its own improvement. The value system of a Friendly AI is to be obtained from the recursive self-improvement of the natural human decision procedure.
Finally, this is all for naught if seriously unfriendly AI appears first. It isn't good enough just to have the right goals, you must be able to carry them out. In the global race towards artificial general intelligence, SI might hope to "win" either by being the first to achieve AGI, or by having its prescriptions adopted by those who do first achieve AGI. They have some in-house competence regarding models of universal AI like AIXI, and they have many contacts in the world of AGI research, so they're at least engaged with this aspect of the problem.
Upon examining this tentative reconstruction of SI's game-plan, I find I have two major reservations. The big one, and the one most difficult to convey, concerns the ontological assumptions. In second place is what I see as an undue emphasis on the idea of outsourcing the methodological and design problems of FAI research to uploaded researchers and/or a proto-FAI which is simulating or modeling human researchers. This is supposed to be a way to finesse philosophical difficulties like "what is consciousness anyway"; you just simulate some humans until they agree that they have solved the problem. The reasoning goes that if the simulation is good enough, it will be just as good as if ordinary non-simulated humans solved it.
I also used to have a third major criticism, that the big SI focus on rationality outreach was a mistake; but it brought in a lot of new people, and in any case that phase is ending, with the creation of CFAR, a separate organization. So we are down to two basic criticisms.
First, "ontology". I do not think that SI intends to just program its AI with an apriori belief in the Everett multiverse, for two reasons. First, like anyone else, their ventures into AI will surely begin with programs that work within very limited and more down-to-earth ontological domains. Second, at least some of the AI's world-model ought to be obtained rationally. Scientific theories are supposed to be rationally justified, e.g. by their capacity to make successful predictions, and one would prefer that the AI's ontology results from the employment of its epistemology, rather than just being an axiom; not least because we want it to be able to question that ontology, should the evidence begin to count against it.
For this reason, although I have campaigned against many-worlds dogmatism on this site for several years, I'm not especially concerned about the possibility of SI producing an AI that is "dogmatic" in this way. For an AI to independently assess the merits of rival physical theories, the theories would need to be expressed with much more precision than they have been in LW's debates, and the disagreements about which theory is rationally favored would be replaced with objectively resolvable choices among exactly specified models.
The real problem, which is not just SI's problem, but a chronic and worsening problem of intellectual culture in the era of mathematically formalized science, is a dwindling of the ontological options to materialism, platonism, or an unstable combination of the two, and a similar restriction of epistemology to computation.
Any assertion that we need an ontology beyond materialism (or physicalism or naturalism) is liable to be immediately rejected by this audience, so I shall immediately explain what I mean. It's just the usual problem of "qualia". There are qualities which are part of reality - we know this because they are part of experience, and experience is part of reality - but which are not part of our physical description of reality. The problematic "belief in materialism" is actually the belief in the completeness of current materialist ontology, a belief which prevents people from seeing any need to consider radical or exotic solutions to the qualia problem. There is every reason to think that the world-picture arising from a correct solution to that problem will still be one in which you have "things with states" causally interacting with other "things with states", and a sensible materialist shouldn't find that objectionable.
What I mean by platonism, is an ontology which reifies mathematical or computational abstractions, and says that they are the stuff of reality. Thus assertions that reality is a computer program, or a Hilbert space. Once again, the qualia are absent; but in this case, instead of the deficient ontology being based on supposing that there is nothing but particles, it's based on supposing that there is nothing but the intellectual constructs used to model the world.
Although the abstract concept of a computer program (the abstractly conceived state machine which it instantiates) does not contain qualia, people often treat programs as having mind-like qualities, especially by imbuing them with semantics - the states of the program are conceived to be "about" something, just like thoughts are. And thus computation has been the way in which materialism has tried to restore the mind to a place in its ontology. This is the unstable combination of materialism and platonism to which I referred. It's unstable because it's not a real solution, though it can live unexamined for a long time in a person's belief system.
An ontology which genuinely contains qualia will nonetheless still contain "things with states" undergoing state transitions, so there will be state machines, and consequently, computational concepts will still be valid, they will still have a place in the description of reality. But the computational description is an abstraction; the ontological essence of the state plays no part in this description; only its causal role in the network of possible states matters for computation. The attempt to make computation the foundation of an ontology of mind is therefore proceeding in the wrong direction.
But here we run up against the hazards of computational epistemology, which is playing such a central role in artificial intelligence. Computational epistemology is good at identifying the minimal state machine which could have produced the data. But it cannot by itself tell you what those states are "like". It can only say that X was probably caused by a Y that was itself caused by Z.
Among the properties of human consciousness are knowledge that something exists, knowledge that consciousness exists, and a long string of other facts about the nature of what we experience. Even if an AI scientist employing a computational epistemology managed to produce a model of the world which correctly identified the causal relations between consciousness, its knowledge, and the objects of its knowledge, the AI scientist would not know that its X, Y, and Z refer to, say, "knowledge of existence", "experience of existence", and "existence". The same might be said of any successful analysis of qualia, knowledge of qualia, and how they fit into neurophysical causality.
It would be up to human beings - for example, the AI's programmers and handlers - to ensure that entities in the AI's causal model were given appropriate significance. And here we approach the second big problem, the enthusiasm for outsourcing the solution of hard problems of FAI design to the AI and/or to simulated human beings. The latter is a somewhat impractical idea anyway, but here I want to highlight the risk that the AI's designers will have false ontological beliefs about the nature of mind, which are then implemented apriori in the AI. That strikes me as far more likely than implanting a wrong apriori about physics; computational epistemology can discriminate usefully between different mathematical models of physics, because it can judge one state machine model as better than another, and current physical ontology is essentially one of interacting state machines. But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
In a phrase: to use computational epistemology is to commit to state-machine materialism as your apriori ontology. And the problem with state-machine materialism is not that it models the world in terms of causal interactions between things-with-states; the problem is that it can't go any deeper than that, yet apparently we can. Something about the ontological constitution of consciousness makes it possible for us to experience existence, to have the concept of existence, to know that we are experiencing existence, and similarly for the experience of color, time, and all those other aspects of being that fit so uncomfortably into our scientific ontology.
It must be that the true epistemology, for a conscious being, is something more than computational epistemology. And maybe an AI can't bootstrap its way to knowing this expanded epistemology - because an AI doesn't really know or experience anything, only a consciousness, whether natural or artificial, does those things - but maybe a human being can.

My own investigations suggest that the tradition of thought which made the most progress in this direction was the philosophical school known as transcendental phenomenology. But transcendental phenomenology is very unfashionable now, precisely because of apriori materialism. People don't see what "categorial intuition" or "adumbrations of givenness" or any of the other weird phenomenological concepts could possibly mean for an evolved Bayesian neural network; and they're right, there is no connection. But the idea that a human being is a state machine running on a distributed neural computation is just a hypothesis, and I would argue that it is a hypothesis in contradiction with so much of the phenomenological data, that we really ought to look for a more sophisticated refinement of the idea.

Fortunately, 21st-century physics, if not yet neurobiology, can provide alternative hypotheses in which complexity of state originates from something other than concatenation of parts - for example, entanglement, or from topological structures in a field. In such ideas I believe we see a glimpse of the true ontology of mind, one which from the inside resembles the ontology of transcendental phenomenology; which in its mathematical, formal representation may involve structures like iterated Clifford algebras; and which in its biophysical context would appear to be describing a mass of entangled electrons in that hypothetical sweet spot, somewhere in the brain, where there's a mechanism to protect against decoherence.
Of course this is why I've talked about "monads" in the past, but my objective here is not to promote neo-monadology, that's something I need to take up with neuroscientists and biophysicists and quantum foundations people. What I wish to do here is to argue against the completeness of computational epistemology, and to caution against the rejection of phenomenological data just because it conflicts with state-machine materialism or computational epistemology. This is an argument and a warning that should be meaningful for anyone trying to make sense of their existence in the scientific cosmos, but it has a special significance for this arcane and idealistic enterprise called "friendly AI". My message for friendly AI researchers is not that computational epistemology is invalid, or that it's wrong to think about the mind as a state machine, just that all that isn't the full story. A monadic mind would be a state machine, but ontologically it would be different from the same state machine running on a network of a billion monads. You need to do the impossible one more time, and make your plans bearing in mind that the true ontology is something more than your current intellectual tools allow you to represent.