
Using the Copernican mediocrity principle to estimate the timing of AI arrival

2 turchin 04 November 2015 11:42AM

Gott famously estimated the future time duration of the Berlin wall's existence:

“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race” (https://en.wikipedia.org/wiki/J._Richard_Gott).

The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott’s method.

AI research began in 1950, and so is now 65 years old. If we are currently at a random moment during AI research, then there is a 50% probability that AI will be created within the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within the next 1300 years is 95% (at 95% confidence the future duration is at most 19 times the past, i.e. about 1235 years). So we get a rather vague prediction that AI will almost certainly be created within the next 1300 years, and few people would disagree with that.
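Gott's bound is simple enough to compute directly. Here is a minimal Python sketch (the function and its one-sided form are my own illustration of the reasoning, not Gott's notation):

```python
def gott_future_bound(age, confidence):
    """One-sided Copernican bound: with probability `confidence` we have
    already seen at least a fraction (1 - confidence) of the total
    lifetime, so the remaining lifetime is at most
    age * confidence / (1 - confidence)."""
    return age * confidence / (1.0 - confidence)

# Berlin Wall as seen in 1969, age 8 years, 75% confidence:
print(1969 + gott_future_bound(8, 0.75))    # 1993.0 -- Gott's figure

# AI research as seen in 2015, age 65 years:
print(2015 + gott_future_bound(65, 0.50))   # 2080.0 -- the 50% bound
print(2015 + gott_future_bound(65, 0.95))   # 3250.0 -- 19 * 65 = 1235 years, ~1300
```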

But if we include the exponential growth of AI research in this reasoning (the same way as we do in the Doomsday argument, where we use birth rank instead of time and thus account for population growth) we get a much earlier predicted date.

We can get data on AI research growth from Luke's post:

“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”

From this we could conclude that the doubling time of AI research is five to ten years (updated for the recent boom in neural networks, which again suggests five years).

This means that during the next five years more AI research will be conducted than in all the previous years combined. 

If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that AI will be created within the next 15-20 years; thus it will almost certainly be created before 2035.
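The same bound can be computed over cumulative research effort instead of time. A sketch under the post's assumptions (effort doubles every d years, and we sit at a random point in total effort; the closed form is my derivation, not from the post):

```python
import math

def effort_weighted_bound(d, confidence):
    """Past effort grows as 2**(t/d), so the next t years produce
    past * (2**(t/d) - 1) units of effort.  Capping future effort at
    past * c/(1-c) and solving for t gives t = d * log2(1/(1-c))."""
    return d * math.log2(1.0 / (1.0 - confidence))

print(effort_weighted_bound(5, 0.50))  # 5.0 years -> 2020; one doubling
                                       # period equals all past effort
print(effort_weighted_bound(5, 0.95))  # ~21.6 years -> ~2037, roughly the
                                       # post's 15-20 year figure
```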

This conclusion itself depends on several assumptions:

•   AI is possible

•   The exponential growth of AI research will continue 

•   The Copernican principle has been applied correctly.

 

Interestingly, this coincides with the predictions of other methods of AI timing estimation:

•   Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)

•   Surveys of experts in the field

•   Prediction of the Singularity based on extrapolation of historical acceleration (Forrester – 2026, Panov-Skuns – 2015-2020)

•   Brain emulation roadmap

•   Predictions based on computer power reaching brain equivalence

•   Plans of major companies

 

It is clear that this implementation of the Copernican principle may have many flaws:

1. One possible counterargument here is something akin to Murphy's law, specifically one which claims that any particular complex project requires much more time and money than expected before it can be completed. It is not clear how this could be applied to many competing projects. But the field of AI is known to be more difficult than it seems to researchers.

2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably will not be able to apply the method to a time before it became known to me.

3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.

 

Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting the running times of Broadway shows. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?

 

 

List of Fully General Counterarguments

9 Gunnar_Zarncke 18 July 2015 09:49PM

Follow-up to: Knowing About Biases Can Hurt People

See also: Fully General Counterargument (LW Wiki)

A fully general counterargument [FGCA] is an argument which can be used to discount any conclusion the arguer does not like.

With the caveat that the arguer doesn't need to be aware that this is the case. But if (s)he is not aware of it, this seems like the other biases we are prone to. The question is: Is there a tendency or risk to accidentally form FGCAs? Do we fall easily into this mind-trap?

This post tries to (non-exhaustively) list some FGCAs as well as possible countermeasures.

continue reading »

Is my theory on why censorship is wrong correct?

-24 hoofwall 12 April 2015 04:03AM

So, I have next to no academic knowledge. I have literally not read or perhaps even picked up any book since eighth grade, which is where my formal education ended, and I turn 20 this year, but I am sitting on some theories pertaining to my understanding of rationality, and procrastinating about expressing them has gotten me here. I'd like to just propose my theory on why censorship is wrong, here. Please tell me whether or not you agree or disagree, and feel free to express anything else you feel you would like to in this thread. I miss bona fide argument, but this community seems way less hostile than the one community I was involved in elsewhere....

 

Also, I feel I should affirm again that my academic knowledge is almost entirely just not there... I know the LessWrong community has a ton of resources they turn to and indulge in, which is more or less a bible of rationality by which you all abide, but I have read or heard of none of it. I don't mean to offend you with my willful ignorance. Sorry. Also, sorry for possibly incorporating similes and stuff into my expression... I know many out there are on the autistic spectrum and can't comprehend it so I'll try to stop doing that unless I'm making a point.

 

Okay, so, since the following has been bothering me a lot since I joined this site yesterday and even made me think against titling this what I want, consider the written and spoken word. Humans literally decided as a species to sequence scribbles and mouth noises in an entirely arbitrary way, ascribe emotion to their arbitrary scribbles and mouth noises, and then claim, as a species, that very specific arbitrary scribbles and mouth noises are inherent evil and not to be expressed by any human. Isn't that fucking retarded?

 

I know what you may be thinking. You might be thinking, "wow, this hoofwall character just fucking wrote a fucking arbitrary scribble that my species has arbitrarily claimed to be inherent evil without first formally affirming, absolutely, that the arbitrary scribble he uttered could never be inherent evil and that writing it could never in itself do any harm. This dude obviously has no interest in successfully defending himself in argument". But fuck that. This is not the same as murdering a human and trying to conceive an excuse defending the act later. This is not the same as affecting the world in any way that has been established to be detrimental and then trying to defend the act later. This is literally sequencing the very letters of the very language the human has decided they are okay with and will use to express themselves in such a way that it reminds the indoctrinated and conditioned human of emotion they irrationally ascribe to the sequence of letters I wrote. This is possibly the purest argument conceivable for demonstrating superfluity in the human world, and the human psyche. There could never be an inherent correlation between one's emotionality and an arbitrary sequence of mouth noises or scribbles or whatever have you that exist entirely independently of the human. If one were to erase an arbitrary scribble that the human irrationally ascribes emotion to, the human would still have the capacity to feel the emotion the arbitrary scribble roused within them. The scribble is not literally the embodiment of emotionality. This is why censorship is retarded.

 

Mind you, I do not discriminate against literal retards, or blacks, or gays, or anything. I do, however, incorporate the words "retard", "nigger", and "faggot" into my vocabulary literally exclusively because it triggers humans and demonstrates the fact that the validity of one's argument and one's ability to defend themselves in argument does not matter to the human. I have at times proposed my entire argument, actually going so far as to quantify the breadth of this universe as I perceive it, the human existence, emotionality, and right and wrong before even uttering a fuckdamn swear, but it didn't matter. Humans think plugging their ears and chanting a mantra of "lalala" somehow gives themselves a valid argument for their bullshit, but whatever. Affirming how irrational the human is is a waste of time. There are other forms of censorship I should address, as well, but I suppose not before proposing what I perceive the breadth of everything less fundamental than the human to be.

 

It's probably very easy to deduce the following, but nothing can be proven to exist. Also, please do bear with what are probably my argument-by-assertion fallacies at the moment... I plan on defending myself before this post ends.

 

Any opinion any human conceives is just a consequence of their own perception, the likes of which appears to be a consequence of their physical form, the likes of which is a consequence of properties in this universe as we perceive it. We cannot prove our universe's existence beyond what we have access to in our universe as we perceive it, therefore we cannot prove that we exist. We can't prove that our understanding of existence is true existence; we can only prove, within our universe, that certain things appear to be in concurrence with the laws of this universe as we perceive it. We can propose for example that an apple we can see occupies space in this universe, but we can't prove that our universe actually exists beyond our understanding of what existence is. We can't go more fundamental than what composes our universe... We can't go up if we are mutually exclusive with the very idea of "up", or are an inferior consequence of "up" which is superior to us.

 

I really don't remember what else I would say after this but, I guess, without divulging how much I obsess about breaking emotionality into a science, I believe nudity can't be inherent evil either because it is literally the cause of us, the human, and we are necessary to be able to perceive good and evil in the first place. If humans were not extant to dominate the world and force it to tend to the end they wanted it to, anything living would just live, breed, and die, and nothing would be inherently "good" or "evil". It would just be. Until something evolved that gained the capacity to force distinctions between "good" and "evil", there would be no such constructs. We have no reason to believe there would be. I don't know how I can affirm that further. If nudity - and exclusively human nudity, mind you - were to be considered inherent evil, that would mean that the human is inherent evil, that everything the human perceives is inherent evil, and that the human's understanding of "rationality" is just a poor, grossly-misled attempt at coping with the evil properties that they retain and is inherently worthless. Which I actually believe, but an opinion that contrary is literally satanism and fuck me if I think I'm going to be expounding all of that here. But fundamentally, human nudity cannot be inherent evil if the human's opinions are to be considered worth anything at all, and if you want to go less fundamental than that and approach it from a "but nudity makes me feel bad" standpoint, you can simply warp your perception of the world to force seeing or otherwise being reminded of things to be correlated to certain emotion within you. I'm autistic it seems so I obsess about breaking emotionality down to a science every day but this isn't the post to be talking about shit like that. In any case, you can't prove that the act of you seeing another human naked is literal evil, so fuck you and your worthless opinions.

 

Yeah... I don't know what else I could say here, or if censorship exists in forms other than preventing humans from being exposed to human nudity, or human-conceived words. I should probably assert as well that I believe the human's thinking that the inherent evil of human nudity somehow becomes okay to see when a human reaches the age of 18, or 21, or 16, or 12, depending on which subset of human you ask, is retarded. Also, by "retarded" I do not literally mean "retarded". I use the word as a trigger word that's meant to embody and convey bad emotion the human decides they want to feel when they're exposed to it. This entire post is dripping with the grossest misanthropy but I'm interested in seeing what the responses to this are... By the way, if you just downvote me without expressing to me what you think I'm doing wrong, as far as I can tell you are just satisfied with vaguely masturbating your dissenting opinion you care not for even defining in my direction, so, whatever helps you sleep at night, if you do that... but you're wrong though, and I would argue that to the death.

Is arguing worth it? If so, when and when not? Also, how do I become less arrogant?

9 27chaos 27 November 2014 09:28PM

I've had several political arguments about That Which Must Not Be Named in the past few days with people of a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so. I also seem to have offended one person I know rather severely. Also, even if I have managed to change someone's mind about something through argument, it feels as though someone will end up having to argue with them later down the line when the next controversy happens.

It's very discouraging to feel this way. It is frustrating when making an argument is taken as a reason for personal attack. And it's annoying to me to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this, for example it makes it easier to maintain irrational beliefs if you never talk to people who disagree with you.

I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior is far greater than what's justified by the facts.

How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.

Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?

How realistic would AI-engineered chatbots be?

-1 kokotajlod 11 September 2014 11:00PM

I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPC's; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.

But what if the chatbots were designed by a superintelligent AI?

If a superintelligent AI was simulating my entire life from birth, would it be able to do it (for reasonably low computational resources cost, i.e. less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?

I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.

Thoughts?

EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:

I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of a simulated entire-human-body that is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of simulated human-hands-typing-on-keyboard that are nowhere near the complexity of a brain.)

When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?

I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of

(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or

(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."

These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me, these answers would be plenty good enough.

Free online course: How to Reason and Argue starting Mon. Any interest in study group?

5 pinyaka 10 January 2014 07:25PM

I am going to take the free Coursera class "Think Again: How to Reason and Argue" starting Monday, January 13 (three days from now) and I thought I'd see if there was any interest in going through this as a group. This is one of the MIRI recommended courses under the "Heuristics and Biases" section. If you're interested and you will sign up if we get a group together, please leave a note in the comments (if you will only sign up if the group hits a specific size, please leave that requirement in the comments as well). If enough people are willing to sign up (5 or more? idk), I will start a group on Google (or somewhere else if that's preferred) so that we can have a forum to share thoughts, ask questions, etc. Otherwise, email may be a better way to maintain contact.

EDIT: We hit five people willing to start, so I created a Google group here. If you're interested in taking the course with us, please sign up there.

 

The recommended text is fairly inexpensive on Amazon (<$20 USD) and can be found on libgen.info for free if that's your thing. It's taught in English, lasts 12 weeks and predicts that it will take 5-6 hours/week. More info from the course website:

 

 

Think Again: How to Reason and Argue

Reasoning is important.  This course will teach you how to do it well.  You will learn how to understand and assess arguments by other people and how to construct good arguments of your own about whatever matters to you.

 

About the Course

Reasoning is important.  This course will teach you how to do it well.  You will learn some simple but vital rules to follow in thinking about any topic at all and some common and tempting mistakes to avoid in reasoning.  We will discuss how to identify, analyze, and evaluate arguments by other people (including politicians, used car salesmen, and teachers) and how to construct arguments of your own in order to help you decide what to believe or what to do. These skills will be useful in dealing with whatever matters most to you.

Course Syllabus

PART I: HOW TO ANALYZE ARGUMENTS

Week 1: How to Spot an Argument
Week 2: How to Untangle an Argument 
Week 3: How to Reconstruct an Argument 
Quiz #1: At the end of Week 3, students will take their first quiz. 

PART II: HOW TO EVALUATE DEDUCTIVE ARGUMENTS

Week 4: Propositional Logic and Truth Tables 
Week 5: Categorical Logic and Syllogisms 
Week 6: Representing Information
Quiz #2: At the end of Week 6, students will take their second quiz. 

PART III: HOW TO EVALUATE INDUCTIVE ARGUMENTS

Week 7: Inductive Arguments 
Week 8: Causal Reasoning 
Week 9: Chance and Choice 
Quiz #3: At the end of Week 9, students will take their third quiz. 

PART IV: HOW TO MESS UP ARGUMENTS

Week 10: Fallacies of Unclarity 
Week 11: Fallacies of Relevance and of Vacuity 
Week 12: Refutation 
Quiz #4: At the end of Week 12, students will take their fourth quiz.

Recommended Background

This material is appropriate for introductory college students or advanced high school students—or, indeed, anyone who is interested. No special background is required other than knowledge of English.

In-course Textbooks

As a student enrolled in this course, you will have free access to selected chapters and content for the duration of the course. All chapters were selected by the instructor specifically for this course. You will be able to access the Coursera edition of the e-textbook via an e-reader in the class site hosted by Chegg. If you click on “Buy this book”, you will be able to purchase the full version of the textbook, rather than the limited chapter selection in the Coursera edition. This initiative is made possible by Coursera’s collaboration with textbook publishers and Chegg.

Cengage Advantage Books: Understanding Arguments

Author: Sinnott-Armstrong, Walter; Fogelin, Robert J.
Publisher: CENGAGE Learning

Suggested Readings

Students who want more detailed explanations or additional exercises or who want to explore these topics in more depth should consult Understanding Arguments: An Introduction to Informal Logic.

Course Format

Each week will be divided into multiple video segments that can be grouped as three lectures or viewed separately. There will be short exercises after each segment (to check comprehension) and several longer midterm quizzes.

FAQ

  • Will I get a Statement of Accomplishment after completing this class?

    Yes. Students who successfully complete the class will receive a Statement of Accomplishment signed by the instructor.

  • What resources will I need for this class?

    Only a working computer and internet connection.

  • What is the coolest thing I'll learn if I take this class?

    Nasty names (equivocator!) to call people who try to fool you with bad arguments.

  • What are people saying about this class?

    Here are some remarks from students that have taken the class: 

    “I'd like to thank both professors for the course. It was fun, instructive, and I loved the input from people from all over the world, with their different views and backgrounds.”

    “Somewhere in the first couple weeks of the course, I was ruminating over some concept or perhaps over one of the homework exercises and suddenly it occurred to me, "Is this what thinking is?" Just to clarify, I come from a thinking family and have thought a lot about various concepts and issues throughout my life and career...but somehow I realized that, even though I seemed to be thinking all the time, I hadn't been doing this type of thinking for quite some time...so, thanks!”

    “The rapport between Dr. Sinnott-Armstrong and Dr. Neta and their senses of humor made the lectures engaging and enjoyable. Their passion for the subject was apparent and they were patient and thorough in their explanations.”

    The course has also been featured in a number of news articles and news reports.  Here are links to some of these:

    Raleigh News and Observer Article - January 20, 2013

    "How Free Online Courses are Changing the Traditional Liberal Arts Education" PBS Newshour - January 8, 2013

Baseline of my opinion on LW topics

7 Gunnar_Zarncke 02 September 2013 12:13PM

To avoid repeatedly saying the same things, I'd like to state my opinions on a few topics I expect to be relevant to my future posts here.

You can take it as a baseline or reference for these topics. I do not plan to go into any detail here. I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is really only to provide a context for my comments and posts elsewhere.

If you google me you may find some of my old (but not that off the mark) posts about these positions, e.g. here:

http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView

Now my position on LW topics. 

The Simulation Argument and The Great Filter

On The Simulation Argument I definitely go for 

"(1) the human species is very likely to go extinct before reaching a “posthuman” stage"

Correspondingly on The Great Filter I go for failure to reach 

"9. Colonization explosion".

This is not because I think that humanity is going to self-annihilate soon (though this is a possibility). Instead I hope that humanity will sooner or later come to terms with its planet. My utopia could be like that of the Pacifists (a short story in Analog 5).

Why? Because of essential complexity limits.

This falls into the same range as "It is too expensive to spread physically throughout the galaxy". I know that negative proofs about engineering are notoriously wrong - but that is currently my best guess. Simplified, one could say that the low-hanging fruit has been taken. I have lots of empirical evidence on multiple levels to support this view.

Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.  

What could prove me wrong? 

If a serious discussion were to rip my well-prepared arguments and evidence to shreds (quite possible).

At the very high end a singularity might be possible if a way could be found to simulate physics faster than physics itself. 

AI

Basically I don't have the least problem with artificial intelligence or artificial emotion being possible. Philosophical note: I don't care on what substrate my consciousness runs. Maybe I am simulated.

I think strong AI is quite possible and maybe not that far away.

But I also don't think that this will bring the singularity, because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest - but only until the physical feedback level is reached. Or until a social feedback level is reached, if AI should be designed that way.

One temporary dystopia that I see is that cognitive tasks are out-sourced to AI and a new round of unemployment drives humans into depression. 

I have studied artificial intelligence and played around with two models a long time ago:
  1. A simplified layered model of the brain; deep learning applied to free inputs (I cancelled this when it became clear that it was too simple and low level and thus computationally inefficient)
  2. A nested semantic graph approach with propagation of symbol patterns representing thought (only concept; not realized)

I'd really like to try a 'synthesis' of these where microstructure-of-cognition like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model a la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.

What could prove me wrong?

On the low end: if success takes longer than I think it would take me given unlimited funding.

On the high end: if I'm wrong about the complexity limits mentioned above.

Conquering space

Humanity might succeed at leaving the planet but at high costs.

By leaving the planet I mean becoming permanently independent of Earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).

I think it more likely that life leaves the planet - that can be 

  1. artificial intelligence with a robotic body - think of curiosity rover 2.0 (most likely).
  2. intelligent life-forms bred for life in space - think of magpies, which are already smart, small, fast-reproducing, and capable of 3D navigation.
  3. actual humans in a suitable protective environment, with small autonomous biospheres, harvesting asteroids or Mars.
  4. 'cyborgs' - humans altered or bred to better deal with certain problems in space, like radiation and the absence of gravity.
  5. other - including misc ideas from science fiction (least likely or latest). 

For most of these (esp. those depending on breeding) I'd estimate a time-range of a few thousand years.

What could prove me wrong?

If I'm wrong on the singularity aspect too.

If I'm wrong on the timeline, I will likely be long dead in any case, except for (1), which I expect to see in my lifetime.

Cognitive Base of Rationality, Vagueness, Foundations of Math

How can we as humans create meaning out of noise?

How can we know truth? How is it that we know that 'snow is white' when snow is white?

Cognitive neuroscience and artificial learning seem to point toward two aspects:

Fuzzy learning aspect

Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like 'spoon', 'fear', 'running', 'hot', 'near', 'I'. These are basically symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching, not correctness or uniqueness.

Semantic learning aspect

Upon the qualia builds the semantic part, which takes the qualia and, instead of acting directly on them (as is the normal effect for animals), finds patterns in their activation that are related not to immediate perception or action but at most to memory. These may form new qualia/symbols.

The use of these patterns is that they allow capturing concepts which are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).

Concepts like ('cry-sound' 'fear') or ('digitalis' 'time-forward' 'heartache') or ('snow' 'white') or - and that is probably the domain of humans: (('one' 'successor') 'two') or (('I' 'happy') ('I' 'think')).

Concepts

The interesting thing is that learning works on these concepts just as it does on the normal neural nets. Thus concepts that are reinforced by positive feedback will stabilize, and with them the qualia they derive from (if any) will also stabilize.

For certain pure concepts the usability of the concept hinges not on any external factor (like "how does this help me survive") but on social feedback about structure and the process of the formation of the concepts themselves. 

And this is where we arrive at such concepts as 'truth' or 'proposition'.

These are no longer vague - but not because they are represented differently in the brain than other concepts; rather, because they stabilize toward maximized validity (that is, stability due to the absence of external factors, possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from some external use but from internal consistency could be called math.

And that is why math is so hard for some: if you never gained a sufficient core of self-consistent stabilized concepts, and/or the usefulness doesn't derive from internal consistency but from external ("teacher's password") usefulness, then it will just not scale to more concepts (and the reason why science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives).
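To make the two aspects concrete, here is a toy sketch (my own construction, not the models described above; the symbols, sizes and thresholds are arbitrary assumptions): a fuzzy stage that maps stimuli to symbol activations, and a semantic stage that reinforces recurring co-activation patterns as candidate concepts.

```python
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(0)
SYMBOLS = ["spoon", "fear", "running", "hot", "near", "I"]
W = rng.normal(size=(len(SYMBOLS), 16))  # stand-in for trained layered nets

def perceive(stimulus):
    """Fuzzy aspect: each 'net' fires when its match score clears a
    threshold -- optimized for matching, not correctness or uniqueness."""
    scores = W @ stimulus
    return {s for s, v in zip(SYMBOLS, scores) if v > 1.0}

# Semantic aspect: patterns in symbol co-activation, detached from any
# single stimulus, are candidate concepts; repetition reinforces them.
concepts = Counter()
for _ in range(1000):
    active = perceive(rng.normal(size=16))
    concepts.update(combinations(sorted(active), 2))

print(concepts.most_common(3))  # the most stabilized concept pairs
```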

I really hope that this all makes sense. I haven't summarized this for quite some time.

A few random links that may provide some context:

http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)

http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular) 

http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)

http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)

http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)

Note: Details about the modelling of the semantic part are mostly in my head. 

What could prove me wrong?

Well. Wrong is too hard here. This is just my model and it is not really that concrete. Probably a longer discussion with someone more experienced with AI than I am (and there should be many here) might suffice to rip this apart (provided that I'd find time to prepare my model suitably).

God and Religion

I wasn't indoctrinated as a child. My truly loving mother is a baptised Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.

I can imagine a God consistent with science on the one hand and on the other hand with free will, soul, afterlife, the Trinity and the Bible (understood as a mix of non-literal word of God and history tale).

I mean, it is not that hard if you can imagine a timeless (simulation of the) universe. If you are God and have whatever plan on earth but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or 'person lives'. Constraints that realize free will in the sense of 'not subject to the whole universe plan satisfaction algorithm'.

Surely not more difficult than consistent time-travel.

And souls and afterlife should be easy to envision for any science fiction reader familiar with super intelligences.

But why? Occam's razor applies.

There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God - but also a 'human' God with his own inconsistencies and moods.

But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.

Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.

I can't say that I wait for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.

My epiphanies - the aha feelings of clarity that I did experience - have all been about deeply connected patterns building on other such patterns building on reliable facts mostly scientific in nature.

But I haven't lost my morality. It has deepened and widened. I have become even more tolerant (I hope).

So if God does, against all odds, exist, I hope he will understand my doubts, weigh my good deeds, and forgive me. You could tag me a godless Christian.

What could prove me wrong? 

On the atheist side I could be moved a bit further by more proofs of religion being a human artifact.   

On the theist side there are two possible avenues:

  1. If I'd have an unsearched-for epiphany - a real one where I can't say I was hallucinating, but e.g. a major consistent insight or a proof of God.
  2. If I'd be convinced that the singularity is possible. This is because I'd need to update toward being in a simulation as per Simulation argument option 3. That's because then the next likely explanation for all this god business is actually some imperfect being running the simulation.

Thus I'd like to close with this corollary to the simulation argument:

Arguments for the singularity are also (weak) arguments for theism.

Note: I am aware that this long post of controversial opinions unsupported by evidence (in this post) is bound to draw flak. That is the reason I post it in Comments lest my small karma be lost completely. I have to repeat that this is meant as context and that I want to elaborate on these points on LW in due time with more and better organized evidence.

The failure of counter-arguments argument

14 Stuart_Armstrong 10 July 2013 01:38PM

Suppose you read a convincing-seeming argument by Karl Marx, and get swept up in the beauty of the rhetoric and clarity of the exposition. Or maybe a creationist argument carries you away with its elegance and power. Or maybe you've read Eliezer's take on AI risk, and, again, it seems pretty convincing.

How could you know if these arguments are sound? Ok, you could whack the creationist argument with the scientific method, and Karl Marx with the verdict of history, but what would you do if neither was available (as they aren't available when currently assessing the AI risk argument)? Even if you're pretty smart, there's no guarantee that you haven't missed a subtle logical flaw, a dubious premise or two, or haven't got caught up in the rhetoric.

One thing should make you believe the argument more strongly: if the argument has been repeatedly criticised, and the criticisms have failed to puncture it. Unless you have the time to become an expert yourself, this is the best way to evaluate arguments where evidence isn't available or conclusive. After all, opposing experts presumably know the subject intimately, and are motivated to identify and illuminate the argument's weaknesses.

If counter-arguments seem incisive, pointing out serious flaws, or if the main argument is being continually patched to defend it against criticisms - well, this is strong evidence that the main argument is flawed. Conversely, if the counter-arguments continually fail, then this is good evidence that the main argument is sound. Not logical evidence - a failure to find a disproof doesn't establish a proposition - but good Bayesian evidence.

In fact, the failure of counter-arguments is much stronger evidence than whatever is in the argument itself. If you can't find a flaw, that just means you can't find a flaw. If counter-arguments fail, that means many smart and knowledgeable people have thought deeply about the argument - and haven't found a flaw.

And as far as I can tell, critics have constantly failed to counter the AI risk argument. To pick just one example, Holden recently provided a cogent critique of the value of MIRI's focus on AI risk reduction. Eliezer wrote a response to it (I wrote one as well). The cores of Eliezer's response and mine weren't anything new; they were mainly a rehash of what had been said before, with a different emphasis.

And most responses to critics of the AI risk argument take this form. Thinking for a short while, one can rephrase essentially the same argument, with a change in emphasis to take down the criticism. After a few examples, it becomes quite easy - a kind of paint-by-numbers process of showing that the ideas the critic has assumed do not actually make the AI safe.

You may not agree with my assessment of the critiques, but if you do, then you should adjust your belief in AI risk upwards. There's a kind of "conservation of expected evidence" here: if the critiques had succeeded, you'd have reduced the probability of AI risk, so their failure must push you in the opposite direction.
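A minimal numeric illustration of that conservation (the probabilities below are invented for the example, not taken from the post): the prior must equal the expectation of the posterior, so if a successful critique would lower P(AI risk), a failed critique has to raise it.

```python
p_risk = 0.30                # assumed prior P(AI risk argument is sound)
p_fail_if_sound = 0.90       # critiques usually fail against a sound argument
p_fail_if_unsound = 0.50     # but they may fail anyway

p_fail = p_risk * p_fail_if_sound + (1 - p_risk) * p_fail_if_unsound
posterior_fail = p_risk * p_fail_if_sound / p_fail
posterior_succeed = p_risk * (1 - p_fail_if_sound) / (1 - p_fail)

print(round(posterior_fail, 3))     # 0.435 -- up from 0.30
print(round(posterior_succeed, 3))  # 0.079 -- down from 0.30
# Conservation: the prior is the expectation of the posteriors.
print(p_fail * posterior_fail + (1 - p_fail) * posterior_succeed)  # 0.30
```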

In my opinion, the strength of the AI risk argument derives 30% from the actual argument, and 70% from the failure of counter-arguments. This would be higher, but we haven't yet seen the most prominent people in the AI community take a really good swing at it.

In Defense of Tone Arguments

24 OrphanWilde 19 July 2012 07:48PM

Suppose, for a moment, you're a strong proponent of Glim, a fantastic new philosophy of ethics that will maximize truth, happiness, and all things good, just as soon as 51% of the population accepts it as the true way; once it has achieved majority status, careful models in game theory show that Glim proponents will be significantly more prosperous and happy than non-proponents (although everybody will benefit on average, according to its models), and it will take over.

Glim has stalled, however; it's stuck at 49% belief, and a new countermovement, antiGlim, has arisen, claiming that Glim is a corrupt moral system with fatal flaws which will destroy the country if it has its way.  Belief is starting to creep down, and those who accepted the ideas as plausible but weren't ready to commit are starting to turn away from the movement.

In response, a senior researcher of Glim ethics has written a scathing condemnation of antiGlim as unpatriotic, evil, and determined to keep the populace in a state of perpetual misery to support its own hegemony.  He vehemently denies that there are any flaws in the moral system, and refuses to entertain antiGlim in a public debate.

In response to this, belief creeps slightly up, but acceptance goes into a freefall.

You immediately ascertain that the negativity was worse for the movement than the criticisms; you write a response, and are accused of attacking the tone and ignoring the substance of the arguments.  Glim and antiGlim leadership proceed into protracted and nasty arguments, until both are highly marginalized, and ignored by the general public.  Belief in Glim continues, but when the leaders of antiGlim and Glim finally arrive at a bitterly agreed-upon conclusion - the arguments having centered on an actual error in the original formulation of Glim philosophy - they're unable either to get their remaining supporters to cooperate, or to get any of the public to listen.  Truth, happiness, and all things good never arise, and things get slightly worse, as a result of the error.

Tone arguments are not necessarily logical errors; they may be invoked by those who agree with the substance of an argument who nevertheless may feel that the argument, as posed, is counterproductive to its intended purpose.

I have stopped recommending Dawkins's work to people who are on the fence about religion.  The God Delusion utterly destroyed his effectiveness at convincing people against religion.  (In a world in which they couldn't do an internet search on his name, it might not matter; we don't live in that world, and I assume other people are as likely to investigate somebody as I am.)  It doesn't even matter whether his facts are right or not; the way he presents them will put most people on the intellectual defensive.

If your purpose is to convince people, it's not enough to have good arguments, or good facts; these things can only work if people are receptive to those arguments and those facts.  Your first move is your most important - you must try to make that person receptive.  And if somebody levels a tone argument at you, your first consideration should not be "Oh!  That's DH2, it's a fallacy, I can disregard what this person has to say!"  It should be - why are they leveling a tone argument at you to begin with?  Are they disagreeing with you on the basis of your tone, or disagreeing with the tone itself?

Or, in short, the categorical assessment of "Responding to Tone" as either a logical fallacy or a poor argument is incorrect, as it starts from an unfounded assumption that the purpose of a tone response is, in fact, to refute the argument.  In the few cases I have seen responses to tone which were utilized against an argument, they were in fact ad-hominems, of the formulation "This person clearly hates [x], and thus can't be expected to have an unbiased perspective."  Note that this is a particularly persuasive ad-hominem, particularly for somebody who is looking to rationalize their beliefs against an argument - and that this inoculation against argument is precisely the reason you should, in fact, moderate your tone.

Counterfactual Coalitions

15 Larks 16 February 2012 09:42PM

Politics is the mind-killer; our opinions are largely formed on the basis of which tribes we want to affiliate with. What's more, when we first joined a tribe, we probably didn't properly vet the effects it would have on our cognition.  
 
One illustration of this is the apparently contingent nature of actual political coalitions, and the prima facie plausibility of others. For example,

  • In the real world, animal rights activists tend to be pro-choice.
  • But animal rights & fetus rights seems just as plausible a coalition - an expanding sphere of moral worth.

 
This suggests a de-biasing technique: inventing plausible alternative coalitions of ideas. When considering the counterfactual political argument, each side will have some red positions and some green positions, so hopefully your brain will be forced to evaluate it in a more rational manner.
 
Obviously, political issues are not all orthogonal; there is mutual information, and you don't want to ignore it. The idea isn't to decide your belief on every issue independently. If taxes on beer, cider and wine are a good idea, taxes on spirits are probably a good idea too. However, I think this is reflected in the "plausible coalitions" game; the most plausible reason I could think of for the political divide to fall between these is lobbying on behalf of distilleries, suggesting that these form a natural cluster in policy-space.
 
In case the idea can be more clearly grokked by examples, I'll post some in the comments.

"The Conditional Fallacy in Contemporary Philosophy"

2 gwern 31 January 2012 09:06PM

Split from "Against Utilitarianism: Sobel's attack on judging lives' goodness" for length.

Robert K. Shope, back in his 1978 paper "The Conditional Fallacy in Contemporary Philosophy", identified a kind of argument that we transhumanists will find painfully familiar: you propose idea X, the other person says bad thing Y is a possible counterexample if X were true, so X can't be true - ignoring that Y may not happen, and that X can just be modified to deal with Y if it's really that important.

("If we augment our brains, we may forget how to love!" "So don't remove love when you're augmenting, sheesh." "But it might not be possible!" "But wouldn't you agree that augmentation without loss of love would be better than the status quo?")

Excerpts follow:

continue reading »

Handling Emotional Appeals

11 fiddlemath 10 December 2011 07:30AM

In a comment elsewhere, BrandonReinhart asked:

Why is it not acceptable to appeal to emotion while at the same time back it with well evidenced research? Or rather, why are we suspicious of the findings of those who appeal to emotion while at the same time uninterested in turning an ear to those who do not?

[...] Emotional appeals would seem to have more of an urgency, requiring our attention while the scientific view's far-mode appeal would seem less immediate. In that case, we might simply ignore the far mode story because of all the other urgent-seeming vacuous emotional appeals fighting for our attention and time. Even if we politically agreed on a course of action given a far mode analysis, we might choose to spend our time on the near-mode emotional problem set.

I suspect that we perceive a dichotomy between emotional appeal and a well-reasoned, well-evidenced argument.

I have a just-so story for why our kind can't cooperate: We've learned to distrust emotional appeal. This is understandable: the strength of an emotional appeal to believe X and do Y doesn't correlate with the truth of X or the consequences of Y. In fact, we are surrounded by emotional appeals to believe nonsense and do useless things. The production and delivery of emotional appeal is politics, policy, and several major industries. So, in our environment, emotional appeal is a strong indicator against rational argument.

In order to defend against irrationality, I have a habit of shutting out emotional appeals. I tune out emotive religious talk. I remain carefully aloof from political speeches. I put emotional distance between myself and any enthusiastic crowd. In general, my immediate response to emotional appeal is to ignore the message it bears. It's automatic now, subverbal -- I have an aversion to naked emotional appeal.

I strongly suspect that I'm not only describing myself, but many of you as well. (Is this true? This is a testable hypothesis.)

If we manage to broadly ignore emotional appeal, then we shut out not only harmful manipulations, but worthwhile rallying cries. We are motivated only by the motivation we can muster ourselves, rather than what motivation we can borrow from our peers and leaders. This may go some way towards explaining not just why Our Kind Can't Cooperate, but why we seem to so often report that Our Kind Can't Get Much Done.

On the other hand, if this is a real problem, it suggests a solution. We could try to learn an alternative response to emotional appeal. Upon noticing near-mode emotional appeal, instead of rejecting the message outright, go to far mode and consider the evidence. If the argument is sound under careful, critical consideration, and you approve of its motivation, then allow the emotional appeal to move you. On the other hand, I don't know if this is psychologically realistic.

So, questions:

  1. I hypothesize that we are much more averse to emotional appeals than the normal population. Does this strike you as true? Do you have examples or counterexamples?

  2. How might we test this hypothesis?

  3. I further hypothesize that, if we are averse to emotional appeals, this is a strong factor in both our widely-reported akrasia and our sometimes-noted inability to work well together. How could we test this hypothesis?

  4. Can you postpone being moved by an emotional appeal until after making a calm decision about it?

  5. Can you somehow otherwise filter for emotional appeals that are highly likely to have positive effects?

New York Times on Arguments and Evolution [link]

7 Nic_Smith 14 June 2011 06:12PM

I saw this in the Facebook "what's popular" box, so it's apparently being heavily read and forwarded. There's nothing earthshattering for long-time LessWrong readers, but it's a bit interesting and not too bad a condensation of the topic:

Now some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. Certitude works, however sharply it may depart from the truth. -- Cohen, Patricia "Reason Seen More as Weapon Than Path to Truth"

A glance at the comments [at the Times], however, seems to indicate that most people are misinterpreting this, and at least one person has said flatly that it's the reason his political opponents don't agree with him.

ETA: Oops, I forgot the most important thing. The article is at http://www.nytimes.com/2011/06/15/arts/people-argue-just-to-win-scholars-assert.html

Existing Absurd Technologies

23 Desrtopa 30 May 2011 06:12AM

When attempting to introduce non-rationalists to the ideas of cryonics or Strong AI, it appears that their primary objections tend to be rooted in the absurdity heuristic. They don't believe they inhabit a universe where such weird technologies could actually work. To deal with this, I thought it would be useful to have a cache of examples of technologies that have actually been implemented that did, or ideally, still do, challenge our intuitions about the way the universe works.

The first example that comes to my mind is computers in general; imagine what Ernest Rutherford, let alone Benjamin Franklin, would have thought of a machine that uses electricity to calculate, and does those calculations so fast that nearly anything can be expressed as calculations. Nothing we know about how the universe works says it shouldn't be possible - indeed it obviously is, knowing what we do now - but imagine how weird this would have seemed back when we were just coming to grips with how electricity actually worked.

I suspect there may be better examples to challenge the intuitions of people who've grown up in an age where computers are commonplace though. So does anyone have any to volunteer?

"I know I'm biased, but..."

22 [deleted] 10 May 2011 08:03PM

Inspired by: The 5-Second Level, Knowing About Biases Can Hurt People

"I know I'm biased, but..." and its equivalents seem to be relatively common in casual conversation--I've encountered the phrase in classroom discussions, on Internet message boards, and in political arguments. In most cases, "I know I'm biased, but..." is used as a way of feigning humility and deflecting criticism by preemptively responding to accusations of bias. That is, the speaker acknowledges that their argument may be flawed in order to deny their opponent the opportunity point out particular biases. It's a way of signaling to the audience, "Yes, there are errors in this line of reasoning, but I already know that, so you can't accuse me of being biased."

But as we all know by now, it's not enough to just acknowledge biases--you have to actually correct the error before you can move on. Admitting that your argument is based on bias does not absolve you of your error, and it doesn't make your argument any truer.

Therefore, "I know I'm biased, but" is a cached thought that we would be better off without. But how can we get rid of it? Tabooing the phrase "I know I'm biased, but..." is not enough, since your brain will probably end up substituting something similar, such as "I may be wrong, but..." instead of making the appropriate correction. Instead, it is necessary to force your brain to consciously think about the bias instead of instinctively rationalizing the biased argument. This is a skill that takes place on the 5-second level: you have to stop your train of thought mid-sentence and think about the situation more clearly. The following should serve as an anti-pattern for when you notice yourself thinking, "I know I'm biased, but...":

1) Stop. I'm not ready to proceed. If there's a bias in my argument, bulldozing over it is never the correct solution. I need to just cut myself off in mid-sentence and think about this.

2) Identify the bias. What is this bias that my brain is trying to cover up? Does it have a name? Where have I read about it before? What heuristic am I using that is causing the problem? Do I have any emotional attachment to this argument that might cloud my judgment? How would I feel if this argument was wrong? Where is my information coming from? Did I do a thorough job researching this argument?

3) Think about potential solutions. What heuristic should I be using instead of the one I am using? Can I substitute a quantitative analysis or Bayesian update instead of jumping to a particular conclusion? Do I need to do more research to determine if this argument is true? What other sources of evidence can I consult?

4) Re-analyze using a different method. What happens when I use the heuristics I just thought about instead of the ones I originally used? What pieces of evidence really support my argument? What facts would need to be different for it to be false? Can I compare multiple perspectives on this argument?

5) Re-evaluate the argument. Does the argument still look correct? Does approaching the problem with a different method yield the same results? Have I completely explained away the bias?

An abstract explanation isn't always enough, so here is an example:

 


 

"...and that's why," Albert concluded, "the iPhone is absolutely terrible!"

"I know I'm biased," Barry replied, "but iPhone is the best smartphone on the market!"

Uh-oh, thought Barry. I said that phrase again. Something's not right here. "Hang on a moment..."

Why would I think that the iPhone is the best smartphone on the market? How would I feel if it wasn't the best phone? Well, I'd be kind of annoyed that I spent all that money to buy one. I'd feel disappointed because the advertisement made it look really awesome, and I've always told everyone that it was worth the price. Am I rationalizing this? Hmm, maybe I am rationalizing and I just don't want to believe that I made a bad purchase.

Ok, so what if it is rationalization? What am I supposed to do now? Didn't I read something on LessWrong about this? This feels like "politics is the mind-killer" territory--I should probably be re-thinking my arguments and checking for bias.

But how should I be evaluating the quality of my iPhone? I guess I should ask myself what features I care about--let's pick three. Well, the most important thing to me is service--I make a lot of calls for work and I don't want any of them to be dropped. I want my phone to be durable, too--I'm pretty clumsy and I drop it from time to time. And the phone bill is important too.

Alright, let's add all of this up: the iPhone is pretty fragile--I've already cracked the screen slightly. And it does drop calls sometimes--there might be a network with better coverage, I'm not sure. And the phone bill--my old phone was definitely a lot cheaper, but it also wasn't a smartphone. I'd have to research other networks' coverage and pricing to be sure.

Wow, I might've been wrong about this. That means I wasted a lot of money. And it also means that the iPhone probably isn't "the best" phone out there. Wait, that's not right--it could be the best, but I don't have the evidence to prove it, so my argument isn't right. I have to gather more evidence.

"Are you still there?" Albert frowned in puzzlement. "You kinda fuzzed out there for a second."

"Nevermind," said Barry. "What I should have said was, the iPhone doesn't really do all of the things I want it to do. Say, where's the electronics store?"


Next time you catch yourself thinking, "I know I'm biased, but...", don't let your brain finish the sentence--stop that train of thought and analyze it!

Edit: Many commenters have suggested that "I know I'm biased, but..." is sometimes used to signal being open to counterarguments. As a result, it is best to double-check what you (or your discussion partners) are really signaling so that you can respond appropriately.

Link: Why and how to debate charitably

14 RobinZ 14 April 2011 04:14PM

Even though this was written by a current Less Wrong poster (hi, pdf23ds!), I don't think it has been posted here: Why and how to debate charitably (pg. 2, comments). (Edit: The original pdf23ds.net site has sadly been lost to entropy – Less Wrong poster MichaelBishop found a repost on commonsenseatheism.com. He also provides this summary version.)

I was linked to this article from a webcomic forum which had a low-key flamewar smouldering in the "Serious Business" section. (I will not link to it here; if you can tell from the description which forum it is, I would thank you not to link it either.) Three things struck me about it:

  1. I have been operating under similar rules for years, with great success.
  2. The participants in the flamewar on the forum where it was posted were not operating under these rules.
  3. Less Wrong posters generally do operate under these rules, at least here.

The list of rules is on pg. 2 - a good example is the rule titled "You cannot read minds":

As soon as you find someone espousing seemingly contradictory positions, you should immediately suspect yourself of being mistaken as to their intent. Even if it seems obvious to you that the person has a certain intent in their message, if you want to engage them, you must respond being open to the possibility that where you see contradictions (or, for that matter, insults), none were intended. While you keep in mind what the person’s contradictory position seems to be, raise your standards some, and ask questions so that the person must state the position more explicitly—this way, you can make sure whether they actually hold it. If you still have problems, keep raising your standards, and asking more specific questions, until the person starts making sense to you.

If part of their position is unclear or ambiguous to you, say that explicitly. Being willing to show uncertainty is an excellent way to defuse the person’s, and your own, defensiveness. It also helps them to more easily understand which aspects of their position they are not making clear enough.

The less their position makes sense to you, the more you should rely on interrogative phrasing and the less on declarative. Questions defuse defensiveness and are much more pointed and communicative than statements, because they force you to think more about the person’s arguments, and to really articulate what it is about their position you most need clarification on. They help to keep the discussion moving, and help you to stop arguing past each other. Phrase the questions sincerely, and use as much of the person’s own reasoning (putting it in the best light) as you can. This requires that you have a pretty good grasp on what the person is arguing—try to understand their position as well as you can. If it’s simply not coherent enough, the case may be hopeless.

Fine-Tuned Mind Projection

3 Alexandros 29 November 2010 12:08AM

The Fine-Tuning Argument (henceforth FTA) is the pet argument of many a religious apologist, allowing them as it does to build support for their theistic thesis on the findings of cosmology. The basic premise is this: the laws of nature appear to contain constants that, if changed slightly, would yield universes inhospitable to life. Even though a lot can be said about this premise, let's assume it is true for the purposes of this article.

Luke Muehlhauser over at Common Sense Atheism recently wrote an article pointing out what I think is a central flaw of the FTA. To summarise, he notes that there are multitudes of propositions that are true for this universe and would not be true in a different universe: for instance, galaxies, or, to use Luke's tongue-in-cheek example, iPads. If you accept that the universe is fine-tuned for life, you also have to accept that it's fine-tuned for galaxies, and iPads, given that some changes in the fine-tuned constants would not produce galaxies, and certainly not iPads.

So the question posed to defenders of the FTA is 'why life?' Why focus on this particular fact? What is it that sets life apart from all the other propositions true about our universe but not about the other possible universes? The usual answer is that life stands out, being valuable in ways that galaxies, iPads, and all the other true propositions are not. It seems that this is an unstated premise of the FTA. But where does that premise come from? Physics gives us no instrument to measure value, so how did this concept get into what was supposed to be a cosmology-based argument?

I present the FTA here as an argument that, while seemingly complex, simply evaporates in light of the Mind Projection Fallacy. Once we know that humans tend to confuse 'I see X as valuable' with 'X is valuable', the provenance of the hidden premise 'life is valuable' is laid bare, as is the identity of the agent doing the valuing: us. With the mystery solved, explaining why humans find life valuable no longer requires the extreme step of introducing a non-naturalistic cause for the universe.

Without any support for life being special in some way, the FTA devolves into a straightforward case of the Texas Sharpshooter Fallacy: life exists, our god would have wanted to create life, therefore our god is real! Not quite as compelling.


The Sin of Persuasion

27 Desrtopa 27 November 2010 09:44PM


Related to Your Rationality is My Business

Among religious believers in the developed world, there is something of a hierarchy in terms of social tolerability. Near the top are the liberal, nonjudgmental, frequently nondenominational believers, of whom it is highly unpopular to express disapproval. At the bottom you find people who picket funerals or bomb abortion clinics, the sort with whom even the most vocally devout individuals are quick to deny association.

Slightly above these, but still very close to the bottom of the heap, are proselytizers and door-to-door evangelists. They may not be hateful about their beliefs--indeed, many find that their local Jehovah’s Witnesses are exceptionally nice people--but they’re simply so annoying. How can they go around pressing their beliefs on others and judging people that way?

I have never known another person to criticize evangelists for not trying hard enough to change others’ beliefs.

continue reading »