You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Questions to ask theist philosophers? I will soon be speaking with several

8 kokotajlod 26 April 2014 12:46AM

I am about to graduate from one of the only universities in the world that has a high concentration of high-caliber analytic philosophers who are theists. (Specifically, the University of Notre Dame, IN) So as not to miss this once-in-a-lifetime opportunity, I have sent out emails asking many of them if they would like to meet and discuss their theism with me. Several of them have responded already in the affirmative; fingers crossed for the rest. I'm really looking forward to this because these people are really smart, and have spent a lot of time thinking about this, so I expect them to have interesting and insightful things to say.

Do you have suggestions for questions I could ask them? My main question will of course be "Why do you believe in God?" and variants thereof, but it would be nice if I could say e.g. "How do you avoid the problem of X which is a major argument against theism?"

Questions I've already thought of:

1-Why do you believe in God?

2-What are the main arguments in favor of theism, in your opinion?

3-What about the problem of evil? What about objective morality: how do you make sense of it, and if you don't, then how do you justify God?

4-What about divine hiddenness? Why doesn't God make himself more easily known to us? For example, he could regularly send angels to deliver philosophical proofs on stone tablets to doubters.

5-How do you explain God's necessary existence? What about the "problem of many Gods," i.e. why can't people say the same thing about a slightly different version of God?

6-In what sense is God the fundamental entity, the uncaused cause, etc.? How do you square this with God's seeming complexity? (he is intelligent, after all) If minds are in fact simple, then how is that supposed to work?

I welcome more articulate reformulations of the above, as well as completely new ideas.

Baseline of my opinion on LW topics

7 Gunnar_Zarncke 02 September 2013 12:13PM

To avoid repeatedly saying the same things, I'd like to state my opinion on a few topics I expect to be relevant to my future posts here.

You can take it as a baseline or reference for these topics. I do not plan to go into any detail here. I will not state all my reasons or sources. You may ask for separate posts if you are interested. This is really only to provide a context for my comments and posts elsewhere.

If you google me you may find some of my old (but not that far off the mark) posts about these positions, e.g. here:

http://grault.net/adjunct/index.cgi?GunnarZarncke/MyWorldView

Now my position on LW topics. 

The Simulation Argument and The Great Filter

On The Simulation Argument I definitely go for 

"(1) the human species is very likely to go extinct before reaching a “posthuman” stage"

Correspondingly on The Great Filter I go for failure to reach 

"9. Colonization explosion".

This is not because I think that humanity is going to self-annihilate soon (though this is a possibility). Instead I hope that humanity will sooner or later come to terms with its planet. My utopia could be like that of the Pacifists (a short story in Analog 5).

Why? Because of essential complexity limits.

This falls into the same range as "It is too expensive to spread physically throughout the galaxy". I know that negative proofs about engineering are notoriously wrong - but that is currently my best guess. Put simply, one could say that the low-hanging fruit has been taken. I have lots of empirical evidence on multiple levels to support this view.

Correspondingly there is no singularity because progress is not limited by raw thinking speed but by effective aggregate thinking speed and physical feedback.  

What could prove me wrong? 

If a serious discussion were to tear my well-prepared arguments and evidence to shreds (quite possible).

At the very high end a singularity might be possible if a way could be found to simulate physics faster than physics itself. 

AI

Basically I don't have the least problem with artificial intelligence or artificial emotion being possible. Philosophical note: I don't care what substrate my consciousness runs on. Maybe I am simulated.

I think strong AI is quite possible and maybe not that far away.

But I also don't think that this will bring the singularity, because of the complexity limits mentioned above. Strong AI will speed up some cognitive tasks with compound interest - but only until the physical feedback level is reached, or until a social feedback level is reached, if AI is designed to respect one.

One temporary dystopia that I see is that cognitive tasks are out-sourced to AI and a new round of unemployment drives humans into depression. 

I have studied artificial intelligence and played around with two models a long time ago:
  1. A simplified layered model of the brain; deep learning applied to free inputs (I cancelled this when it became clear that it was too simple and low level and thus computationally inefficient)
  2. A nested semantic graph approach with propagation of symbol patterns representing thought (only concept; not realized)

I'd really like to try a 'synthesis' of these where microstructure-of-cognition like activation patterns of multiple deep learning networks are combined with a specialized language and pragmatics structure acquisition model a la Unsupervised learning of natural languages. See my opinion on cognition below for more in this line.

What could prove me wrong?

On the low success end if it takes longer than I think it would take me given unlimited funding. 

On the high end if I'm wrong with the complexity limits mentioned above. 

Conquering space

Humanity might succeed at leaving the planet but at high costs.

By leaving the planet I mean becoming permanently independent of Earth, but not necessarily leaving the solar system any time soon (speculating on that is beyond my confidence interval).

I think it more likely that life leaves the planet - that can be 

  1. artificial intelligence with a robotic body - think of curiosity rover 2.0 (most likely).
  2. intelligent life-forms bred for life in space - think of magpies, which are already smart, small, fast-reproducing, and capable of 3D navigation.
  3. actual humans in a suitable protective environment, with small autonomous biospheres, harvesting asteroids or Mars.
  4. 'cyborgs' - humans altered or bred to better deal with certain problems in space like radiation and missing gravity.  
  5. other - including misc ideas from science fiction (least likely or latest). 

For most of these (esp. those depending on breeding) I'd estimate a time-range of a few thousand years.

What could prove me wrong?

If I'm wrong on the singularity aspect too.

If I'm wrong on the timeline, I will likely be long dead in any case, except for (1), which I expect to see in my lifetime.

Cognitive Basis of Rationality, Vagueness, Foundations of Math

How can we as humans create meaning out of noise?

How can we know truth? How do we come to know that 'snow is white' when snow is white?

Cognitive neuroscience and artificial learning seem to point toward two aspects:

Fuzzy learning aspect

Correlated patterns of internal and external perception are recognized (detected) via multiple specialized layered neural nets (basically). This yields qualia like 'spoon', 'fear', 'running', 'hot', 'near', 'I'. These are basically symbols, but they are vague with respect to meaning because they result from a recognition process that optimizes for matching, not for correctness or uniqueness.

Semantic learning aspect

On top of the qualia builds the semantic part, which takes the qualia and, instead of acting directly on them (as animals normally do), finds patterns in their activation that are not related to immediate perception or action but at most to memory. These may form new qualia/symbols.

The use of these patterns is that they allow us to capture concepts which are detached from reality (detached insofar as they do not need a stimulus connected in any way to perception).

Concepts like ('cry-sound' 'fear') or ('digitalis' 'time-forward' 'heartache') or ('snow' 'white') or - and that is probably the domain of humans: (('one' 'successor') 'two') or (('I' 'happy') ('I' 'think')).

Concepts

The interesting thing is that learning works on these concepts just as it works on the normal neural nets. Thus concepts that are reinforced by positive feedback will stabilize, and mutually with them the qualia they derive from (if any) will also stabilize.

For certain pure concepts the usability of the concept hinges not on any external factor (like "how does this help me survive") but on social feedback about structure and the process of the formation of the concepts themselves. 

And this is where we arrive at such concepts as 'truth' or 'proposition'.

These are no longer vague - but not because they are represented differently in the brain than other concepts but because they stabilize toward maximized validity (that is stability due to absence of external factors possibly with a speed-up due to social pressure to stabilize). I have written elsewhere that everything that derives its utility not from some external use but from internal consistency could be called math.

And that is why math is so hard for some: if you never gained a sufficient core of self-consistent stabilized concepts, and/or the usefulness derives not from internal consistency but from external ("teacher's password") usefulness, then it will just not scale to more concepts. (And the reason why science works at all is that science values internal consistency so highly; there is little more dangerous to science than allowing other incentives.)

I really hope that this all makes sense. I haven't summarized this for quite some time.

A few random links that may provide some context:

http://www.blutner.de/NeuralNets/ (this is about the AI context we are talking about)

http://www.blutner.de/NeuralNets/Texts/mod_comp_by_dyn_bin_synf.pdf (research applicable to the above in particular) 

http://c2.com/cgi/wiki?LeibnizianDefinitionOfConsciousness (funny description of levels of consciousness)

http://c2.com/cgi/wiki?FuzzyAndSymbolicLearning (old post by me)

http://grault.net/adjunct/index.cgi?VaguesDependingOnVagues (ditto)

Note: Details about the modelling of the semantic part are mostly in my head. 

What could prove me wrong?

Well, "wrong" is too strong a word here. This is just my model and it is not really that concrete. Probably a longer discussion with someone more experienced with AI than I am (and there should be many here) might suffice to rip this apart (provided that I'd find time to prepare my model suitably).

God and Religion

I wasn't indoctrinated as a child. My truly loving mother is a baptized Christian who lives her faith without being sanctimonious. She always hoped that I would receive my epiphany. My father has a scientifically influenced personal Christian belief.

I can imagine a God consistent with science on the one hand and on the other hand with free will, soul, afterlife, trinity and the bible (understood as a mix of non-literal word of God and history tale).

I mean, it is not that hard if you can imagine a timeless (simulation of the) universe. If you are God and have whatever plan on earth but empathize with your creations, then it is not hard to add a few more constraints to certain aggregates called existences or 'person-lives'. Constraints that realize free will in the sense of 'not subject to the whole universe's plan-satisfaction algorithm'.

Surely not more difficult than consistent time-travel.

And souls and afterlife should be easy to envision for any science fiction reader familiar with super intelligences.

But why? Occams razor applies. 

There could be a God. And his promise could be real. And it could be a story seeded by an empathizing God - but also a 'human' God with his own inconsistencies and moods.

But it also could be that this is all a fairy tale run amok in human brains searching for explanations where there are none. A mass delusion. A fixated meme.

Which is right? It is difficult to put probabilities to stories. I see that I have slowly moved from 50/50 agnosticism to tolerant atheism.

I can't say that I am waiting for my epiphany. I know too well that my brain will happily find patterns when I let it. But I have encouraged others to pray for me.

My epiphanies - the aha feelings of clarity that I did experience - have all been about deeply connected patterns building on other such patterns building on reliable facts mostly scientific in nature.

But I haven't lost my morality. It has deepened and widened. I have become even more tolerant (I hope).

So if God does, against all odds, exist, I hope he will understand my doubts, weigh my good deeds, and forgive me. You could tag me a godless Christian.

What could prove me wrong? 

On the atheist side I could be moved a bit further by more proofs of religion being a human artifact.   

On the theist side there are two possible avenues:

  1. If I had an unsearched-for epiphany - a real one where I couldn't say I was hallucinating, but e.g. a major consistent insight or a proof of God.
  2. If I were convinced that the singularity is possible. This is because I'd need to update toward being in a simulation, as per Simulation Argument option 3. Then the next likely explanation for all this god business is actually some imperfect being running the simulation.

Thus I'd like to close with this corollary to the simulation argument:

Arguments for the singularity are also (weak) arguments for theism.

Note: I am aware that this long post of controversial opinions unsupported by evidence (in this post) is bound to draw flak. That is the reason I post it in Comments lest my small karma be lost completely. I have to repeat that this is meant as context and that I want to elaborate on these points on LW in due time with more and better organized evidence.

Teapots and Soda Cans

6 Odinn 01 September 2013 10:21PM

I've been reading an earnest and thought-provoking editorial [1] from one James Wood, reviewing 'Letter To a Christian Nation' by Sam Harris. Though an atheist himself, he admits a flagging patience with certain attitudes of atheists. I can concede that an atheist's superior and glib demeanor may be due to frustration and no small amount of pessimistic inference about the human condition, though I had to comment on a rebuttal he gives regarding Bertrand Russell's celestial teapot [2].

He claims that God, so much grander and more complex than a teapot, cannot be banished with such a simplistic comparison, when I would insist that God is actually much less believable than the teapot for that exact reason. I think Russell's teapot is due for an update which is more approachable and grounded. Here goes:

I claim that there is a discarded Coke can somewhere in the vastness of the Sahara, but I will brook absolutely no discussion about doubting my claim or investigating it for veracity. "Okay," you think, "I suppose I can assume that much to be true. Whatever this man's sources, the odds of a Coke can being somewhere in the desert must be considerable." But I then elaborate with claims that it's actually many, many cans, folded into glorious and artistically pleasing forms, and my obdurate refusal to discuss how it can be proved continues. At this point even the most generous theists would likely start getting annoyed with my odd behavior, yet at the very least what I'm asking you to believe isn't outside the realm of possibility. For all you know (though I refuse to allow you to check) there could be a folk art bazaar currently set up in the Sahara, so really it costs you very little to entertain my view.

And then I say that the cans have taken on beautiful, shimmering consciousness and are forming a society which hides from humanity, burying their chrome castles beneath the sand and moving their aluminum cities whenever we get too close to discovering them. "But..." you try to cut in. Before you can even begin to tell me what you find odd about my fantasy, I'm on the next detail. I claim that all of our major technological achievements of the last several hundred years are all thanks to the secret influence of the Shiny Can People.

Now you have countless legitimate doubts, but every time you try to tell me that, for starters, soda didn't even come in aluminum cans several hundred years ago, I insist that you weren't there so you can't be sure, and how could a mere burden of proof destroy the mighty empire of the Shiny Cans?

I like the utility of the can people because it doesn't start with an outlandish proposition, but if you stick around it gets absolutely ridiculous. Not only does that remind me more of how religion is actually sold, but it also serves to strengthen the original analogy of the teapot by reminding the curious mind that Russell's teapot is infinitely smaller and less complex than God, making it much less embarrassing to genuinely believe in since it would have so much more room to hide.

Odinn Celusta

[1] http://www.newrepublic.com/article/the-celestial-teapot

[2] http://en.wikipedia.org/wiki/Russell's_teapot

Help please!

13 Michelle_Z 06 June 2012 03:51PM

Yesterday my mom noticed (at a funeral) that I wasn't praying or participating in the mass. She confronted me about it, and I told her that no, I am not Catholic. Apparently it's sinking in and she's a bit hysterical... crying and screaming that she doesn't know me anymore.

What do I do? I don't know how to react/behave when she's doing this. It's like she wants me to feel like I'm doing something wrong, but it isn't working, so she's getting hysterical.

 

*edit*

I gave her a hug when she calmed down and told her I love her. That seemed to help, a little. Based on her previous behavior in situations where I've done something "wrong," she will (in the future) make barbs and slight passes at my beliefs. (Already she made one: insisting my love of science is causing my social anxiety disorder.) The advice given in the comments is really helpful. I plan on making the most of it.

A Problem with Human Intuition about Conventional Statistics

-1 Kai-o-logos 20 April 2011 11:41PM

 

As an aspiring scientist, I hold the Truth above all. As Hodgell once said, "That which can be destroyed by the truth should be." But what if the thing that is holding our pursuit of the Truth back is our own system? I will share an example of an argument I overheard between a theist and an atheist once - showing an instance where human intuition might fail us.

*General Transcript*

Atheist: Prove to me that God exists!

Theist: He obviously exists – can’t you see that plants growing, humans thinking, [insert laundry list here], is all His work?

Atheist: Those can easily be explained by evolutionary mechanisms!

Theist: Well prove to me that God doesn’t exist!

Atheist: I don’t have to! There may be an invisible pink unicorn baby flying around my head, there is probably not. I can’t prove that there is no unicorn, that doesn’t mean it exists!

Theist: That’s just complete reductio ad ridiculo, you could do infrared, polaroid, uv, vacuum scans, and if nothing appears it is statistically unlikely that the unicorn exists! But God is something metaphysical, you can’t do that with Him!

Atheist: Well Nietzsche killed metaphysics when he killed God. God is dead!

Theist: That is just words without argument. Can you actually…..

As one can see, the biggest problem is determining burden of proof.  Statistically speaking, this is much like the problem of defining the null hypothesis.

A theist would define: H0 : God exists. Ha: God does not exist.

An atheist would define: H0: God does not exist. Ha: God does exist.

Both conclude that there is no significant evidence favoring Ha over H0. Furthermore, and this is key, they both accept the null hypothesis. The correct statistical conclusion, when the evidence is insufficient to accept the alternate hypothesis, is that one fails to reject the null hypothesis. However, human intuition fails to grasp this concept; we think in black and white, and instead tend to accept the null hypothesis.

This is not so much a problem with statistics as with human intuition. Statistics usually takes this form because simultaneously considering 100+ hypotheses is taxing on the human brain. Therefore, we think of hypotheses as things to be defended or attacked, not considered neutrally.
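The "accept the null" failure mode can be made concrete with a toy example of my own (a coin-bias analogy, not from the transcript above; all numbers are made up for illustration): given the same ambiguous data, two analysts who privilege opposite null hypotheses both fail to reject - and if each then "accepts" their null, they walk away with contradictory conclusions.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(k, n, p0):
    """Two-sided exact binomial p-value: total probability of all
    outcomes no more likely than the observed one under H0: p = p0."""
    pk = binom_pmf(k, n, p0)
    return sum(binom_pmf(i, n, p0) for i in range(n + 1)
               if binom_pmf(i, n, p0) <= pk + 1e-12)

n, k = 20, 12                      # observe 12 heads in 20 flips
p_fair = two_sided_p(k, n, 0.5)    # analyst A's null: the coin is fair
p_bias = two_sided_p(k, n, 0.65)   # analyst B's null: the coin favors heads
print(p_fair, p_bias)              # both far above 0.05: neither null is rejected
```

The data are compatible with both nulls, so the only licensed conclusion for each analyst is "fail to reject" - yet intuition tempts A to conclude the coin is fair and B to conclude it is biased, from the very same flips.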

Consider a Bayesian outlook on this problem.

There are two possible outcomes: At least one deity exists(D). No deities exist(N).

Let us consider the natural evidence (Let’s call this E) before us.

P(D ∨ N) = 1, hence P(D ∨ N | E) = 1. Since D and N are mutually exclusive and exhaustive, P(D|E) + P(N|E) = 1, so P(D|E) = 1 - P(N|E).

Although the calculation of the prior probability of God existing is rather strange, and seems to reek of bias, I still argue that this is a better system of analysis than just the classical H0 and Ha, because it directly compares the probabilities of D and N without the bias inherent in the brain's perception of the system.
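As a minimal sketch (with made-up prior and likelihood numbers, purely for illustration), the D-vs-N comparison is just Bayes' rule over two exhaustive hypotheses. The posterior is graded rather than a binary accept/reject verdict, and P(D|E) = 1 - P(N|E) falls out automatically:

```python
def posterior(prior_d, like_e_given_d, like_e_given_n):
    """P(D|E) by Bayes' rule, with N = not-D as the only alternative."""
    prior_n = 1 - prior_d
    evidence = prior_d * like_e_given_d + prior_n * like_e_given_n  # P(E)
    return prior_d * like_e_given_d / evidence

# Made-up likelihoods: the evidence E is only slightly better explained by D.
p_d = posterior(prior_d=0.5, like_e_given_d=0.55, like_e_given_n=0.45)
p_n = 1 - p_d
print(p_d, p_n)  # roughly 0.55 and 0.45 - graded belief, no forced verdict
```

Ambiguous evidence moves the posterior only slightly away from the prior; neither hypothesis gets "accepted," which is exactly the neutrality the classical framing makes so hard to hold on to.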

Examples such as these, I believe, show the flaws that result from faulty interpretations of the classical system. If we instead introduced a Bayesian perspective, the faulty interpretation would vanish.

This is a case for the expanded introduction of Bayesian probability theory. Even if it cannot be applied correctly to every problem, and even if it is apparently more complicated than the standard method they teach in statistics class (I disagree here), it teaches people to analyze situations from a more objective perspective.

And if we can avoid Truth-seekers going awry due to simple biases such as those mentioned above, won’t we be that much closer to finding Truth?