This is my first post here. Let me know your thoughts. If you'd like to chat more, you can reach me at https://www.linkedin.com/in/sai-mupparaju/.
As much as a forum like this values traditional ideas of the rational mind, I've been reflecting recently on whether that is truly the best way to go about things. More specifically, I recently began reading Notes from the Underground (I started three months ago and am 50 pages in), and I came across a statement that I frequently find myself returning to.
Now, I understand the quote should be taken in the context of the broader sociopolitical environment in which Dostoyevsky wrote it. But still, as someone who for most of his life valued reasonableness above all other traits, I really can't help but take it literally.
Where does rationality fail?
Rationality fails to help us find meaning. Partly, this is because we are eternally faced with divergences between our feelings and our sense of rationality. Mostly, though, I'm referring to the nature of accepting something to be true. There is a deep and irremovable frustration in handing away my free will to reason. Take the following example:
What is a man to do when he wakes up one morning and 2+2 = 4 frustrates him?
Such a man cannot do anything in practice. Cry as he may, the source of his sorrows will forever haunt him. It's not just that this man is irrationally angry with the surface-level truths he encounters. Even more, he’s enraged with the more daunting realization that these are true. The man has explored the bounds of his free will and has come face-to-face with a seemingly insurmountable wall that is TRUTH.
To accept something as irrevocably true is to yield one's sense of authority and control over it.
A Historical Look
Sometimes I wonder if these frustrations are an artifact of our modern age. Long before the scientific method, our interpretations of the world were largely disconnected from reality. Think about rain, for example. We know that rain is a product of the water cycle, a cycle that is vast and utterly beyond our control. Before this conceptualization, what did we do?
We prayed. When confronted with such a grand and mysterious system, one we should by rights have felt was totally out of our control, we prayed for rain. This prayer, oddly enough, shows that people did feel control over the world. They prayed because they believed their prayers could change the will of some higher power that could bring about rain. We felt ownership of the world around us, and we felt we had control. To pray for rain was an exertion of our will on the very world itself.
But that’s gone now, and rationality has killed it. In that sense, it's almost as if the story of science and mathematics is instead the story of mankind’s self-diminishment. As we expand the bounds of what we know, we are inevitably faced with the truth that much of the universe is beyond our grasp.
Am I being Manipulative?
But this seems like a very perverse and manipulative way to frame human development. After all, we have no doubt been able to use engineering to broaden our capabilities exponentially. Penicillin has helped us conquer many diseases and greatly extend our lifespans. Nuclear warheads have taken our destructive capacities ad infinitum. We have carried our explorations to the far reaches of outer space. It's entirely possible that we may shortly learn to control the weather. Surely we can now do things that we previously felt we could not?
It might well be the case that rationality permits us to go beyond our ‘natural’ capabilities. But does this imply that the scope of our free will has been altered? Wasn't whatever we can do now always within our capacity? My point is more foundational: these extraordinary capabilities are built on fundamental truths and axioms we cannot fully control. Our capabilities are thus still limited by the whims of those truths. Returning to the previous example: isn't it the same mathematics that tells us 2+2=4 which also tells us we cannot pass the speed of light, where the relevant equations divide by zero? Are we just lucky that these axioms have so far worked in our favor, like a temporary alliance in a war? Can we guarantee that this alliance will forever be in our favor? What do we do if one day we find ourselves on opposite ends of the battlefield? Do we then become powerless despite all our might?
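To make the "division by zero" remark concrete: in special relativity, the energy of a massive object scales with the Lorentz factor, whose denominator goes to zero as speed approaches the speed of light (a standard textbook formula, shown here only as a sketch of the point):

```latex
\gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}},
\qquad
E = \gamma(v)\,m c^2 \;\to\; \infty \quad \text{as } v \to c^{-}
```

At \(v = c\) the square root in the denominator is exactly zero, which is the divide-by-zero the text alludes to: the same axioms that power our engineering also draw the wall.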
AI Doomerism
As a bit of a tangent, I recently realized that this is also where my anguish about the coming AI revolution stems from. At my core, I am deeply dissatisfied with the notion of a non-living superintelligence telling me about the truths of this world. I am uncomfortable with the idea that something that does not itself embody any underlying “human spirit” will soon confront me with my possible limitations (though it may also tell me the opposite and bolster my sense of control). Sure, many of the truths I take for granted every day don't come from independent thought (if I were dropped into a rainforest with no external contact, how much of modern mathematics and science could I replicate?), but at least I take comfort in knowing that it was a human mind, not too unlike my own, that arrived at the truths I've accepted.
The alternative sends a chill down my spine. One day there will be something all too inhuman that dictates my bounds, and against it I will have no chance. From here, I think there are two natural paths.
Which one I will go with, I am currently unsure. Recently, it has also come to my attention that there is a third path: focusing entirely inward and doing nothing at all, single-mindedly pursuing self-cultivating tasks while AI feeds and takes care of my every need (the Child Path).
The Happy Ending?
Kurt Gödel was a logician who did some pretty cool stuff. He's best known for his Incompleteness Theorems, which prove that any consistent formal system rich enough to express basic arithmetic is incomplete. Let's get some definitions out of the way here.
A formal system is a mathematical language consisting of symbols and rules for how they may be combined, together with a deductive system (a proof system) that lets us derive new statements computationally from ones we already have. Every formal system starts with a set of axioms: statements assumed to be true within the system without needing proof. In short, this is a long and complicated way of describing typical reasoning methods as we know them. If a system is incomplete, there are true statements in its language that cannot be proven within it.
So the key idea Gödel showed is that in any such formal system, there will be true statements that we cannot prove within the system. It's really important to note a crucial point here that people get wrong. It's not that these statements are unprovable in every formal system; they're only unprovable in the specific formal system we're working with. We can trivially construct a formal system in which the statement is provable, simply by adopting it as an axiom (though the new system will then have unprovable truths of its own).
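The axiom-adoption point can be illustrated with a toy "formal system": a set of statements plus a single inference rule (modus ponens over string implications like `A->B`). This is my own simplified sketch, not a model of arithmetic; real incompleteness arguments need far richer systems, and the statement names here are arbitrary.

```python
def deductive_closure(axioms):
    """Return every statement derivable from the axioms via modus ponens."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            if "->" in s:
                # "A->B": if we know A, we may conclude B.
                premise, _, conclusion = s.partition("->")
                if premise in known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
    return known

system_1 = {"A", "A->B"}
print("B" in deductive_closure(system_1))  # True: B is provable here
print("G" in deductive_closure(system_1))  # False: G is not provable here

# "Trivially" make G provable by adopting it as an axiom:
system_2 = system_1 | {"G"}
print("G" in deductive_closure(system_2))  # True
```

The statement `G` is unprovable in `system_1`, yet provable in `system_2`, which differs only by taking `G` as an axiom. Unprovability is always relative to a system, never absolute.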
This whole thing is cool, sure, but for a long time, I never gave it much thought. That was until I watched an interview with Roger Penrose.
What he says in this video stood out to me. Penrose points out that when we work within certain formal systems and encounter such unprovable statements, we can still see that they are true. We can reach conclusions outside the computational scope of the formal system, simply by using the human mind's capacity for understanding. Even more, we can draw these conclusions purely from our belief in the underlying axioms themselves. This is why Penrose and some other philosophers argue that the human mind operates beyond rationality. It's entirely possible that the underlying mechanisms of our thought and human essence can never be fully described by any formal system's computational capacities.
The Case for Irrationality
However, I will only grant that it's a possibility. I see two cases here, which seem mutually exclusive.
Case 1: There does not exist a formal system that encapsulates human thought through its computation. The human mind is beyond computation and can reach true conclusions that can't be reached rationally.
If this is true, then the human mind is super-rational. Adhering strictly and fully to rationality is a step down from what we are truly capable of as humans. In fact, doing so strips away a part of human nature. It is beneath us and shuts us off from crucial parts of the world we could otherwise access.
As an aside, what would this case imply for a purely computational system such as AI? Perhaps AI could never fully replace the broader needs of the human experience.
Case 2: There does exist a formal system that can perfectly describe human thought through some mapping.
Well, if this were true, then by Gödel's incompleteness (assuming the system is consistent and expressive enough for arithmetic) there are guaranteed to be truths in our domain of thought that we can never prove. There are truths we can conceive of that will never be provable within the formal system our thoughts define. I can already think of countless questions that may fall into this category (God, Love, Purpose, etc.). Living purely rationally would therefore mean missing out on fundamental truths. So what should we do? It seems that, in some respect, we must make room for irrational jumps that lead to conclusions in line with nature as we see it.
Conclusion
We need not live as nonsensical scoundrels. With that said, let's keep in mind that not every crossroads in life, be it God or Love, needs to be solved with reason. Make way for your intuition sometimes; maybe it's the tendrils of something greater pulling at you from the future.
- Sai M.