But is being an effective human all that "rational"?

When I look at humans who are hugely successful, I do not see rationality as their guiding principle.

Aren't there lateral benefits to learning something as complex as a new language? The level of mental focus and commitment required must have cognitive rewards, and I would think any level of cognitive improvement would be of great value.

Learning any language requires a certain level of immersion in cultural concepts and perspectives outside of your own. Broadening cultural awareness and gaining new perspectives certainly contributes to an individual's ability to see the world with increased clarity.

It seems to me that measuring the worth of learning anything solely by how directly one might make use of it cannot capture its total value.

No, I'm sure it is just my lack of knowledge regarding philosophy and the associated methods of discussing it. I never actually believed that the author was trying to convince me that death was not bad, but (as I stated above) playing devil's advocate in order to explore ideas and challenge the reader. I simply wouldn't know enough about it to name it the "Socratic method". My bad.

"A purely rational person would be nigh omniscient"

Even at current human intelligence levels? I don't see how pure rationality without the ability to crunch massive amounts of data extremely fast would make someone omniscient, but I may be missing something.

"If a combustible engine does more good than bad (which it does)"

Of course, I'm playing devil's advocate with this post a bit, but I do have some uncertainty about.... well, your certainty about this :)

What if a purely rational mind decides that while there is a high probability that the combustion engine would bring about more "good" than "bad", the probable risks compel them to reject its production in favor of first improving the technology into something with a better reward/risk ratio? A purely rational mind would certainly recognize that, over time, reliance on gasoline derived from oil would lead to shortages and potential global warfare. That is a rather high-probability risk. Perhaps a purely rational mind would opt to continue development until a more sustainable technology could be mass produced, greatly reducing the potential need for war/pollution/etc. Keep in mind, we have yet to see the final aftermath of our combustion-engine reliance.

"The lives lost due to technological advancements have been dwarfed by the lives saved."

How does a purely rational mind feel about the inevitable overpopulation issue that will occur if more and more lives are saved and/or extended by technology? How many people lead very low-quality lives today due to overpopulation? Would a purely rational mind make decisions to limit population rather than help it explode?

Does a purely rational mind value life less or more? Are humans MORE expendable to a purely rational mind so long as it is 51% beneficial, or is there a rational reason to value each individual life more passionately?

I feel that we tend to associate pure rationality with a rather sci-fi notion of robotic intelligence. In other words, pure rationality is cold and mathematical and would consider compassion a weakness. While this may be true, a purely rational mind may have other reasons than compassion to value individual life MORE rather than less, even when measured against a potential benefit.

The questions seem straightforward at first, but is it possible that we lean toward the easy answers that may or may not be highly influenced by very irrational cultural assumptions?

Nice, I'm going to experiment with this. It is like a thought experiment that intentionally creates the opportunity for "accidental discoveries", which are usually the most useful.

Right..... :) Oops. Fixed.

But the same paragraph continues with: "But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don't exist?"

It feels like the article is playing devil's advocate, but I perceived that the bulk of it was arguing that the sentiment that death is "bad" is rather irrational.

The last lines of the article:

"So is death bad for you? I certainly think so, and I think the deprivation account is on the right track for telling us why. But I have to admit: Puzzles still remain."

Kagan does feel that death is "bad", but he only throws this in at the very end after spending the entirety of the article arguing the opposite.

One of his dominant questions is: Why do we feel bad about the loss of time after our death as opposed to feeling bad about the loss of time before our birth? I won't go into detail here about the article's content, but I do have a thought about it.

This is just me running with an idea in the moment, so I apologize if it is not well organized:

Let's say we have just purchased tickets to a concert. It features a band we have always wanted to see play live, and the concert is several months away. We may certainly feel impatient and agonize over the wait, but in some sense the anticipation is a build-up to the inevitable moment of pleasure we feel when the actual day arrives, followed by the actual moment when we are at the concert hearing the band play in a crowd of people. Once the concert is over, it is over in every sense. The anticipation--having something to look forward to--is over, AND the event itself is over.

If we look at being born and subsequently dying as though they are similar to buying tickets to a concert and attending the concert, I think we can define why the time before the concert is not perceived as "bad" but the time after the concert has ended could certainly be perceived as "bad". Before we are born, the events of the world can be perceived as the build-up, the anticipation phase, or "buying the ticket". The world is being prepped for our entry. Life itself is the concert; it is the show we all want to be a part of.... we want to be in that crowd hearing the music. When the concert is over, there is an inevitable sense of loss. Everything leading up to the concert fueled the ultimate enjoyment of the concert itself. What comes after the concert can only be seen as "less appealing", or "bad" in comparison to the build-up to and excitement of the concert itself.

In other words, we see the events leading up to something we "want" as being positive, even if they present some level of agitation due to impatience or a strong desire to just get there already. We inherently know that the waiting will make it all that much sweeter. Yet the end of something we "want" to continue is difficult to define as anything but "bad".

Being upset about the time lost BEFORE our birth would be like being upset about missing a concert we never wanted to buy tickets for in the first place.

I have never been good at math and a high percentage of content discussed here is over my head. However, I am hoping this does not exclude me from a sincere attempt to grasp the general concepts and discuss them as best I can. In other words, I'm hoping my enthusiasm makes up in some way for my total ignorance.

My take on this is that, within a mathematical equation, if a specific variable does not have a discernible impact on the resulting value, it is irrelevant to the equation. Such a variable may exist merely as a conceptual "comfort" to the human method of perceiving the universe, but that doesn't mean it serves any meaningful or rational purpose within the equation. If pure rationality is the ideal, then all "truths" should be reduced to their simplest form. In other words, trim the fat no matter how damn tasty it is.
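To put a toy version of that in symbols (just my own sketch of the idea, not anything from the post I'm responding to): if changing a variable never changes the output, the variable is doing no work and can be trimmed away.

```latex
% Toy sketch (my own illustration): t is irrelevant to f when the output
% never depends on it. If f(x, t) = g(x) for every value of t, then the
% t-dependence vanishes and f carries no information beyond g, so t can
% be dropped from the "equation".
f(x, t) = g(x) \;\; \forall t
\quad\Longrightarrow\quad
\frac{\partial f}{\partial t} = 0
```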

If all possibilities exist at all times as variable probabilities, I can begin to grasp the irrelevance of time as being necessary to arrive at meaningful insights about the universe. If time always exists as an infinite quantity, it may as well be zero, because along an infinite timeline all possibilities, even those with vanishingly small but nonzero probability, will occur.
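Roughly what I mean, in back-of-the-envelope terms (my own arithmetic, not anything from the post): for any event with a fixed nonzero chance per independent trial, the probability that it never happens shrinks toward zero as the number of trials grows without bound.

```latex
% Back-of-the-envelope sketch (my own illustration): an event with
% probability p > 0 per independent trial occurs at least once in n
% trials with probability
P(\text{at least once in } n \text{ trials}) = 1 - (1 - p)^n \longrightarrow 1
\quad \text{as } n \to \infty,
% so over an unbounded timeline, any event with fixed nonzero
% probability is almost certain to occur eventually.
```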

I am wholly new to all of these concepts and as I stated, math might as well be a rapid-fire auctioneer speaking a foreign language. The above thoughts are the best I could solidify and I would love to know if I'm even in A ballpark... not THE ballpark, but at least A ballpark that is somewhere near relevant.

You caught me... I tend to make overly generalized statements. I am working on being more concise with my language, but my enthusiasm still gets the best of me too often.

You make a good point, but I don't necessarily see the requirement of massive infrastructures and political will as the primary barriers to achieving such goals. As I see it, any idea, no matter how grand or costly, is achievable so long as a kernel exists at the core of that idea that promises something "priceless", whether spiritually, intellectually, or materially. For example, a "planet-cracking nuke" can only have one outcome, the absolute end of our world. There is no possible scenario imaginable where cracking the planet apart would benefit any group or individual. (Potentially, in the future, there could be benefits to cracking apart a planet that we did not actually live on, but in the context of the here and now, a planet-cracking nuke holds no kernel, no promise of something priceless.)

AI fascinates because, no matter how many horrific outcomes the human mind can conceive of, there is an unshakable sense that AI also holds the key to unlocking answers to questions humanity has sought from the beginning of thought itself. That is a rather large kernel, and it is never going to go dim, despite the very real OR the absurdly unlikely risks involved.

So, it is this kernel of priceless return at the core of an "agent AI" that, for me, makes its eventual creation a certainty on a long enough timeline, not a likelihood ratio.

I cannot fathom how this is any more than a distraction from the hardline reality that when human beings gain the ability to manufacture "agent AI", we WILL.

Any number of companies and/or individuals can ethically choose to focus on "tool AI" rather than "agent AI", but that will never erase the inevitable human need to create that which it believes and/or knows it can create.

In simple terms, SI's viewpoint (as I understand it) is that "agent AIs" are inevitable.... some group or individual somewhere at some point WILL produce the phenomenon, if for no other reason than that it is human nature to look behind the curtain no matter what the inherent risks may be. History has no shortage of proof in support of this truth.

SI asserts that (again, as I understand it) it is imperative for someone to at least attempt to create a friendly "agent AI" FIRST, so there is at least a chance that human interests will be part of the evolving equation... an equation that could potentially change too quickly for humans to assume there will be time for testing or second chances.

I am not saying I agree with SI's stance, but I don't see how an argument that SI should spend time, money, and energy on a possible alternative to "agent AI" is even relevant when the point is explicitly that it doesn't matter how many alternatives there are nor how much safer they may be for humans; "agent AI" WILL happen at some point in the future, and its impact should be addressed, even if our attempts at addressing those impacts are ultimately futile due to unforeseen developments.

Try applying Karnofsky's style of argument above to the creation of the atomic bomb. Using the logic of this argument in a pre-atomic world, one would simply say, "It will be fine so long as we all agree NOT to go there. Let's work on something similar, but with less destructive force," and expect this to stop the scientists of the world from proceeding to produce an atomic bomb. Once the human mind becomes aware of the possibility of something that was once considered beyond comprehension, it will never rest until it has been achieved.
