Declan Molony

"If you're thinking without writing, you only think you're thinking." - Leslie Lamport


I believe a parole eligibility date is set during sentencing. That date can, interestingly enough, be set far in the future, past the expected lifespan of the prisoner.

The list I cited in my post (which is from Wikipedia) includes a section titled "Prisoners sentenced to one life imprisonment with possibility of parole after a period that exceeds a natural human life".

Admittedly, I'm not the most knowledgeable when it comes to the law. Do I have something wrong here?

I know that for Parkinson's, there are focused ultrasound therapies being tested on patients. These are one-time applications that have had lasting positive effects.

If we can apply something similar to antisocial behavior (assuming it, too, can be targeted in a specific brain region), then the problem of getting prisoners to keep taking medicine can be avoided.

The post asks the addressable question: what should we do about prisoner healthcare?

Ignoring the 967-years scenario (as that's more of a hypothetical meant to conjure abstract or philosophical thoughts in the reader), there's still the question of whether longevity medicine should be given to prisoners.

If yes:

  • Then the government is extending a prisoner's incarceration (and suffering) even longer if the prisoner is not set for release for hundreds of years, which seems cruel.

If no:

  • Then the government is condemning them to a premature death while the rest of us live longer.

 

Both of these options are suboptimal. The best-case scenario is that we ignore them and find a third alternative: a rehabilitation-based approach to prison (versus the current confinement-based punishment).

In reckoning with my feelings of doom, I wrote a post in which I drew upon Viktor Frankl's popular book Man's Search for Meaning. Here's an excerpt from that post discussing why hope is not cheap, but necessary for day-to-day life:

Frankl, a psychologist and Holocaust survivor, wrote about his experience in the concentration camps. He observed two types of inmates—some prisoners collapsed in despair, while others were able to persevere in the harsh conditions:

"Psychological observations of the prisoners have shown that only the men who allowed their inner hold on their moral and spiritual selves to subside eventually fell victim to the camp’s degenerating influences.

"We who lived in concentration camps can remember the men who walked through the huts comforting others, giving away their last piece of bread. They [were] few in number, but they offer sufficient proof that everything can be taken from a man but one thing: the last of the human freedoms—to choose one’s attitudes in any given set of circumstances.

A depressed person honestly believes there’s no hope that things will improve. Consequently, they collapse into a vegetative state of despair. It follows, therefore, that hope is literally the prerequisite for action.

So if I didn’t implicitly believe there’s a tomorrow worth living, then I wouldn’t have written this post.

I concluded that post with this:

If a storm disturbs the rock garden of a Zen monk, the next day he goes to work to restore its beauty.

Don't look away

But beyond "what do I believe," I think what you pay attention to is of equal or greater importance. A person can correctly believe that we face doom yet try to simply not think about it. In effect, if you never think about doom, are you any different from someone who doesn't believe in it at all?

The movie Don't Look Up did a good job of capturing the feeling of doom and how the global citizenry might react to an apocalyptic event. Many people in the movie chose to live in blissful ignorance.

 

I discovered LessWrong a year ago but never read the AI-related material. I had a feeling I wouldn't like it, so I avoided it. Now that I'm in the thick of it (as of a month ago), I'm reminded of this text my Christian friend sent me 5 years ago:

She was, of course, referring to religion, but it's an excellent series of questions that can equally be applied to AGI-related doom. 

Who can I talk to about my doom? I tried discussing the implications with two of my married friends yesterday: the husband was receptive to the topic, but the wife refused to engage in the discussion because it was too stressful.

I tried talking to my parents about it, but they're older and don't understand AI.

I thought about trying talk therapy for the first time, but if the therapist is uninformed about AGI, I don't want to be the one to introduce them to new stress and existential angst.

OP may be interested in a framework I created that Evaluates the ROI of Information. In it, I write:

While stimulating myself with new information all day long (as I imagine many people do), it can be easy to fool myself into thinking that I'm making progress toward a goal.

The first principle is that you must not fool yourself—and you are the easiest person to fool. — Richard Feynman

By evaluating the return on investment (ROI) of different sources of information, I can focus on just consuming the information that helps me make forward progress in life.

So within my framework, it sounds like podcasts for you would fall under the Trivia or Mental Masturbation categories, and not Effective Information.

 

I also wrote a post called Mental Masturbation and the Intellectual Comfort Zone, which goes into more depth on how our brains convince us to consume more information than we need.

Nonfiction books are typically around 300 pages because that's what sells, and to hit 300 pages, authors often need to add a lot of fluff.

Agreed. That's why when I occasionally find a book of fewer than 300 pages (say, 100–200 pages), I think, "Wow, this author managed not only to streamline the book but also to convince their publishing house that the book is better for it." That makes me want to read the streamlined book even more.

Less Wrong has changed my life for the better. But it's time to say goodbye.

When I discovered LW over a year ago, it was reading your posts that inspired me to start writing my own LW posts. Publishing my thoughts has dramatically increased my rationality and writing skills. Your Fear Heuristic, in particular, helped me overcome my social anxiety when I moved to a new city. So thank you for posting on LW. ❤️

No, I wasn't using a third-party. I was viewing it on PC.

It looks normal today and I'm seeing paragraph breaks now.

I enjoy reading your posts, but I skip over 300-word blocks of text like the following. Without paragraph breaks or white space, they're too dense for me to want to read.

Thinking about AI impacts down the line without robotics seems to me like thinking about the steam engine without railroads, or computers without spreadsheets. You can talk about that if you want, but it’s not the question we should be asking. And even then, I expect more – for example I asked Claude about automating 80% of non-physical tasks, and it estimated about 5.5% additional GDP growth per year. Another way of thinking about Dean Ball’s growth estimate is that in 20 years of having access to this, that would roughly turn Portugal into the Netherlands, or China into Russia. Does that seem plausible? If you make a sufficient number of the pessimistic objections on top of each other, where we stall out before ASI and have widespread diffusion bottlenecks and robotics proves mostly unsolvable without ASI, I suppose you could get to 2% a year scenario. But I certainly wouldn’t call that wildly optimistic. I will reiterate my position that various forms of ‘intelligence only goes so far’ are almost entirely a Skill Issue, certainly over a decade-long time horizon and at the margins discussed here, amounting to Intelligence Denialism. The ASI cuts through everything. And yes, physical actions take non-zero time, but that’s being taken into account, future automated processes can go remarkably quickly even in the physical realm, and a lot of claims of ‘you can only know [X] by running a physical experiment’ are very wrong, again a Skill Issue. On the decreasing marginal value of goods, I think this is very much a ‘dreamed of in your philosophy’ issue, or perhaps it is definitional. I very much doubt that the physical limits kick in that close to where we are now, even if in important senses our basic human needs are already being met.
