When someone says they have anecdotes but want data, I hear an opportunity for crowdsourcing.
Perhaps a community blog is the wrong tool for this? What if we had a tool that supported tracking the efficacy of rationalist interventions? People could post specific interventions, and others could report their personal results. The tool would then let you sort interventions by reported aggregate efficacy. Maybe even just a simple voting system?
That seems like it could be a killer app: it would lower the bar enough to encourage newcomers, and let data-poor interventions get posted and evaluated.
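To make the idea concrete, here's a minimal sketch of the data model I'm imagining. Everything here is a placeholder rather than a real design; I'm assuming reports are simple +1 ("helped") / -1 ("didn't help") votes and the aggregate is just their average.

```python
from dataclasses import dataclass, field

@dataclass
class Intervention:
    """An intervention plus self-reported results (+1 = helped, -1 = didn't)."""
    title: str
    reports: list = field(default_factory=list)

    def aggregate_efficacy(self) -> float:
        # Naive average of reports; a real tool would want something
        # less gameable (e.g. confidence- or sample-size-adjusted scores).
        return sum(self.reports) / len(self.reports) if self.reports else 0.0

def rank(interventions):
    """Sort interventions by reported aggregate efficacy, best first."""
    return sorted(interventions, key=lambda i: i.aggregate_efficacy(), reverse=True)

# Example: two hypothetical interventions with a few self-reports each.
melatonin = Intervention("Melatonin for sleep", reports=[1, 1, -1])
pomodoro = Intervention("Pomodoro for focus", reports=[1, -1, -1])
for item in rank([melatonin, pomodoro]):
    print(f"{item.title}: {item.aggregate_efficacy():+.2f}")
```

A real version would obviously need spam resistance and a smarter scoring rule, but the core mechanics really are about this simple.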
So what happened?
It's clear that there's enough interest, and enough people who think this is a good idea, that there will be at least a small long-term market for products like this. I don't think that's really up for debate.
What I'm incredibly interested in, though, is why this is so polarizing. People seem to either go "hmm, yeah, okay, yes" or "OMG NO". Why?
He actually spent the first two months on a Soylent-only diet, and only recently added social eating. I think he said something in his three-month blog post about a week he spent eating normal food, and he ended up feeling way crappier.
There's kind of a growing movement around Rob Rhinehart's Soylent thing, dunno if you folks have heard of this.
Basically, he got tired of making food all the time and tried to figure out the minimum set of chemical compounds required for a healthy diet. He posted the full list, and has now been roughly food-free for three months, along with a bunch of other people.
It seems awesome to me, and I'm hoping this sort of idea becomes more prevalent. My favorite quote from him, which I can't find now, is something along the lines of "I enjoy going to the movie theater, but I don't particularly feel the need to go three times a day."
There are small reddit communities/discussion groups around mixing your own version.
Hey, me too!
Thank you for all your replies! I guess I should figure out how to turn on email notifications or something.
A few thoughts.
1) Yes, if cost goes down, this becomes much more palatable, I agree. However, I didn't mean strictly monetary cost. But yes, overall, a great point. Driving costs down sounds like a reasonable goal.
2) As a few of you pointed out, you're absolutely right that I should be consistent in my claims about selfishness: if the cost of cryonics is equal to that of buying a house, then either I shouldn't buy a house or my objection lies elsewhere. I think this comes back to the problem of considering monetary cost alone. I don't object to buying a house as much, even at the same monetary cost, because presumably I am alive and productively helping society (at least, I would hope so). As far as vacations to the Bahamas go, yeah, I'm not sure I would choose to take said vacation, for similar reasons (seems real selfish to me). So perhaps I'm somewhat consistent (ha).
3) True, evolution does not have a human-style "goal" in mind, and perhaps we have beaten evolution in the sense that it will no longer produce results as productive as what our technological advancements can achieve. So, that's definitely a fair point.
4) My feeling on death is that your time is your time, but in retrospect I guess I have no more reason to feel that way than anyone has to feel they should avoid death. Certainly the point that there's no principled reason life expectancy sits where it currently does is a good one.
So, all, excellent points, well taken. I think I am to the point where my objection to cryonics is only a little above my objection to vacations in the Bahamas. :) Which is to say, still strong - I can understand that others are likely to want to do so, but I doubt I will be encouraging anyone, much less planning trips of my own.
Given that AGI seems imminent and there's currently no good alignment plan, is there any value in discussing what it might take to keep/move the most humans out of the way? I don't want to discourage us from steering the car out of the crash, so by all means we should keep looking for a good alignment plan, but seat belts are also a good idea, right?
As an example: I don't particularly like ants in my house, but we, as intellects superior to ants, aren't going around trying to exterminate them off the face of the Earth, even if mosquitoes are another story. Exterminating all ants just doesn't help achieve our goals; it's a huge expenditure of resources that I don't really care to spend time on. Ants are thriving in a world full of superintelligence (though of course humans are much more similar to ants than an AGI would be to us).
Assuming we fail at alignment, but the AGI's underlying goals don't include exterminating every single human or making the planet uninhabitable, perhaps humans can just try to stay out of the way? Is it valuable to spend time on which human strategies would cause a potential AGI the least grief, or be the most out of the way?
Perhaps there are two angles to this question: (1) how can humans in general be as ant-like in the above dynamic as possible? (2) if you were a peaceful mosquito who had sworn off bothering humans, how could you make yourself, friends, family, loved ones, anyone who will listen, least likely to be exterminated alongside bothersome mosquitoes?
As hyperbole to demonstrate the point: I feel like information workers in S.F. or military personnel in D.C. are more likely to cause an AGI grief than uncontacted tribes on remote islands. An AGI may not decide to invest the energy to deal with the folks on the islands, especially if they are compliant and want to stay there.