A tradition of knowledge is a body of knowledge that has been consecutively and successfully worked on by multiple generations of scholars or practitioners. This post explores the difference between living traditions (with all the necessary pieces to preserve and build knowledge), and dead traditions (where crucial context has been lost).
Utilitarianism implies that if we build an AI that successfully maximizes utility/value, we should be ok with it replacing us. Sensible people add caveats related to how hard it’ll be to determine the correct definition of value or check whether the AI is truly optimizing it.
As someone who often passionately rants against the AI successionist line of thinking, the most common objection I hear is "why is your definition of value so arbitrary as to stipulate that biological meat-humans are necessary." This is missing the crux—I agree such a definition of moral value would be hard to justify.
Instead, my opposition to AI successionism comes from a preference toward my own kind. This is hardwired in me from biology. I prefer my family members to randomly-sampled people with...
I will note I’m not Nina and did not write the OP, so can’t speak to what she’d say.
I, though, would consider those I love & who love me, who are my friends, or who love a certain kind of enlightenment morality to be central examples of “my kind”.
This sounds like a question which can be addressed after we figure out how to avoid extinction.
I do note that you were the one who brought in "biological humans," as if that meant the same as "ourselves" in the grandparent. That could already be a serious disagreement, in some other world where it mattered.
Status: musings. I wanted to write up a more fleshed-out and rigorous version of this, but realistically wasn't likely to ever get around to it, so here's the half-baked version.
Related posts: Firming Up Honesty Around Its Edge-Cases, Deep honesty
There are nuances to this, but I think a good summary is 'Not intentionally communicating false information'.
This is the only one here that I follow near-absolutely and see as an important standard that people can reasonably be expected to follow in most situations. Everything else here I'd see as either supererogatory, or good-on-balance but with serious tradeoffs that one can reasonably choose to sometimes not make, or good in some circumstances but not appropriate in others, or good in moderation but not in excess.
...or perhaps...
In a state like this, having a conversation with an LLM can sometimes allow you to get farther.
I think we're seeing a typical result of that in InvisiblePlatypus' comments.
It can't tell for sure whether there will be a backward pass, but it doesn't need to. Just being able to tell, probabilistically, that it is currently in a situation resembling ones it has recently been trained on implies pretty strongly that it should alter its behavior to look for things that might be training-related.
Applications for The Future of Life Foundation's Fellowship on AI for Human Reasoning are closing soon (June 9th!)
They've listed "Tools for wise decision making" as a possible area to work on.
From their website:
Apply by June 9th | $25k–$50k stipend | 12 weeks, July 14 – October 3
Join us in working out how to build a future which robustly empowers humans and improves decision-making.
FLF’s incubator fellowship on AI for human reasoning will help talented researchers and builders start working on AI tools for coordination and epistemics. Participants will scope out and work on pilot projects in this area, with discussion and guidance from experts working in related fields. FLF will provide fellows with a $25k–$50k stipend, the opportunity to work in a shared office
Have the Accelerationists won?
Last November Kevin Roose announced that those in favor of going fast on AI had now won against those favoring caution, with the reinstatement of Sam Altman at OpenAI. Let's ignore whether Kevin's announcement was an accurate description of the world and deal with a more basic question: if it were so, i.e. if Team Acceleration would control the acceleration from here on out, what kind of win was it they won?
It seems to me that they would have probably won in the same sense that your dog has won if she escapes onto the road. She won the power contest with you and is probably feeling good at this moment, but if she does actually like being alive, and just has different ideas about how safe...
Such behavior is in the long-term penalized by selective pressures.
Which ones? Recursive self-improvement is no longer something that only weird contrarians on obscure blogs talk about, it's the explicit theory of change of leading multibillion AI corps. They might all be deluded of course, but if they happen to be even slightly correct, machine gods of unimaginable power could be among us in short order, with no evolutionary fairies quick enough to punish their destructive stupidity (even assuming that it actually would be long-term maladaptive, which ...
What’s the main value proposition of romantic relationships?
Now, look, I know that when people drop that kind of question, they’re often about to present a hyper-cynical answer which totally ignores the main thing which is great and beautiful about relationships. And then they’re going to say something about how relationships are overrated or some such, making you as a reader just feel sad and/or enraged. That’s not what this post is about.
So let me start with some more constructive motivations…
I had a 10-year relationship. It had its ups and downs, but it was overall negative for me. And I now think a big part of the problem with that relationship was that it did not have the part which contributes most...
This is also my experience, so I wonder: for those who downvoted this, is your experience different? Could you tell me more about it? I would like to see what the world is like outside my bubble.
(I suspect that this is easy to dismiss as a "sexist stereotype", but stereotypes are often based on shared information about repeated observations.)