I don't think it being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can't be disproven any more than the definition of a triangle can be disproven.
What needs to be done instead is to show that the definition is incoherent or that it doesn't match our intuitions.
Can you explain why that's a misconception? Or at least point me to a source that explains it?
I've started working with neural networks lately and I don't know too much yet, but the idea that they recreate the generative process behind a system, at least implicitly, seems almost obvious. If I train a neural network on a simple linear function, the weights of the network will probably change to reflect the coefficients of that function. Does this not generalize?
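A minimal sketch of what I mean, in Python (a toy example of my own, not from any particular library; the function y = 3x + 2 and the plain gradient-descent loop are just illustrative assumptions):

```python
# Toy illustration: fit a single linear "neuron" to data generated by
# y = 3x + 2 and check that its parameters approach the true coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(1000, 1))
y = 3 * x + 2 + rng.normal(0, 0.01, size=x.shape)  # generative process: y = 3x + 2 plus noise

w, b = 0.0, 0.0   # the "network": one weight, one bias
lr = 0.1          # learning rate for plain gradient descent
for _ in range(2000):
    err = (w * x + b) - y
    # gradient of mean squared error with respect to w and b
    w -= lr * float(np.mean(err * x))
    b -= lr * float(np.mean(err))

print(w, b)  # ends up close to 3 and 2, the coefficients of the generative process
```

The recovered weight and bias land near 3 and 2, which is the sense in which the parameters "reflect" the generative process; whether this scales up to deep networks on complex systems is exactly what I'm unsure about.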
It fits with the idea of the universe having an orderly underlying structure. The simulation hypothesis is just one way that can be true. Physics being true is another, simpler explanation.
Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they're the easiest way to create a friendly general intelligence is another question altogether.
They may be used to create the complex but boring parts of the real AI, like image recognition. DeepMind's system is nowhere near a pure NN; it combines several architectures. So NNs are like Tool AIs inside a larger AI system: they do a lot of work, but at a low level.
Many civilizations may fear AI, but maybe there's a super-complicated but persuasive proof of friendliness that convinces most AI researchers, but has a well-hidden flaw. That's probably a similar thing to what you're saying about unpredictable physics though, and the universe might look the same to us in either case.
Not necessarily all instances. Just enough instances to allow our observations to not be incredibly unlikely. I wouldn't be too surprised if, out of a sample of 100,000 AIs, none of them managed to produce a successful vNP before crashing. In addition to the previous points, the vNP would have to leave the solar system fast enough to avoid the AI's "crash radius" of destruction.
Regarding your second point, if it turns out that most organic races can't produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it...
That's a good point. Possible solutions:
AIs just don't create them in the first place. Most utility functions don't need non-evolving von Neumann probes; instead, the AI itself leads the expansion.
AIs crash before creating von Neumann probes. There are lots of destructive technologies an AI could get to before being able to build such probes. An unstable AI that isn't in the attractor zone of self-correcting fooms would probably become more and more unstable with each modification, meaning that the more powerful it becomes, the more likely it is to destroy itself.
Infinity is really confusing.
My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things I believe with near enough certainty that my mind treats them as certain. The part with the "ALL" is itself part of the statement, which I believe with near certainty; it's not a qualifier on that statement. Sorry I didn't make that clearer.
The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
Suppose I'm destructively uploaded. Let's assume also that my consciousness is destroyed, a new consciousness is created for the upload, and there is no continuity. The upload of me will continue to think what I would've thought, feel what I would've felt, choose what I would've chosen, and generally optimize the world in the way I would've. The only thing it would lack is my "original consciousness", which doesn't seem to have any observable effect in the world. Saying that there's no conscious continuity doesn't seem meaningful. The only actual...
I expect that most people are biased when it comes to judging how attractive they are. Asking people probably doesn't help too much, since people are likely to be nice, and close friends probably also have a biased view of one's attractiveness. So is there a good way to calibrate your perception of how good you look?
One thing that helped me a lot was doing some soul-searching. It's not so much about finding something to protect as realizing what I already care about, even if there are some layers of distance between my current feelings and that thing. I think that a lot of that listless feeling of not having something to protect is just sort of being distracted from what we actually care about. I would recommend just looking for anything you care about at all, even slightly, and focusing on that feeling.
At least that makes sense and works for me.
There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, some of the people with blind idealism may happen to latch onto an idea they can actually accomplish, but most of them will fail. The point of trying to be rational isn't to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.
I think the biggest reason we have to assume that the universe is empty is that the earth hasn't already been colonized.
Ah I see. I was thinking of motte and bailey as something like a fallacy or a singular argument tactic, not a description of a general behavior. The name makes much more sense now. Thank you. Also, you said it's called that "everywhere except the Scottosphere". Could you elaborate on that?
What does the term "doctrine" mean in this context anyway? It's not exactly a belief or anything, just a type of argument. I've seen that it's called that but I don't understand why.
You cite the language's tendency to borrow foreign terms as a positive thing. Wouldn't that require an inconsistent orthography?
Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.
This is probably true. I think a lot of people feel uncomfortable with the possibility of us living in a simulation, because we'd be in a "less real" universe or we'd be under the complete control of the simulators, or various other complaints. But if such super-Turing machines are possible, then the simulated nature of the universe wou...
I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would only take ONE civilization breaking this trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea of a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I...
I strongly suspect that the effectiveness of capitalism as a system of economic organization is proportional to how rational the agents participating in it are. I expect that capitalism only optimizes against the general welfare when people in a capitalist society make decisions that go against their own long-term values. The more rational a capitalist society is, the more it begins to resemble an economist's paradise.
Thank you! That's the first in-depth presentation of someone actually benefiting from MBTI that I've ever seen, and it's really interesting. I'll mull over it. I guess the main thing to keep in mind is that other people are different from me.
I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and continue to feel associated with that identity. I get all the stuff about how rationality should be approached, like "I have this thing I care about, and therefore I become rational to protect it," but I just don't feel that way. I'm not sure how I feel about that.
Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended at a...
I'd have to be stronger than the group in order to get more food than the entire group, but depending on their ability to cooperate I may be able to steal plenty for myself, an amount that would seem tiny compared to the large amount needed for the whole group.
The example I chose was a somewhat bad one, though, because the villagers would have a defender's advantage in protecting their food. You can substitute "abstract, uncontrolled resource" for "food" to clarify my point.
an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue.
Maybe that's still the same kind of status, but with regard to a different domain. Perhaps an effective understanding of status acknowledges that groups overlap and may be formed around different resources. In your example, there is a group (raiders and natives) which forms around literal physical resources, perhaps food. In this group, status is determined by military might, s...
When making Anki cards, is it more effective to ask the meaning of a term, or to ask what term describes a concept?
Would a boxed AI be able to affect the world in any important way using the computer hardware itself? Like, make electrons move in funky patterns or affect air flow with cooling fans? If so, would it be able to do anything significant?
Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.
Thank you to those who commented here. It helped!
Hmm it seems obvious in retrospect, but it didn't occur to me that biochemistry would relate to nanotech. I suppose I compartmentalized "biological" from "super-cool high-tech stuff." Thank you very much for that point!
I'm at that point in life where I have to make a lot of choices about my future. I'm considering doing a double major in biochemistry and computer science. I find both of these topics fascinating, but I'm not sure if that's the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless of which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shou...
I have a nagging voice in my head saying that I shouldn't bother learning biochemistry, because it won't be useful in the long term: everything will be based on nanotech and we will all be uploads. Is that a valid point?
Keeping in mind the biases (EDIT: but also the expertise) that my username indicates, I would say that is nearly exactly backwards - modifications and engineering of biochemistry and biochemistry-type systems will actually occur (and already are) while what most people around here think of when they say 'nanotech' is a pipe dream....
I guess what I'm saying is that since simpler ones are run more, they are more important. That would be true if every simulation was individually important, but I think one thing about this is that the mathematical entity itself is important, regardless of the number of times it's instantiated. But it still intuitively feels as though there would be more "weight" to the ones run more often. Things that happen in such universes would have more "influence" over reality as a whole.
What I mean though, is that the more complicated universes can't be less significant, because they are contained within this simple universe. All universes would have to be at least as morally significant as this universe, would they not?
Another thought: Wouldn't one of the simplest universes be a universal Turing machine that runs through every possible tape? All other universes would be contained within this universe, making them all "simple."
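For what it's worth, here's a toy Python sketch of the dovetailing trick that makes "runs through every possible tape" a single well-defined process. The names are mine, and nothing here simulates a real Turing machine; it only shows the schedule in which program i would receive its step j:

```python
# Dovetailing schedule: one process that, over time, gives every program
# index an unbounded number of execution steps.
from itertools import count

def dovetail_pairs():
    """Enumerate every (program_index, step_number) pair exactly once,
    diagonal by diagonal, so every program eventually gets arbitrarily
    many steps of execution."""
    for diag in count(0):
        for program in range(diag + 1):
            yield (program, diag - program)

# First few entries of the schedule: (0, 0), (0, 1), (1, 0), (0, 2), ...
gen = dovetail_pairs()
for _ in range(10):
    print(next(gen))
```

Under a schedule like this, a single simple machine interleaves all tapes without any one of them blocking the rest, which is the sense in which it could "contain" all the other universes.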
Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that end up with the simpler universe existing within part of it?
Ones I've noticed are "lazy" or "stupid" or other words that are used to describe people. Sure, it can be good to have such models so that one can predict the behavior of a person, like "This person isn't likely to do his work." or "She might have trouble understanding that." The thing is, these are often treated as fundamental properties of an ontologically fundamental thing, which the human mind is not.
Why is this person lazy? Do they fall victim to hyperbolic discounting? Is there an ugh field related to their wo...
I really would like a chronological order.
I really like the cute little story, as you say, but agree that it isn't effective where it is. Maybe include it at the end as a sort of appendix?
Three shall be Peverell's sons and three their devices by which Death shall be defeated.
What is meant by the three sons? Harry, Draco, and someone else? Quirrell perhaps? Using the three Deathly Hallows?
I interpreted this to mean that long ago, there were 3 Peverell brothers, each of whom created one of the Hallows. Harry is descended from this family. Note that it doesn't say that "Peverell's sons" will necessarily be the ones to use their devices to defeat Death, only that the devices are theirs.
I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I'm beginning to see that things aren't so simple.
I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.
It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.
Well, I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess, and any attempt to figure it out just seems to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of the human brain and can construct from it a mathematical model that explains exactly what I am asking when I ask what is right, and what the answer is.
I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.
My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?
I would very much like to attend this, having never attended a meetup before. However, I am currently a minor who lacks transportation ability and have had little luck convincing my guardians to drive me to it. Is there anybody who is attending and is coming from the Birmingham, AL area who would be willing to drive me? I am willing to pay for the service.
I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
Is it probable for intelligent life to evolve?
Robin, or anyone who agrees with Robin:
What evidence can you imagine would convince you that AGI would go FOOM?
While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.
That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:
A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.
Long-run data showing AI systems gradually increasing in capability without any increase in complexity. Th
What evidence would convince you that AGI won't go FOOM?