Neutrinos may actually be the temporal kinetic equivalent of photons,
No, neutrinos have all sorts of properties that differ from photons (spin-1/2, three generations, participation in the weak interaction) regardless of their energy-momentum.
Regardless of their spatial energy-momentum. But if I'm not mistaken, all these properties are associated with particles that have mass?
So, I mean equivalent in the sense that they could be packets of temporal kinetic energy (in the form of their mass), in the way that photons are packets of spatial kinetic energy. It's quite possible that because their kinetic energy is temporal rather than spatial, they should have different and complementary properties compared to photons.
Or maybe the hypothetical Axions are a better candidate.
Edit: Or for that matter, the Higgs Boson.
I have a cluster of related physics ideas that I'm currently trying to work out the equations for. For the record, I am not a physicist. My bachelor's is in computing with a specialization in cognitive science, and my master's is in computer science, with my thesis work on neural networks and object recognition.
So with that, my crazy ideas are:
That the invariant rest mass is actually temporal kinetic energy, that is to say, kinetic energy that moves the object through the time dimension of spacetime rather than the spatial dimensions. This would explain why a particle at rest is still moving through time.
The relationship between time and temporal energy is hyperbolic. The more temporal kinetic energy you have, the more frequently you appear in a given period of time (a higher frequency of existence, according to E = hf). A photon, which according to relativity has no mass, doesn't experience the passing of time, and hence moves through space at exactly the speed of light. This can be shown by calculating the proper time interval, Δt0 = Δt·sqrt(1 − v²/c²): an object travelling at the speed of light experiences a proper time interval of 0. So from the relative "perspective" of a photon, travel over any distance seems instantaneous.
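A minimal sketch of the proper-time formula quoted above (this is plain special relativity, nothing specific to the temporal-kinetic-energy idea):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(delta_t, v):
    """Proper time experienced by a clock moving at speed v (m/s)
    during a coordinate-time interval delta_t, per
    delta_t0 = delta_t * sqrt(1 - v^2/c^2)."""
    return delta_t * math.sqrt(1.0 - (v / C) ** 2)

# One second of coordinate time, at increasing fractions of c:
for frac in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"v = {frac:.2f}c -> proper time = {proper_time(1.0, frac * C):.6f} s")
# At v = c the proper time interval is exactly 0: a photon's clock doesn't tick.
```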
Now, consider a motionless black hole (a perfect blackbody), which can be defined entirely by its mass, and the photon gas (blackbody radiation) that is the Hawking radiation produced by the black hole. Together these can be treated as the simplest closed thermodynamic system. As the black hole emits the photon gas, it decreases in mass, suggesting that the mass, a.k.a. the temporal kinetic energy, can be converted into spatial kinetic energy, which is essentially what a photon is a packet of. When a black hole consumes a photon and increases in mass, the reverse process occurs.
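For concreteness, here are the standard numbers behind that bookkeeping, namely the Hawking temperature and E = mc² (the "temporal kinetic energy" reading is the speculative part, not these formulas):

```python
import math

hbar  = 1.054_571_8e-34   # reduced Planck constant, J*s
c     = 2.997_924_58e8    # speed of light, m/s
G     = 6.674_30e-11      # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.380_649e-23     # Boltzmann constant, J/K
M_sun = 1.989e30          # solar mass, kg

def hawking_temperature(M):
    """Blackbody temperature of a Schwarzschild black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def radiated_energy(delta_m):
    """Energy carried off when the hole loses delta_m kg, by E = m c^2."""
    return delta_m * c**2

print(f"T_H of a solar-mass black hole: {hawking_temperature(M_sun):.2e} K")
print(f"Energy released per kg of mass lost: {radiated_energy(1.0):.2e} J")
```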
Also, gravity is proportional to the entropy density in a given region of spacetime. A black hole, for instance, has infinite entropy density, while a photon has essentially none. The reason gravity appears so weak compared to electromagnetism is that much of the force of gravity is spread throughout the temporal dimension and affects things moving at different temporal velocities, while the electromagnetic force affects only things moving at the same temporal velocity.
At least some dark matter may in fact be normal baryonic matter that is travelling through time at a different temporal velocity than we are.
Neutrinos may actually be the temporal kinetic equivalent of photons, and the reason the expansion of the universe seems to have started accelerating about 5 billion years ago may be that this is when the Sun formed, and the Sun's neutrino emissions have since caused a small, steady acceleration in the temporal velocity of the solar system relative to the cosmic background radiation.
I'm really glad that you're interested in this subject.
I recommend the 2009 book (Gallistel and King's Memory and the Computational Brain) for the argument it presents that a symbolic key-value-store memory seems to be necessary for a lot of what the brains of humans and various other animals do. You say it has 'nothing new', so I assume, then, that you're already familiar with this argument.
Link? Sounds uninteresting and not new. I'm beyond skeptical - especially given that a standard RNN (or a sufficiently deep feedforward net) is a universal approximator.
You're referring to Cybenko's theorem and related results, which only establish 'universality' for a very narrow definition of 'universal'. In particular, a feedforward neural net lacks persistent memory. RNNs do not necessarily solve this problem! In many (not all, but the most common) RNN formulations, what exists is simply a form of 'volatile' memory that is easily overwritten when new training data arrives.

In contrast, experiments involving https://en.wikipedia.org/wiki/Eyeblink_conditioning show that nervous systems store persistent memories. In particular, if you train an individual to respond to a conditioning stimulus, later 'un-train' the individual, and then attempt to train the individual again, they will learn much faster than the first time: a persistent change to the neural network's structure has occurred. There have been various attempts to get around this problem with RNNs, such as https://en.wikipedia.org/wiki/Long_short_term_memory, but they wind up being either incredibly large (Cybenko's theorem places no limit on the size of the net) and thus infeasible, or otherwise ineffective.
Why ineffective? Experiments show why. Hesslow and colleagues' recent experiment on cerebellar Purkinje cells (http://www.pnas.org/content/111/41/14930.short) shows that this mechanism of learning spatiotemporal behavior and storing it persistently can be isolated to a single cell. This is very significant: it shows that not only the perceptron model but even the Hodgkin-Huxley model is woefully inadequate for describing neural behavior.
The entire argument about the difference between the 'standard' neural network way of doing things and the way the brain seems to do things revolves around symbolic processing, as I said. In particular, any explanation of memory must account for its persistence and for the fact that symbolic information (numbers, etc.) can be stored and retrieved. The property of retrieval, especially, is often misunderstood. Retrieval means that, given some 'key' or 'pointer' to a memory, we can retrieve that memory.

Often, network explanations of memory revolve around purely associative memories, that is, memories where if you have part of the memory, the system gives you back the rest of it. This is all well and good, but to form a general-purpose memory you need something somewhat different: the ability to recall the memory when all you have is a pointer to it (as is done in the main memory of a computer). This can be implemented in an associative memory, but it requires two additional mechanisms: a mechanism to associate a pointer with a memory, and a mechanism to integrate the memory and pointer together in an associative structure. We do not yet know what form such a mechanism takes in the brain.
Gallistel's other ideas - like using RNA or DNA to store memories - seem dubious and ill-supported by evidence. But he's generally right about the need for a compact symbolic memory system.
What do you actually think memories are? Memories are simply reconstructions of a prior state of the system. When you remember something, your brain literally returns, at least partially, to the state of neural activation it was in when you originally perceived the event you are remembering.
What do you think the "pointer" or "key" to a memory in the human brain is? Generally, it involves priming. Priming is simply presenting a stimulus that has been associated with the prior state.
The "persistent change" you're looking for is exactly how artificial neural networks learn. They change the strength of the connections between the neurons.
Symbol processing is completely possible with an associative network system. The symbol is encoded as a particular pattern of neuronal activations. The visual letter "A" is actually a state in the visual cortex in which a certain combination of neurons fires in response to the pattern of brightness-contrast signals that rod and cone cells generate when we see an "A". The sound "A" is similarly encoded, and our brain learns to associate the two together. Eventually, there is a higher-layer neuron, or pattern of neurons, that activates most strongly when we see or hear an "A", and this "symbol" can then be combined or associated with other symbols to create words, or otherwise processed by the brain.
You don't need some special mechanism. An associative memory can store any memory input pattern completely, assuming it has enough neurons in enough layers to reconstruct most of the possible states of input.
Key or Pointer based memory retrieval can be completely duplicated by just associating the key or pointer to the memory state, such that priming the network with the key or pointer reconstructs the original state.
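As an illustration of that last claim, here is a minimal sketch (my own toy construction, not a model of the brain) using a classic Hopfield-style associative network: each pointer and memory are stored as one concatenated pattern, and retrieval works by clamping the pointer units and letting pattern completion fill in the rest.

```python
import numpy as np

rng = np.random.default_rng(0)
N_KEY, N_VAL = 16, 32            # sizes of the "pointer" and the stored memory

def hebbian_weights(patterns):
    """Outer-product (Hebbian) storage rule for +/-1 patterns."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def recall(W, key, steps=30):
    """Clamp the key units and let pattern completion fill in the memory units."""
    s = np.concatenate([key, np.zeros(N_VAL)])
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
        s[:N_KEY] = key          # the pointer acts as the retrieval cue
    return s[N_KEY:]

# Store three (pointer, memory) pairs as single concatenated patterns.
keys = [rng.choice([-1, 1], N_KEY) for _ in range(3)]
vals = [rng.choice([-1, 1], N_VAL) for _ in range(3)]
W = hebbian_weights([np.concatenate([k, v]) for k, v in zip(keys, vals)])

# With few stored patterns relative to network size, the memory units
# typically settle onto the stored pattern for the cued pointer.
retrieved = recall(W, keys[1])
print("memory bits recovered:", int((retrieved == vals[1]).sum()), "of", N_VAL)
```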
Here, it goes without saying that each of these positions is wrong.
I am under the impression that many in this community are consequentialist and that all consequentialists are moral nihilists by default in that they don't believe in the existence of inherent moral truths (moral truths that don't necessarily affect utility functions).
Uh, I was under the impression that most consequentialists are moral universalists. They don't believe that morality can be simplified into absolute statements like "lying is always wrong", but do still believe in conditional moral universals such as "in this specific circumstance, lying is wrong for all subjects in the same circumstance".
This is fundamentally different from moral relativism, which argues that morality depends on the subject, or moral nihilism, which says that there are no moral truths at all. Moral universalism still holds that there are moral truths, but that they depend on the conditions of reality (in this case, on the consequences being good).
Even then, most Utilitarian consequentialists believe in one absolute inherent moral truth, which is that "happiness is intrinsically good", or that "the utility function should be maximized".
Admittedly some consequentialists try to deny that they believe this and argue against moral realism, but that's mostly a matter of metaethical details.
Defining happiness as "guaranteed increased utility" is questionable. It doesn't consider situations of blissful ignorance, where one feels happy while remaining unaware of something that would arguably lower one's utility.
For simplicity's sake, we could assume a hedonistic view that blissful ignorance about something one does not want is not a loss of utility, defining utility as positive conscious experiences minus negative conscious experiences. But I admit that not everyone will agree with this view of utility.
Also, Aristotle would probably argue that you can have Eudaimonic happiness or sadness about something you don't know about, but Eudaimonia is a bit of a strange concept.
Regardless, given that there is uncertainty about the claims made by the questioner, how would you answer?
Consider this rephrasing of the question:
If you were in a situation where someone (possibly Omega... okay let's assume Omega) claimed that you could choose between two options: Truth or Happiness, which option would you choose?
Note that there is significant uncertainty involved in this question, and that this is a feature of the question rather than a bug. Given that you aren't sure what "Truth" or "Happiness" means in this situation, you may have to elaborate and consider all the possibilities for what Omega could mean (perhaps even assigning them probabilities...). Given this quandary, is it still possible to come up with a "correct" rational answer?
If it's not, what additional information from Omega would be required to make the question sufficiently well-defined to answer?
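For concreteness, here's a minimal sketch of the "assign probabilities to interpretations" move, with every interpretation, probability, and utility a made-up placeholder rather than a claim about the right answer:

```python
interpretations = [
    # (probability, utility if I pick Truth, utility if I pick Happiness)
    (0.4,  5.0, 3.0),   # "Truth" = useful information, "Happiness" = mild mood boost
    (0.3, -2.0, 8.0),   # "Truth" = crushing revelation, "Happiness" = lasting wellbeing
    (0.3,  1.0, 1.0),   # Omega's offer makes little difference either way
]

# Expected utility of each option, averaged over interpretations of the offer.
eu_truth     = sum(p * u_t for p, u_t, _ in interpretations)
eu_happiness = sum(p * u_h for p, _, u_h in interpretations)
print(f"E[U | Truth]     = {eu_truth:.2f}")
print(f"E[U | Happiness] = {eu_happiness:.2f}")
```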
I don't think this question is sufficiently well-defined to have a true answer. What does it mean to have/lack truth, what does it mean to have/lack happiness, and what are the extremes of both of these?
If I have all the happiness and none of the truth, do I get run over by a car that I didn't believe in?
If I have all the truth but no happiness, do I just wish I would get run over? Is there anything to stop me from using the truth to make myself happy again? Failing that, is there anything that could motivate me to sit down for an hour with Eliezer and teach him the secrets of FAI before I kill myself? This option at least seems like it has more loopholes.
I admit this version of the question leaves substantial ambiguity that makes it harder to calculate an exact answer. I could have constructed a more well-defined version, but this is the version that I have been asking people already, and I'm curious how Less Wrongers would handle the ambiguity as well.
In the context of the question, it can perhaps be better defined as:
If you were in a situation where you had to choose between Truth (guaranteed additional information), or Happiness (guaranteed increased utility), and all that you know about this choice is the evidence that the two are somehow mutually exclusive, which option would you take?
It's interesting that you interpreted the question to mean all or none of the Truth/Happiness, rather than what I assumed most people would interpret the question as, which is a situation where you are given additional Truth/Happiness. The extremes are actually an interesting thought experiment in and of themselves. All the Truth would imply perfect information, while all the Happiness would imply maximum utility. It may not be possible for these two things to be completely mutually exclusive, so this form of the question may well just be illogical.
I have a slate of questions that I often ask people to try to better understand them. Recently I realized that one of these questions may not be as open-ended as I'd thought, in the sense that it may actually have a proper answer according to Bayesian rationality. Though, I remain uncertain about this. The question is actually quite simple, so I offer it to the Less Wrong community to see what kinds of answers people come up with, as well as what the majority of Less Wrongers think. If you'd rather, you can private message me your answer.
The question is:
Truth or Happiness? If you had to choose between one or the other, which would you pick?
I'm impressed they managed to get the Big Three of the Deep Learning movement (Geoffrey Hinton, Yann LeCun, and Yoshua Bengio). I remember that at the 27th Canadian Conference on Artificial Intelligence in 2014, I asked Professor Bengio what he thought of the ethics of machine learning, and he asked if I was a reporter. XD
As someone with a very nuanced view of abortion, and as a recent EA convert who was thinking about writing on this topic, I'm glad you wrote this. It's probably a better and more well-constructed post than what I would have been able to put together.
Thanks! It took a long time - and was quite stressful. I'm glad you liked it.
The argument in your post, though, seems to assume that we have only two options: either to totally ban all abortion or not to ban it at all.
I actually deliberately avoided discussing legal issues (ban or not ban) because I felt the purely moral issues were complicated enough already.
Actually, I can imagine that one way of integrating EA considerations into my old ideas would be to weigh the value of the fetus not only by its "personhood", but also by its "potential personhood given moral uncertainty" and its expected QALYs.
Yeah, if you want to do both you need a joint probability distribution, which seemed a little in-depth for this (already very long!) post.
I had another thought as well. In your calculation, you only factor in the potential person's QALYs. But if we're really dealing with potential people here, what about the potential offspring or descendants of the potential person as well?
What I mean by this is, when you kill someone, generally speaking, aren't you also killing all that person's future possible descendants as well? If we care about future people as much as present people, don't we have to account for the arbitrarily high number of possible descendants that anyone could theoretically have?
So, wouldn't the actual number of QALYs be more like +/- Infinity, where the sign of the value is based on whether or not the average life has more net happiness than suffering, and as such, is considered worth living?
Thus, it seems like the question of abortion can be encompassed in the question of suicide, and whether or not to perpetuate or end life generally.
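To make the divergence worry concrete: if each person averages r descendants per generation, the expected QALY total is a geometric series that diverges whenever r ≥ 1. A toy sketch with made-up numbers:

```python
def expected_descendant_qalys(qaly_per_person, r, generations):
    """Total expected QALYs across future generations, assuming each person
    averages r children (a toy model; real demography is far messier)."""
    return sum(qaly_per_person * r**g for g in range(generations + 1))

# With r >= 1 the partial sums grow without bound, matching the
# "+/- infinity" intuition above; with r < 1 they converge to
# qaly_per_person / (1 - r).
for r in (0.8, 1.0, 1.2):
    print(r, expected_descendant_qalys(70.0, r, 100))
```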
Yes, this is why I said you can implement general-purpose memory with associative memory. However, you need two additional mechanisms which the naive associative view doesn't address. First, you need the ability to create a pointer for a newly generated memory and to associate the two together; the basic RNN-based associative memory formulation does not have this mechanism, and we have no idea what form it takes in the brain. Second, you need the ability to work directly on pointers and to store pointers themselves in memory locations which can then be pointed to, though this is more a processing constraint.
You're assuming that a Von Neumann Architecture is a more general-purpose memory than an associative memory system, when in fact, it's the other way around.
To get your pointer-based memory, you just have to construct a pointer as a specific compression or encoding of the memory in the associative network. For instance, you could mentally associate the number 2015 with a series of memories that have occurred in the last six months. In the future, you could then retrieve all memories that have been "hashed" to that number just by being primed with the number.
Remember that even on a computer, a pointer is simply a numerical value that represents the "address" of the particular segment of data that we want to retrieve. In that sense, it is a symbol that connects to and represents some symbols, not unlike a variable or function.
We can model this easily in an associative memory without any additional mechanisms, simply by having a multi-layer model that can combine and abstract different features of the input space into what are essentially symbols or abstract representations.
Von Neumann architecture digital computers are nothing more than physical symbol-processing systems, which is to say, just one of many possible implementations of a Turing machine. According to Hava Siegelmann, a recurrent neural network with real-valued (infinite-precision) weights would, theoretically speaking, be capable of super-Turing computation.
If that isn't enough, there are already models called Neural Turing Machines that combine recurrent neural networks with the Von Neumann memory model to create networks that can directly interface with pointer-based memory.
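For a flavor of how such models bridge the two views, here's a simplified sketch of the content-based addressing a Neural Turing Machine uses to read from its memory matrix (the real model adds location-based addressing and learned parameters; this toy version and its function name are my own simplification):

```python
import numpy as np

def content_address_read(memory, key, beta=5.0):
    """Content-based read: softmax over cosine similarities between the key
    and each memory row, then a similarity-weighted sum of the rows."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    w /= w.sum()
    return w @ memory, w

rng = np.random.default_rng(1)
M = rng.normal(size=(8, 16))            # 8 memory slots, 16-dim contents
key = M[3] + 0.1 * rng.normal(size=16)  # a noisy cue for slot 3
read, weights = content_address_read(M, key)
print("read weights:", np.round(weights, 3))  # should peak at slot 3
```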