Space dilation happens because the patterns caused by high speed travel cause the 3D grid pattern to become unstable and the illusion that dimensions exist breaks down.
In special relativity all inertial reference frames are equivalent, even those moving at 99% of light speed. None is more unstable than any other; all physics works exactly the same. Good luck reproducing that with any kind of grid. You'd be more likely to get some kind of waves that propagate at fixed speed along the grid, giving you a privileged rest frame, like in the old discredited theories of aether.
Quantum effects happen when patterns (particles) spread across nodes that still have connections between them besides those connections that make up the primary 3D grid.
Explain Grover's algorithm, then. How the hell can I guess a black-box secret with n possibilities using only ~sqrt(n) attempts? Hidden connections aren't enough; quantum physics allows more computation as well.
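For what it's worth, the sqrt(n) speedup is easy to see in a toy state-vector simulation (plain NumPy on a classical machine, not a real quantum device; n and the marked index are arbitrary):

```python
import numpy as np

# Toy state-vector simulation of Grover search: find one marked index
# among n using ~sqrt(n) oracle queries instead of ~n classical guesses.
n = 64
marked = 17
psi = np.full(n, 1 / np.sqrt(n))                    # uniform superposition

iterations = int(np.floor(np.pi / 4 * np.sqrt(n)))  # ~ (pi/4) * sqrt(n)
for _ in range(iterations):
    psi[marked] *= -1                # oracle: flip the sign of the marked amplitude
    psi = 2 * psi.mean() - psi       # diffusion: reflect every amplitude about the mean

prob = psi[marked] ** 2
print(iterations, prob)              # 6 queries for n = 64, probability near 1
```

Measuring after those 6 iterations yields the marked item with better than 99% probability, versus an expected 32 classical queries for n = 64.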
Also, hidden connections would make FTL communication possible, which is impossible. You'd better learn how entanglement works. Two distant observers can't use it to communicate, but when they come together later and compare notes, it makes them go "hmm, that's spooky". It's a very delicate middle ground. I wrote up a short example some time ago; maybe it'll help.
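The no-communication point can be checked directly in a toy calculation: whatever measurement basis Bob picks for his half of a Bell pair, Alice's own statistics stay 50/50, so no signal gets through (a minimal NumPy sketch; the angles are chosen arbitrarily):

```python
import numpy as np

# Bell pair (|00> + |11>)/sqrt(2): Bob's choice of measurement basis
# cannot change the statistics Alice sees on her own qubit.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

def alice_marginal(theta):
    """Alice's outcome probabilities when Bob measures at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    bob_basis = (np.array([c, s]), np.array([-s, c]))
    probs = []
    for a in (np.array([1, 0]), np.array([0, 1])):      # Alice's fixed basis
        # Sum over Bob's outcomes: |<a, b | bell>|^2
        p = sum(abs(np.kron(a, b) @ bell) ** 2 for b in bob_basis)
        probs.append(p)
    return probs

for theta in (0.0, 0.5, 1.2):
    print(theta, alice_marginal(theta))   # always [0.5, 0.5], whatever Bob does
```

The joint correlations do depend on theta, which is exactly what the observers discover when they compare notes afterwards; but each side's local data alone is just coin flips.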
More generally, don't try to come up with physical theories if you don't wanna learn physics. Knowing physics will let you come up with ten such "new perspectives" before breakfast and do some useful work besides. They will be crazier too. How about the holographic universe? Or Wheeler's idea that all electrons have the same properties because they're the same electron going back and forth in time? In comparison, most ideas coming from non-physicists are painfully boring (not to mention wrong).
You'd be more likely to get some kind of waves that propagate at fixed speed along the grid, giving you a privileged rest frame, like in the old discredited theories of aether.
I'll try to steelman Florian_Dietz.
I don't know much of anything about relativity, but waves on a grid in computational fluid dynamics (CFD for short) typically don't have the problem you describe. I do vaguely recall some strange methods that do, from a Lagrangian CFD class I took, but they are definitely non-standard and I think were used merely as simple illustrations of a class of methods.
Plus, some CFD methods like the numerical method of characteristics discretize in different coordinates that follow the waves. This can resolve waves really well, but it's confusing to set up in higher dimensions.
CFD methods are just particularly well developed numerical methods for physics. From what I understand analogous methods are used for computational physics in other domains (even relativity).
I don't know much of anything about relativity, but waves on a grid in computational fluid dynamics (CFD for short) typically don't have the problem you describe.
Not even for wavelengths not much longer than the grid spacing?
I don't see how that would be a problem. Perhaps I'm missing something, so if you could explain I'd be appreciative.
Usually the problem is that wavelengths smaller than the grid size obviously can't be resolved. A class of turbulence modeling approaches can help with that to a certain extent. This class of methods is called "large eddy simulation", or LES for short. You apply a low pass filter to the governing equations and then develop models for "unclosed" terms. In practice this is typically done less rigorously than I'd like, but it's a valid modeling approach in general that should see more use in other fields. (Turbulence modeling is an interesting field in itself that a rational person might be interested in studying simply for the intellectual challenge.)
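To illustrate just the filtering step with an invented one-dimensional signal (this is only the low-pass filter, not a real turbulence computation):

```python
import numpy as np

# Box filter demo: a low-pass filter whose width equals the short
# wavelength removes that mode while barely touching the long one.
N = 400
x = np.arange(N) / N
u = np.sin(2 * np.pi * 2 * x) + np.sin(2 * np.pi * 40 * x)   # long + short wave

width = N // 40                          # filter width = one short wavelength
kernel = np.ones(width) / width
# Tile the signal to fake periodic boundaries, then keep the middle copy.
u_bar = np.convolve(np.tile(u, 3), kernel, mode="same")[N:2 * N]

amps = np.abs(np.fft.rfft(u_bar)) / (N / 2)   # per-mode amplitudes
print(amps[2], amps[40])                      # long mode ~1, short mode ~0
```

In LES proper, applying this kind of filter to the governing equations leaves unclosed sub-filter terms, and those are what the turbulence models approximate.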
This is not really a theory. I am not making predictions, I provide no concrete math, and this idea is not really falsifiable in its most generic forms. Why do I still think it is useful? Because it is a new way of looking at physics, because it makes everything much easier and more intuitive to understand, and because it makes all the contradictions go away.
Let's compare it with an alternative theory that there are invisible magical wee beasties all around who make the physics actually work by pushing, pulling, and dragging all the stuff. And "there are alternative interpretations for explaining relativity and quantum physics under this perspective" -- sometimes the wee beasties find magic mushrooms and eat them.
It's a tie! But the beasties are cuter, so they win.
On one hand, you're completely right. On the other hand, your comment leaves an obvious opening (models made of graphs are easier to compute than models made of beasties). When I reply to something, I usually steelman it in my mind first. That often leads to interesting ideas and makes my reply sound deeper, which leads to upvotes as you can see :-)
I only skimmed this post, but I want to point out that most computational physics (and engineering) uses discretized space and time much as you've described. This is not new, just how things are often computed in practice.
Whether or not reality is discrete in this sense is beyond my knowledge as an engineer, but I have had conversations with physicists about this. (As I recall, it's possible, but the spatial and temporal resolution would be very small.)
Also, there are some exact solutions for discretized physics like this, but in general it's harder to do. Plus, because physical laws tend to be written in continuous form, very few people look for exact solutions like this.
makes all the contradictions go away
Not really. In computational fluid dynamics, converting to discrete equations can introduce major problems. One important problem is conservation. Depending on how you formulate your discrete equations, mass, energy, etc. may no longer be conserved, and might not even be approximately conserved. "Equivalent" continuous equations would not have the same problem. And I would not say solving this problem is trivial by any means, though I know at least one way to do it.
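To make that concrete, here is a toy comparison for the inviscid Burgers equation with first-order upwinding (the schemes and parameters are chosen only to show the contrast, not as a recommended method):

```python
import numpy as np

# Burgers' equation u_t + u u_x = 0 on a periodic grid, first-order upwind.
# The conservative flux form keeps total "mass" fixed to round-off;
# the non-conservative (advective) form lets it drift.
N, dt, steps = 100, 0.002, 200
dx = 1.0 / N
x = np.arange(N) * dx
u0 = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # positive everywhere, so upwind = backward

def mass(u):
    return dx * u.sum()

uc = u0.copy()   # conservative: du/dt = -(F_i - F_{i-1})/dx with flux F = u^2/2
un = u0.copy()   # non-conservative: du/dt = -u_i * (u_i - u_{i-1})/dx
for _ in range(steps):
    F = 0.5 * uc ** 2
    uc = uc - dt / dx * (F - np.roll(F, 1))            # flux differences telescope
    un = un - dt / dx * un * (un - np.roll(un, 1))     # no telescoping, mass leaks

print(abs(mass(uc) - mass(u0)))   # essentially zero (round-off only)
print(abs(mass(un) - mass(u0)))   # clearly nonzero: mass has drifted
```

The flux differences in the conservative form cancel in pairs when summed over a periodic grid, which is why that discretization conserves exactly; the advective form has no such telescoping structure.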
Seems to me you are trying to replace the "infinite precision" by something else, without actually addressing the things that made people assume "infinite precision" in the first place.
I mean the fact that this universe doesn't behave like one built on a grid: if you rotate things by a random angle, move them by a random distance, or even put everything on a platform moving uniformly in a random direction, the experiments keep running the same way.
If your hypothesis is that at the particle level the grid is already too coarse-grained, so that weird things happen, then particles moving differently at different angles is among the things that should happen. So is particles behaving differently when they are on a moving platform, such as the planet Earth.
I think Wolfram proposes almost exactly this in "A New Kind of Science" (which also borders on crackpottishness).
That is to say, reality at a point of time
Nope. Time is inside reality, remember? You cannot have a single defined time for all reality.
I believe it's a beautiful idea that doesn't obey the second law of thermodynamics. I like it because it's a different paradigm for the link between information and reality, based on boolean projections and topology. However, the thermodynamic cost of erasing de-entangled bits at the quantum scale would be enormous. There is not enough noise in the system to cool it down. In that respect, it might have value as a pre-Big-Bang hypothesis, but from what I can tell it would release energy at the Planck scale instantly.
"There is no contradiction between quantum physics and relativity if the very concept of distance is unreliable" -- that is correct, but we exist at the "sweet spot" right between them. You could say distance cannot be unreliable because Maxwell's equations are differential.
A section of three-dimensional space can be modelled as a cubic grid with nodes where the edges intersect, up to some limited resolution for a cube of finite volume (and I suppose the same holds true with more than three dimensions). It sounds as if you're proposing this graph basically be flattened: you take a fully connected graph on n^3 vertices (drawn, say, as a regular polygon), map the nodes in your cube to its vertices, and then delete all edges in the fully connected graph that don't correspond to an edge present in the cube.
I have further questions, but they hinge on whether or not I've understood you correctly. Is the above so far a fair summary?
You might like the book "The End of Time" by Julian Barbour. It's about an alternative view of physics where you rearrange all of the equations to not include time. The book describes the result sort of similarly to what you're suggesting, where the system is defined as the relationship between things and the evolution of those relationships and not precise locations and times.
This (at least your concrete description) seems inconsistent with the theory of relativity, since it has "points of time" with fully defined realities at those points. This is more like Newtonian absolute time.
(Note: this is anywhere between crackpot and inspiring, based on the people I talked to before. I am not a physicist.)
I have been thinking about a model of physics that is fundamentally different from the ones I have been taught in school and university. It is not a theory, because it does not make predictions. It is a different way of looking at things. I have found that this made a lot of things we normally consider weird a lot easier to understand.
Almost every model of physics I have read of so far is based on the idea that reality consists of stuff inside a coordinate system, and the only question is the dimensionality of the coordinate system. Relativity talks about bending space, but it still treats the existence of space as the norm. But what if there were no dimensions at all?
Rationale
If we assume that the universe is computable, then dimension-based physics, while humanly intuitive, is unnecessarily complicated. To simulate dimension-based physics, one first needs to define real numbers, which is complicated and requires that numbers be stored with practically infinite precision. Occam's Razor argues against this.
A graph model in contrast would be extremely simple from a computational point of view: a set of nodes, each with a fixed number of attributes, plus a set of connections between the nodes, suffices to express the state of the universe. Most importantly, it would suffice for the attributes of nodes to be simple booleans or natural numbers, which are much easier to compute than real numbers. Additionally, transition functions to advance in time would be easy to define as well, since they could just take the form of a set of if-then rules that are applied to each node in turn. (These transition functions roughly correspond to physical laws in more traditional physical theories.)
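A minimal sketch of what such a state plus if-then transition rule could look like (the specific graph and rule here are invented purely for illustration):

```python
# State of the "universe": a set of nodes with boolean attributes and a
# set of connections, advanced by an if-then rule applied to every node.
edges = {(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)}   # undirected connections
nodes = range(4)
state = {0: True, 1: False, 2: False, 3: False}     # boolean attributes

def neighbors(n):
    return [b if a == n else a for a, b in edges if n in (a, b)]

def step(state):
    # If-then rule (arbitrary choice): a node turns on iff exactly one
    # of its neighbors is currently on.
    return {n: sum(state[m] for m in neighbors(n)) == 1 for n in nodes}

for _ in range(3):
    state = step(state)
    print(state)
```

Even this throwaway rule settles into a stable repeating pattern after one step, which is loosely the sense in which "particles" below are stable patterns rather than atomic entities.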
Idea
Model reality as a graph structure. That is to say, reality at a point of time is a set of nodes, a set of connections between those nodes, and a set of attributes for each node. There are rules for evolving this graph over time that might be as simple as those in Conway's game of life, but they lead to very complex results due to the complicated structure of the graph.
Connections between nodes can be created or deleted over time according to transition functions.
What we call particles are actually patterns of attributes on clusters of nodes. These patterns evolve over time according to transition functions. Also, since particles are patterns instead of atomic entities, they can in principle be created and destroyed by other patterns.
Our view of reality as (almost) 3-dimensional is an illusion created by the way the nodes connect to each other. This works if the connections of an arbitrarily large graph (a set of vertices and a set of edges) satisfy the following criterion:
- There exists a mapping f(v) of vertices to (x,y,z) coordinates such that for any pair of vertices m,n, the Euclidean distance between f(m) and f(n) is approximately equal to the length of the shortest path between m and n (inaccuracies are fine so long as the distance is small, but the approximation should be good at larger distances).
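One caveat worth checking with a quick stdlib-only sketch: on a plain cubic grid graph, the shortest-path distance is the Manhattan distance, which overshoots the Euclidean distance by up to a factor of sqrt(3) along diagonals, so the criterion above needs either extra connections or a fairly loose notion of "approximately equal":

```python
from collections import deque
from itertools import product
from math import dist

# Compare shortest-path (hop) distance with Euclidean distance on a
# cubic grid graph, under the obvious mapping f(v) = its coordinates.
n = 5
def neighbors(v):
    for axis in range(3):
        for d in (-1, 1):
            w = list(v)
            w[axis] += d
            if 0 <= w[axis] < n:
                yield tuple(w)

def hops(src, dst):
    # Breadth-first search for the shortest path length.
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        v, d = queue.popleft()
        if v == dst:
            return d
        for w in neighbors(v):
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))

a, b = (0, 0, 0), (4, 4, 4)
# Along the main diagonal, hop distance exceeds Euclidean by sqrt(3).
print(hops(a, b), dist(a, b))   # 12 hops vs Euclidean 4*sqrt(3) ~ 6.93
```

Along the axes the two distances agree exactly, so the mismatch is direction-dependent; this is the sort of anisotropy the commenters above are worried about.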
A dimensionless graph model would have no contradiction between quantum physics and relativity. Quantum effects happen when patterns (particles) spread across nodes that still have connections between them besides those connections that make up the primary 3D grid. This also explains why quantum effects exist mostly on small scales: the pattern enforcing 3D grid connections tends to wipe out the entanglements between particles. Space dilation happens because the patterns caused by high speed travel cause the 3D grid pattern to become unstable and the illusion that dimensions exist breaks down. There is no contradiction between quantum physics and relativity if the very concept of distance is unreliable. Time dilation is harder to explain, but can be done. This is left as an exercise for the reader, since I only really understood this graph-based point of view when I realised how that works, and I don't want to spoil the aha moment for you.
Note
This is not really a theory. I am not making predictions, I provide no concrete math, and this idea is not really falsifiable in its most generic forms. Why do I still think it is useful? Because it is a new way of looking at physics, because it makes everything much easier and more intuitive to understand, and because it makes all the contradictions go away. I may not know the rules by which the graph needs to propagate in order for this to match up with experimental results, but I am pretty sure that someone more knowledgeable in math can figure them out. This is not a theory, but a new perspective under which to create theories.
Also, I would like to note that there are alternative interpretations for explaining relativity and quantum physics under this perspective. The ones mentioned above are just the ones that seem most intuitive to me. I recognize that having multiple ways to explain something is a bad thing for a theory, but since this is not a theory but a refreshing new perspective, I consider this a good thing.
I think that this approach has a lot of potential, but it is difficult for humans to analyse because our brains evolved to deal with 3D structures very efficiently but are not at all optimised to handle arbitrary graph structures with any efficiency. For this reason, coming up with an actual mathematically complete attempt at a graph-based model of physics would almost certainly require computer simulations for even simple problems.
Conclusion
Do you think the idea has merit?
If not, what are your objections?
Has research on something like this maybe already been done, and I just never heard of it?