It's not every day that we come across a paper that attempts to redefine reality. Vitaly Vanchurin, a physics professor at the University of Minnesota Duluth, does exactly that in a particularly eye-opening preprint uploaded to arXiv this summer. He suggests that we are living inside a vast neural network that governs everything around us. In other words, he wrote in the paper, it's "possible that the entire universe on its most fundamental level is a neural network."
For years, physicists have attempted to reconcile quantum mechanics and general relativity. The former holds that time is universal and absolute, while the latter argues that time is relative and woven into the fabric of space-time.
In his paper, Vanchurin argues that artificial neural networks can "exhibit approximate behaviors" of both universal theories. Since quantum mechanics "is a remarkably successful paradigm for modeling physical phenomena on a wide range of scales," he writes, "it is widely believed that on the most fundamental level the entire universe is governed by the rules of quantum mechanics and even gravity should somehow emerge from it."
"We are not just saying that the artificial neural networks can be useful for analyzing physical systems or for discovering physical laws, we are saying that this is how the world around us actually works," reads the paper's discussion. In that sense, it could be considered a proposal for a theory of everything, and as such, it should in principle be easy to prove wrong.
Most of the physicists and machine learning experts we reached out to declined to comment on the record, citing skepticism about the paper's conclusions. But Vanchurin leaned into the debate and elaborated on his proposal in a Q&A with Futurism.
Futurism: Your paper argues that the universe might fundamentally be a neural network. How would you explain your reasoning to someone who didn't know very much about neural networks or physics?
Vitaly Vanchurin: There are two ways to answer your question.
The first way is to start with a precise model of neural networks and then to study the behavior of the network in the limit of a large number of neurons. What I have shown is that equations of quantum mechanics describe pretty well the behavior of the system near equilibrium, and equations of classical mechanics describe pretty well how the system behaves further away from equilibrium. Coincidence? Maybe, but as far as we know, quantum and classical mechanics is exactly how the physical world works.
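To make the near-equilibrium picture concrete, here is a minimal, purely illustrative Python sketch (not the formalism of Vanchurin's paper; every size, rate, and objective below is a hypothetical stand-in) that uses the loss as a crude proxy for distance from equilibrium: far from equilibrium the trainable variables move quickly, and near equilibrium they barely move at all.

```python
import numpy as np

# Illustrative toy only: a one-layer network trained by slightly noisy
# gradient descent, with the loss tracked as a rough proxy for
# "distance from equilibrium."
rng = np.random.default_rng(0)

w = rng.normal(size=(4, 4))               # trainable: weight matrix (hypothetical size)
b = rng.normal(size=4)                    # trainable: bias vector

x = rng.normal(size=(64, 4))              # fixed toy inputs
y = np.tanh(x @ rng.normal(size=(4, 4)))  # fixed toy targets

lr = 0.1
for step in range(2001):
    pred = np.tanh(x @ w + b)             # neuron activations
    err = pred - y
    loss = np.mean(err ** 2)
    # Backpropagate through tanh and take a slightly noisy gradient step.
    grad_pre = 2.0 * err * (1.0 - pred ** 2) / err.size
    w -= lr * (x.T @ grad_pre + 0.001 * rng.normal(size=w.shape))
    b -= lr * (grad_pre.sum(axis=0) + 0.001 * rng.normal(size=b.shape))
    if step % 500 == 0:
        print(f"step {step:4d}  loss (equilibrium proxy): {loss:.5f}")
```

Early iterations show large drops in the loss, while late iterations hover around a plateau, which is the loose sense in which the toy system "relaxes toward equilibrium."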
The second way is to start from physics. We know that quantum mechanics works pretty well on small scales and general relativity works pretty well on large scales, but so far we have not been able to reconcile the two theories in a unified framework. This is known as the problem of quantum gravity. Clearly, we are missing something big, and, to make matters worse, we do not even know how to handle observers. In the context of quantum mechanics this is known as the measurement problem, and in the context of cosmology it is known as the measure problem.
Then one might argue that there are not two, but three phenomena that need to be unified: quantum mechanics, general relativity, and observers. 99% of physicists would tell you that quantum mechanics is the main one, and everything else should somehow emerge from it, but nobody knows exactly how that can be done. In this paper I consider another possibility: that a microscopic neural network is the fundamental structure, and everything else, i.e. quantum mechanics, general relativity, and macroscopic observers, emerges from it. So far things look rather promising.
I first wrote a paper titled "Towards a theory of machine learning" simply to gain a better understanding of deep learning. The initial plan was to apply the methods of statistical mechanics to study the behavior of neural networks, but it turned out that, in certain limits, the learning (or training) dynamics of neural networks is very similar to the quantum dynamics we see in physics. At that time I was (and still am) on sabbatical leave, and I decided to explore the idea that the physical world is actually a neural network. The idea is definitely crazy, but is it crazy enough to be true? That remains to be seen.
You said in the paper that to disprove the theory, all one would need to do is find a physical phenomenon that cannot be described by neural networks. What do you mean by that? Why is doing so "easier said than done"?
Well, there are many "theories of everything," and most of them must be wrong. In my theory, everything you see around you is a neural network, so to prove it wrong all that is needed is to find a phenomenon that cannot be described by a neural network. But if you stop to think about it, that is a very difficult task, mainly because we know so little about how neural networks behave and how machine learning actually works. That was why I tried to develop a theory of machine learning in the first place.
Does your study take the observer effect into account, and how does it
relate to quantum mechanics?
There are two main schools of thought on quantum mechanics: the Everett (or many-worlds) interpretation and the Bohmian (or hidden-variables) interpretation. I have nothing new to say about the many-worlds interpretation, but I think I can contribute something to the hidden-variables theories. In the emergent quantum mechanics that I considered, the trainable variables (such as the bias vector and weight matrix) are the quantum variables, and the non-trainable states of individual neurons are the hidden variables. Note that the hidden variables can be very non-local, and so Bell's inequalities are violated. An approximate space-time locality is assumed, but strictly speaking every neuron can be connected to every other neuron, and so the system does not need to be local.
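A minimal sketch of that split, assuming a toy recurrent map and a stand-in objective of our own invention (nothing below is taken from the paper itself), might separate the two kinds of variables like this:

```python
import numpy as np

# Hypothetical sketch: the trainable bias vector and weight matrix stand
# in for the "quantum" variables, while the non-trainable state vector of
# the neurons stands in for the "hidden" variables.
rng = np.random.default_rng(1)
n = 8

w = rng.normal(scale=0.5, size=(n, n))   # trainable ("quantum"): weight matrix
b = rng.normal(scale=0.5, size=n)        # trainable ("quantum"): bias vector
state = rng.normal(size=n)               # non-trainable ("hidden"): neuron states

lr = 0.05
for step in range(500):
    # Fast dynamics: neuron states are updated by the activation map.
    new_state = np.tanh(w @ state + b)
    # Slow dynamics: w and b are nudged so the current state becomes a
    # fixed point of the map (a stand-in learning objective).
    err = new_state - state
    grad = err * (1.0 - new_state ** 2)  # derivative of tanh
    w -= lr * np.outer(grad, state)
    b -= lr * grad
    state = new_state

print("fixed-point residual:", np.linalg.norm(np.tanh(w @ state + b) - state))
```

The design point the sketch tries to capture is only the division of labor: the neuron states evolve on their own fast timescale, while gradient updates act on a separate, slower set of trainable variables.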
Would you mind expanding on how this idea relates to natural selection? How does natural selection play into the evolution of complex structures and biological cells?
What I am saying is very simple. Some structures (or subnetworks) of the microscopic neural network are more stable, and others are less stable. The more stable structures would survive the evolution, and the less stable structures would be exterminated. On the smallest scales I expect natural selection to produce some very low-complexity structures, such as chains of neurons, but on larger scales the structures would be more complicated. I see no reason why this process should be confined to a particular length scale, so the claim is that everything we see around us (particles, atoms, cells, observers, etc.) is the outcome of natural selection.
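As a loose illustration of that selection story, one could imagine a toy simulation along the following lines, where every detail (the stability score, the population size, the mutation rate) is a hypothetical stand-in rather than anything from the paper:

```python
import numpy as np

# Toy illustration only: a population of small "subnetworks" (random
# weight matrices) evolves under selection, where a subnetwork's
# "stability" is how strongly it damps small perturbations of its state.
rng = np.random.default_rng(2)
pop = [rng.normal(size=(4, 4)) for _ in range(20)]

def stability(w, trials=25):
    # Higher score = perturbations shrink more under the map tanh(w @ x).
    total = 0.0
    for _ in range(trials):
        x = rng.normal(size=4)
        d = 1e-3 * rng.normal(size=4)
        total -= np.linalg.norm(np.tanh(w @ (x + d)) - np.tanh(w @ x))
    return total / trials

for generation in range(30):
    ranked = sorted(pop, key=stability, reverse=True)
    survivors = ranked[:10]                  # stable structures survive
    mutants = [w + 0.05 * rng.normal(size=w.shape) for w in survivors]
    pop = survivors + mutants                # less stable structures are gone

print("mean stability after selection:",
      np.mean([stability(w) for w in pop]))
```

Over the generations, the mean stability of the population rises, which is the whole point of the analogy: no scale-specific mechanism is needed, only differential survival of more robust subnetworks.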
I was intrigued when you said in your first email that you might not understand everything yourself. What did you mean by that? Were you referring to the complexity of the neural network itself, or to something more philosophical?
I am indeed referring to the complexity of neural networks. I have not even had time to think about what the philosophical implications of the results might be.
Does this theory mean we're living in a simulation?
No, we live in a neural network, but we might never know the difference.
Reference(s): arXiv