30.5.11

A Computational Explanation of the Universe and the Big Bang




"What gave these men the right to be considered philosophers, unlike the other astronomers, geographers and doctors who were active especially in the latter half of the period, was their common assumption that the world possessed some kind of integral unity and determinability which could be understood and explained in rational terms. A more important debt to myth appears in the central presupposition that the world is coherent and intelligible, is somehow a unity in spite of the diversity of its appearance."  (Friedrich Nietzsche, 1890 about the Msytic Philosophers)

http://ssrn.com/abstract=1855496

In the 1940s, von Neumann worked on cellular automata as an abstraction of self-replication. Von Neumann's ideas about the propagation of information from parent cells to the next cycles of a cellular automaton could be an explanation of the geometry of the space-time grid, the limitation on the speed of light, Heisenberg's uncertainty principle, the principles of quantum theory and relativity, the elementary particles of physics, why the universe is expanding, and the list goes on. If this simple mechanism could explain so many things, why did it not become a prominent field of research? The answer is very simple but at the same time quite unexpected: one of the applications of von Neumann's post-war study of cellular automata was cryptography; therefore his results were classified and are still kept as top secret by the US government. However, it is time to look at this subject from a different point of view: can this mechanism be used to explain the physical universe? We are more interested in the secret of existence than in the encryption and decryption of text or data: where did all these galaxies, stars and cosmological objects come from, when did it start, and what was it like at the beginning of time and space?
Some Constants
Some constants of the physical universe have never changed since the beginning of existence:
· Gravitational constant
· Planck’s constant
· The limiting velocity of propagation of information
· The smallest distance of existence
· The shortest moment of time
All of these imply the simplicity of the initial structure of the physical universe.
Before going into the explanation of the cellular automaton concept, which is the subject of this article, let us give some figures that will help us understand the scales involved:
Planck’s constant: The Planck constant was first described as the proportionality constant between the energy (E) of a photon and the frequency (f) of its associated electromagnetic wave: E = h·f.
In 1923, Louis de Broglie generalized this relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but any particle. This was confirmed by experiments soon afterwards.
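As a quick numerical sketch of these two relations (the photon frequency and the electron speed below are arbitrary illustrative values; the constants are the standard ones):

# Planck relations: photon energy E = h*f and de Broglie wavelength lambda = h/p
h = 6.62607015e-34                 # Planck constant, J*s (exact by SI definition)

f = 5.6e14                         # Hz, roughly the frequency of green light (illustrative)
E_photon = h * f                   # ~3.7e-19 J

m_e = 9.1093837015e-31             # electron mass, kg
v = 1.0e6                          # m/s, illustrative electron speed
wavelength = h / (m_e * v)         # de Broglie wavelength, ~7.3e-10 m

print(f"Photon energy:         {E_photon:.3e} J")
print(f"de Broglie wavelength: {wavelength:.3e} m")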

Planck’s length: In physics, the Planck length, denoted ℓP, is regarded by some physicists as the smallest meaningful unit of length. It is a base unit in the system of Planck units. The Planck length can be defined from three fundamental physical constants: the speed of light in a vacuum, the (reduced) Planck constant, and the gravitational constant.
Planck’s time: In physics, the Planck time, tP, is the unit of time in the system of natural units known as Planck units. It is the time required for light to travel, in a vacuum, a distance of 1 Planck length. The unit is named after Max Planck, who was the first to propose it.
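Both Planck units follow directly from those constants; a small sketch with the standard values of the reduced Planck constant, G and c:

import math

# Planck length l_P = sqrt(hbar*G/c^3) and Planck time t_P = l_P/c
hbar = 1.054571817e-34    # reduced Planck constant, J*s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8          # speed of light in a vacuum, m/s

l_P = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_P = l_P / c                      # ~5.4e-44 s

print(f"Planck length: {l_P:.3e} m")
print(f"Planck time:   {t_P:.3e} s")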

Cellular Automaton Explanation of the Big Bang


Cellular automata have their origin in theoretical systems described by John von Neumann and Stanislaw Ulam in the 1940s. The collaboration of von Neumann, considering the notion of self-replicating robots, and Ulam, exploring patterns in crystal growth, led to the first formal description of a cellular automaton. In general, cellular automata (CA) constitute an arrangement of finite state automata (FSA) that sit in positional relationships to one another, each FSA exchanging information with those FSAs to which it is positionally adjacent. In von Neumann's cellular automaton, the finite state machines (or cells) are arranged in a two-dimensional Cartesian grid, and each interfaces with the four surrounding cells. As von Neumann's cellular automaton was the first example to use this arrangement, it is known as the von Neumann neighbourhood.[1]
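A minimal sketch of such an arrangement in Python (the parity/XOR update rule and the grid size are illustrative choices only; von Neumann's original construction used a far richer 29-state rule):

import numpy as np

# A two-state CA on the von Neumann neighbourhood: each cell exchanges
# information only with the 4 orthogonally adjacent cells; edges wrap around.
# The update rule here is the simple parity (XOR) of the 4 neighbours, which
# after a power-of-two number of steps reproduces displaced copies of the
# seed pattern, a toy example of self-replication.
def step_von_neumann(grid):
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    return up ^ down ^ left ^ right

grid = np.zeros((16, 16), dtype=np.uint8)
grid[7:9, 7:9] = 1            # a small 2x2 seed pattern
for _ in range(4):
    grid = step_von_neumann(grid)
print(grid)                   # four copies of the seed, shifted by 4 cells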
A one-dimensional CA is illustrated in the figure below (based on [Mit96]).[3]


Illustration of a one-dimensional, 2-state CA (based on [Mit96]). Each cell can be in one of two states, denoted 0 and 1. The connectivity radius is r=1, meaning that each cell has two neighbors, one to its immediate left and one to its immediate right. Grid size is N=15. The rule table for updating the grid is shown on top. The grid configuration over one time step is shown at the bottom. Spatially periodic boundary conditions are applied, meaning that the grid is viewed as a circle, with the leftmost and rightmost cells each acting as the other's neighbor.
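In code, one update step of the grid described in this caption looks like the following sketch (the rule table used here is an arbitrary illustrative choice, not the one shown in the figure):

# One-dimensional, 2-state CA with connectivity radius r = 1 and spatially
# periodic boundary conditions: the grid is treated as a circle of N = 15 cells.
RULE = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    n = len(cells)
    # each cell looks at its left neighbor, itself and its right neighbor;
    # indices wrap around, so the leftmost and rightmost cells are neighbors
    return [RULE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

grid = [0] * 15
grid[7] = 1                  # a single 'on' cell in the middle
print(grid)                  # configuration at time t
print(step(grid))            # configuration at time t+1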
Another rule, named Rule 30, is a one-dimensional binary cellular automaton rule introduced by Stephen Wolfram in 1983.[2] Using Wolfram's classification scheme, Rule 30 is a Class III rule, displaying aperiodic, chaotic behaviour. In all of Wolfram's elementary cellular automata, an infinite one-dimensional array of two-state cells is considered, with each cell in some initial state. At discrete time intervals, every cell changes state simultaneously, based on its current state and the states of its two neighbors. For Rule 30, the rule set governing the next state of the center cell is:
current pattern              111  110  101  100  011  010  001  000
new state for center cell     0    0    0    1    1    1    1    0
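The name comes from reading that bottom row as a binary number: 00011110 in base 2 equals 30. A minimal sketch that runs this rule from a single live cell (the grid width and number of steps are arbitrary illustrative choices; the true Wolfram automaton is defined on an infinite array):

# Rule 30: the new state of each cell depends on the (left, center, right)
# pattern above it, using the table given in the text.
RULE_30 = {
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def rule30_step(row):
    n = len(row)
    return [RULE_30[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

width, steps = 63, 31
row = [0] * width
row[width // 2] = 1                            # single 'on' cell at the center
for _ in range(steps):
    print(''.join('#' if c else '.' for c in row))
    row = rule30_step(row)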


If we increase the number of cycles, we see that parts of the picture stabilize, but the overall complexity and unpredictability do not decrease. Note the complexity of the patterns, even though we have only used a one-dimensional CA.
For a space-time explanation we need two more dimensions on the horizontal axis, and we need to point out the correspondence between the Planck units and the cell dimensions of the automaton. Up to this point we have not needed any randomness or roll of the dice, as Einstein described it. All we needed to start was a three-dimensional cellular automaton with a simple rule. You may ask: if the rule is so simple, where does the complexity of the universe come from? My answer is that the Big Bang need not have happened only once, since every point of singularity at any coordinate in the space-time fabric is equivalent to a Big Bang, and if it happened once, why should others not happen again, stirring up some action or clearing away other action that would cancel some of the patterns.
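As a toy illustration only, a three-dimensional automaton of this kind can be written in a few lines; the grid size and the growth rule below are arbitrary choices and make no claim about any actual rule of nature:

import numpy as np

# Toy 3-dimensional CA: each cell has 6 face-neighbours (the 3D analogue of
# the von Neumann neighbourhood). A dead cell switches on when exactly one
# neighbour is on, so a single seed cell grows outward step by step, loosely
# analogous to an expanding front; boundaries are periodic.
def step3d(space):
    neighbours = sum(np.roll(space, shift, axis)
                     for axis in range(3) for shift in (1, -1))
    born = (space == 0) & (neighbours == 1)
    return (space | born).astype(np.uint8)

space = np.zeros((21, 21, 21), dtype=np.uint8)
space[10, 10, 10] = 1                     # the single seed cell
for t in range(5):
    space = step3d(space)
    print(f"step {t + 1}: {int(space.sum())} active cells")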
We believe that the information contained in the ultimate point of singularity was a limited set of rules applied to the single cell which marked the beginning of time and space: a moment of time before which we cannot even think, a location in space beyond which we cannot imagine, the seed of existence which caused the ultimate explosion, resulting in the growth of the space-time fabric (which is still growing) at the speed of light. All we need is a simple rule which produces the white noise we observe when we turn on an untuned TV.
All of these random-looking patterns are exactly what the universe was like in the first moments after the Big Bang. Our view of the formation of the space-time fabric (which we think is still being formed) is nothing more than the patterns of a cellular automaton with a tiny cell size, whose computational power grows together with the number of cells. From this initial assumption we can easily conclude that the computation of cell states must have taken place within the cells, since after the initial few trillion steps, each lasting one Planck time (about 5×10^-44 seconds), the number of cells was already so huge that only parallel processing could handle it. The idea that all space-time information was stored in the initial singularity does not seem logical. The information needed for the complexity of the universe as we know it now indicates that after the initial “Big Bang”, it took further phases of “smaller bangs” to produce the more elaborate and more sophisticated patterns of nature, including human existence and self-awareness.
Here is a possible continuation of the story, which we may describe as “quality improvement” of the initial product called “Existence”:
In the second phase, nuclear reactions took place in the interiors of the stars and galaxies formed in the first phase. These are responsible for the formation of essential elements like hydrogen, carbon, oxygen and possibly iron, the basic building blocks of planets like Earth. Almost all of the elements that we know of are the result of this phase.
The dominant activity of the third phase was chemical reactions, evolving into organic materials, and possibly DNA and the polymerase enzyme. This self-replicating pair may have come into existence on the surface of the Earth, or it may have been carried by a comet or meteorite from outer space. In either case, this was the start of another type of big bang.
The fourth phase was the evolution of life. This time, the initial DNA-polymerase pair started a pattern of self-replication, giving way to the first living organisms, then multicellular ones, and evolutionary processes resulting in the complexity and diversity of life on Earth. In the meantime, the activities of the previous phases did not die out or stop. One example of a first-phase activity which supports the subsequent phases is the appearance of a huge planet in Earth's solar system, Jupiter, which keeps the asteroid belt from being drawn in by the pull of the Sun, colliding with and destroying the cradle of life: Earth.
The fifth phase was the evolution of nervous systems: the ability to think, to react to external stimuli, and to survive in conditions that could not be predicted in advance. Another interesting fact is that each formation of a brain is itself a form of “Big Bang”, taking place in the embryos of developed species, in which billions of cells are produced identical to the first cell that differentiated from the others and started to divide and form the brain. This phenomenon obeys exactly the same set of rules as the first Planck cell did in the first phase. The brain gave the organism the ability to adapt to conditions that could not be programmed into the genes.
The sixth phase was the evolution of consciousness: awareness of existence, the ability of speech and communication, the development of culture, philosophy and science. Religion was also a product of this stage, resulting from the fact that the human mind is in constant search of an explanation for its own existence. We may therefore imagine parallel universes being created, as in the initial big bang, each only as complicated as the brain at its center, since these observers determine, and are at the same time shaped by, the complexities of their universe. Human consciousness has the ultimate complexity needed to perceive the common findings of a single miraculous reality. This is when the human asks the question “To be or not to be”.
The seventh phase is the conquest of the secret of the universe and of existence. This is where we are.
30.5.2011, Tirana, Albania
Mehmet Zirek



[1] Weiss, Robin, http://marina.geo.umn.edu/rweiss/projects/GOLPaper.pdf; 20.4.2009
[2] Wolfram, S. (1983). "Statistical mechanics of cellular automata". Rev. Mod. Phys. 55: 601–644. doi:10.1103/RevModPhys.55.601.
[3] Sipper, Moshe, http://www.cs.bgu.ac.il/~sipper/ca.html

28.5.11

Project Proposal to NASA

If I were NASA, I would think of sending a non-human lunar mission to install robotic lighting instruments on the visible (near) side of the Moon, so that terrestrial telescopes could detect messages, information and "even advertisements", especially when these instruments are shadowed by the Earth during the crescent phase.

A project of this type might cost around 300 million USD (roughly the cost of a bridge or a highway) and would have immense returns of different kinds, including:

1- Increased awareness of public about Space missions
2- Proof of Lunar landing
3- Advertisements via radio controlled light emissions (text or even simple video)
4- Research data on Lunar installation of renewable energy systems for a future moon station

Any of the above would be worth the cost, but above all is
5- Having the honor of first permanently operating lunar system...

The project would involve activating solar panels to collect energy during the phase of solar exposure and turning on the lighting instruments during the shadowed phase. All it would require is a NASA radio control station to operate it (or them) and telescope(s) for the public to observe the operation.