
The Big Bang Experiment – An Overview


For a very long time, scientists have pondered how and when the universe formed and what types of matter and energy filled it. Cosmology is the scientific study of the large-scale properties of the Universe as a whole. It endeavors to use the scientific method to understand the origin, evolution and ultimate fate of the entire Universe. Like any field of science, cosmology involves the formation of theories or hypotheses about the universe which make specific predictions for phenomena that can be tested with observations. Depending on the outcome of the observations, the theories will be abandoned, revised or extended to accommodate the data.

The Big Bang Theory is currently the prevailing scientific explanation for the origin and evolution of our Universe. It was first proposed in 1927 by a Belgian priest named Georges Lemaître. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The Big Bang theory is an effort to explain what happened during and after that moment. Other theories have also been offered, but the Big Bang has the widest acceptance.

The age of the universe is the time elapsed between the Big Bang and the present day. NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) project estimates the age of the universe to be between 13.5 and 14.0 billion years, an uncertainty range obtained from the agreement of a number of scientific research projects. Improved scientific instruments and methods have made it possible to measure the age of the universe with great accuracy. These projects included background radiation measurements and several independent ways of measuring the expansion of the universe: background radiation measurements give the cooling time of the universe since the Big Bang, while expansion measurements give accurate data from which to calculate the universe’s age.

Artist’s depiction of the WMAP satellite gathering data to help scientists understand the Big Bang.

Scientists have now set out to conduct an experiment on a miniature version of the “Big Bang”, recreating the conditions a few moments after the event. They hope to find answers to questions about black holes, dark matter and why the universe appears the way it does.


The term Big Bang generally refers to the idea that the universe has expanded from a primordial hot and dense initial condition at some finite time in the past, and continues to expand to this day. The Big Bang theory developed from observations of the structure of the universe and from theoretical considerations. In 1912 Vesto Slipher measured the first Doppler shift of a “spiral nebula” (spiral galaxy), and soon discovered that almost all such nebulae were receding from Earth. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from Albert Einstein’s equations of general relativity, showing that the universe might be expanding, in contrast to the static universe model advocated by Einstein. In 1924, Edwin Hubble’s measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Hubble went on to discover a correlation between distance and recession velocity, now known as Hubble’s law. In 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, predicted that the recession of the nebulae was due to the expansion of the universe. In 1931 Lemaître went further and suggested that the evident expansion in forward time required that the universe contracted backwards in time, and would continue to do so until it could contract no further, bringing all the mass of the universe into a single point, a “primeval atom”. The discovery and confirmation of the cosmic microwave background radiation in 1964 secured the Big Bang as the best theory of the origin and evolution of the cosmos. Huge strides in Big Bang cosmology have been made since the late 1990s as a result of major advances in telescope technology as well as the analysis of copious data from satellites such as COBE, the Hubble Space Telescope and WMAP.
Cosmologists now have fairly precise measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.

Hubble Deep Field Image: The famous "Deep Field Image" taken by the Hubble Space Telescope


According to the standard Big Bang theory, our universe sprang into existence as a “singularity” around 13.7 billion years ago; at that instant there were no differentiated planets, stars or galaxies. Singularities are thought to exist at the core of “black holes”, areas of gravitational pressure so intense that finite matter is actually squeezed into infinite density (a mathematical concept which truly boggles the mind).

After its initial appearance some 13.7 billion years ago, this compact primordial soup blasted apart with huge force, and matter was hurled in all directions. The Big Bang theory predicts that the early universe was a very hot place. The early hot, dense phase is itself referred to as “the Big Bang”, and is considered the “birth” of our universe. The universe was filled homogeneously and isotropically with an incredibly high energy density, huge temperatures and pressures, and was very rapidly expanding and cooling.

Approximately 10⁻³⁵ seconds into the expansion, a phase transition caused cosmic inflation, during which the universe grew exponentially. After inflation stopped, the universe consisted of a quark-gluon plasma, as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle-antiparticle pairs of all kinds were being continuously created and destroyed in collisions, leading to a very small excess of quarks and leptons over antiquarks and antileptons. This resulted in the predominance of matter over antimatter in the present universe.
The universe continued to grow in size and fall in temperature, hence the typical energy of each particle was decreasing. At about 10⁻⁶ seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was now no longer high enough to create new proton-antiproton pairs. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons.
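The freeze-out temperatures quoted above follow from a simple rule of thumb: pair production of a particle with rest energy mc² stops once the thermal energy k_BT falls well below mc². A minimal sketch of that estimate (my own illustration, not from the article; the rest energies are standard values):

```python
# Rough rule of thumb: pairs of a particle with rest energy E can no longer
# be created thermally once k_B * T drops well below E, i.e. T ~ E / k_B.
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

def pair_freezeout_temp(rest_energy_ev):
    """Approximate temperature (K) below which thermal pair creation stops."""
    return rest_energy_ev / K_B_EV_PER_K

proton_T = pair_freezeout_temp(938.3e6)    # proton rest energy ~938 MeV
electron_T = pair_freezeout_temp(0.511e6)  # electron rest energy ~511 keV

print(f"proton-antiproton pairs freeze out below ~{proton_T:.1e} K")
print(f"electron-positron pairs freeze out below ~{electron_T:.1e} K")
```

The ordering matches the narrative: proton-antiproton creation ends first (around 10¹³ K), and electron-positron creation ends much later, near a few billion kelvin.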
A few minutes into the expansion, when the temperature was about a billion kelvin, neutrons combined with protons to form the universe’s deuterium and helium nuclei in a process called Big Bang Nucleosynthesis (BBN).

Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest mass energy density of matter came to gravitationally dominate that of the photon radiation. After about 379,000 years the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is known as the cosmic microwave background (CMB) radiation.
WMAP image of the cosmic microwave background radiation
Over a long period of time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The three possible types of matter are known as cold dark matter, hot dark matter and baryonic matter. The best measurements available show that the dominant form of matter in the universe is cold dark matter. The other two types of matter make up less than 18% of the matter in the universe.
About 300,000 years later, when things had cooled, light elements like hydrogen and helium were formed, and the continuing expansion increased the volume of the universe and shaped it into what we observe today.
A pie chart indicating the proportional composition of different energy-density components of the universe
The gravitational force brought them together in clumps. These clumps became the seeds of galaxies. After a long time, stars formed including the Sun, our closest star. We can see the remnants of this hot dense matter as the now very cold cosmic microwave radiation (CMB) which still pervades the universe.


Foundations of the Big Bang Model
The Big Bang Model rests on two theoretical pillars: General Relativity and The Cosmological Principle. A key concept of General Relativity is that gravity is no longer described by a gravitational “field” but rather it is supposed to be a distortion of space and time itself. After the introduction of General Relativity a number of scientists, including Einstein, tried to apply the new gravitational dynamics to the universe as a whole. At the time this required an assumption about how the matter in the universe was distributed. The simplest assumption to make is that if you viewed the contents of the universe with sufficiently poor vision, it would appear roughly the same everywhere and in every direction. That is, the matter in the universe is homogeneous and isotropic. This is called the Cosmological Principle. This assumption is being tested continuously as we actually observe the distribution of galaxies on ever larger scales.
The Big Bang model was a natural outcome of Einstein’s General Relativity as applied to a homogeneous universe. However, in 1917, the idea that the universe was expanding was thought to be absurd, so Einstein invented the cosmological constant as a term in his General Relativity theory that allowed for a static universe. In 1929, Edwin Hubble announced that his observations of galaxies outside our own Milky Way showed that they were systematically moving away from us with a speed proportional to their distance from us: V = H₀ × r, where V is the recession speed (km/s), H₀ is Hubble’s constant (km/s/Mpc) and r is the distance (Mpc).

The more distant the galaxy, the faster it was receding from us. The universe was expanding after all, just as General Relativity originally predicted! Hubble observed that the light from a given galaxy was shifted further toward the red end of the light spectrum the further that galaxy was from our galaxy (Red Shift).
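Hubble’s law above can be turned into numbers directly. The following sketch (my own illustration; the round value H₀ = 70 km/s/Mpc is an assumption, not taken from the article) computes a recession velocity and shows how the inverse of H₀, the "Hubble time", gives a first-order estimate of the universe’s age:

```python
# Hubble's law: v = H0 * r, with H0 in km/s per megaparsec (assumed ~70).
H0 = 70.0  # km/s/Mpc

def recession_velocity(distance_mpc):
    """Recession speed in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

# A galaxy 100 Mpc away recedes at H0 * 100 km/s.
print(recession_velocity(100))

# Running the expansion backwards: 1/H0 is a rough age for the universe.
KM_PER_MPC = 3.086e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years
hubble_time_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(f"Hubble time ~ {hubble_time_gyr:.1f} Gyr")
```

The resulting Hubble time of roughly 14 billion years is close to the WMAP age range quoted earlier, which is why the expansion rate is such a powerful age estimator.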

Supporting Evidence
We may never be able to confirm the Big Bang Theory with absolute certainty, but observations show that the universe is indeed still expanding at this very moment. Edwin Hubble’s discovery supports the idea of the Big Bang: he observed that galaxies are moving outward, growing ever farther from our vantage point. The key concept is that if the material in the universe is moving apart in all directions, then everything must have been closer together initially.
The National Aeronautics and Space Administration’s (NASA) Cosmic Background Explorer (COBE) spacecraft mapped the cosmic background radiation between 1989 and 1993. It verified that the distribution of intensity of the background radiation precisely matched that of matter that emits radiation because of its temperature, as predicted for the big bang theory. It also showed that the cosmic background radiation is not uniform, that it varies slightly. These variations are thought to be the seeds from which galaxies and other structures in the universe grew.
Another observation that supports the theory is the abundance of certain elements around us and in space. Hydrogen and helium are both abundant in the cosmos, which tells us that everything must have formed from these two elements, the main components of the initial condition from which the universe was born.
Refining the Theory
Evidence indicates that the matter that scientists detect in the universe is only a small fraction of all the matter that exists. For example, observations of the speeds with which individual galaxies move within clusters of galaxies show that there must be a great deal of unseen matter exerting gravitational forces to keep the clusters from flying apart.
Cosmologists now think that much of the universe – perhaps 99 percent – is dark matter, or matter that has gravity but that we cannot see or otherwise detect. Theorized kinds of dark matter include cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have given them names such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be nonradiating stars or planets, known as MACHOs (Massive Compact Halo Objects). An alternative model includes hot dark matter, where “hot” implies that the particles are moving very fast; the fundamental particles known as neutrinos are the prime example. If the inflationary version of Big Bang theory is correct, then the amount of dark matter that exists is just enough to bring the universe to the boundary between open and closed.
During the first few days of the universe, the universe was in full thermal equilibrium, with photons being continually emitted and absorbed, giving the radiation a blackbody spectrum. As the universe expanded, it cooled to a temperature at which photons could no longer be created or destroyed. The temperature was still high enough for electrons and nuclei to remain unbound, however, and photons were constantly “reflected” from these free electrons through a process called Thomson scattering. Because of this repeated scattering, the early universe was opaque to light.
When the temperature fell to a few thousand Kelvin, electrons and nuclei began to combine to form atoms, a process known as recombination. Since photons scatter infrequently from neutral atoms, radiation decoupled from matter when nearly all the electrons had recombined, at the epoch of last scattering, 379,000 years after the Big Bang. These photons make up the CMB that is observed today, and the observed pattern of fluctuations in the CMB is a direct picture of the universe at this early epoch. The energy of photons was subsequently red shifted by the expansion of the universe, which preserved the blackbody spectrum but caused its temperature to fall, meaning that the photons now fall into the microwave region of the electromagnetic spectrum. The radiation is thought to be observable at every point in the universe, and comes from all directions with (almost) the same intensity.
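The redshifting described above follows a simple scaling law: the blackbody temperature at redshift z is T(z) = T₀(1 + z). A quick sketch (my own illustration; z ≈ 1090 for last scattering is an assumed standard value, and T₀ is the present-day CMB temperature):

```python
# Expansion cools the background radiation as T(z) = T0 * (1 + z),
# preserving the blackbody shape while lowering its temperature.
T0 = 2.726           # present-day CMB temperature in kelvin
Z_LAST_SCATTER = 1090  # assumed redshift of the last-scattering surface

def cmb_temperature(z):
    """Blackbody temperature of the background radiation at redshift z."""
    return T0 * (1 + z)

print(f"{cmb_temperature(Z_LAST_SCATTER):.0f} K at last scattering")
```

Running the law backwards to z ≈ 1090 recovers a temperature of roughly 3000 K, consistent with the "few thousand kelvin" at which recombination occurred.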
In 1964, Arno Penzias and Robert Wilson accidentally discovered the cosmic background radiation while conducting diagnostic observations using a new microwave receiver owned by Bell Laboratories. Their discovery provided substantial confirmation of the general CMB predictions. In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), and the initial findings, released in 1990, were consistent with the Big Bang’s predictions regarding the CMB. COBE found a residual temperature of 2.726 K and in 1992 detected for the first time the fluctuations (anisotropies) in the CMB, at a level of about one part in 10⁵. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters. This satellite disproved several specific cosmic inflation models, but the results were consistent with inflation theory in general; it also confirmed that a sea of cosmic neutrinos permeates the universe and provided clear evidence that the first stars took more than a half-billion years to create a cosmic fog. Another satellite of this kind, the Planck Surveyor, will be launched within the next few years and will provide even more accurate measurements of the CMB anisotropies.
Big Bang in the Laboratory
Recent research indicates that cosmology is connected to particle physics in the same way as astrophysics is connected to nuclear physics. For the first time, physicists have created a new form of matter by recreating the conditions thought to have existed 10 microseconds after the Big Bang at the start of the universe. The European Laboratory for Particle Physics (CERN), based outside Geneva, conducted a series of experiments which smashed together heavy lead ions in a fireball to prove a theory that had only existed on paper for years. For the tests, engineers built a giant contraption called Large Hadron Collider (LHC).
The Large Hadron Collider (LHC) is the world’s largest and highest-energy particle accelerator, intended to collide opposing beams of protons or lead ions, each moving at about 99.9999991% of the speed of light.
The large circle, 17 miles in circumference, shows the LEP tunnel which houses the massive machine buried under 100 metres of rock.
The LHC, which is the heart of this experiment, took two decades to construct. It is the largest particle accelerator the world has seen. The LHC was built by the European Organization for Nuclear Research (CERN) with the intention of testing various predictions of high energy physics, including the existence of the hypothesized Higgs boson. It lies in a 17-mile tunnel buried under 100 metres of rock on the border of Switzerland and France, between the Jura Mountains and the Alps near Geneva.
An underground representation of the Large Hadron Collider, which stretches for 27 kilometres
The LHC is the world’s largest and highest-energy particle accelerator. The collider is contained in a circular tunnel, with a circumference of 27 kilometres (17 miles), at a depth ranging from 50 to 175 metres underground. The 3.8m wide concrete-lined tunnel, constructed between 1983 and 1988, was formerly used to house the Large Electron-Positron Collider. It crosses the border between Switzerland and France at four points, with most of it in France. The collider tunnel contains two adjacent parallel beam pipes that intersect at four points, each containing a proton beam, which travels in opposite directions around the ring.

Inside the Large Hadron Collider

Some 1,232 dipole magnets keep the beams on their circular path, while an additional 392 quadrupole magnets are used to keep the beams focused, in order to maximize the chances of interaction between the particles at the four intersection points, where the two beams will cross. In total, over 1,600 superconducting magnets are installed, with most weighing over 27 tonnes. Approximately 96 tonnes of liquid helium is needed to keep the magnets at their operating temperature of 1.9 K, making the LHC the largest cryogenic facility in the world at liquid helium temperature.

As the protons are accelerated from 450 GeV to 7 TeV, the field of the superconducting dipole magnets will be increased from 0.54 to 8.3 tesla (T). The protons will each have an energy of 7 TeV, giving a total collision energy of 14 TeV (2.2 µJ). At this energy the protons move at about 99.999999% of the speed of light. It will take less than 90 microseconds (µs) for a proton to travel once around the main ring – about 11,000 revolutions per second. Rather than continuous beams, the protons will be bunched together into 2,808 bunches, so that interactions between the two beams take place at discrete intervals, never shorter than 25 nanoseconds (ns) apart. Prior to being injected into the main accelerator, the particles are prepared by a series of systems that successively increase their energy. The first system is the linear particle accelerator LINAC 2, generating 50 MeV protons, which feeds the Proton Synchrotron Booster (PSB). There the protons are accelerated to 1.4 GeV and injected into the Proton Synchrotron (PS), where they are accelerated to 26 GeV. Finally the Super Proton Synchrotron (SPS) is used to further increase their energy to 450 GeV before they are at last injected (over a period of 20 minutes) into the main ring. Here the proton bunches are accumulated, accelerated (over a period of 20 minutes) to their peak 7 TeV energy, and finally stored for 10 to 24 hours while collisions occur at the four intersection points.
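The beam figures above hang together through basic relativistic kinematics: the Lorentz factor is γ = E/mc², the speed follows from β = √(1 − 1/γ²), and the revolution rate is the speed divided by the ring circumference. A back-of-envelope check (my own sketch; the proton rest energy and the 26.659 km circumference are standard values I am assuming, not quoted in the article):

```python
# Check the 7 TeV beam numbers from special relativity.
PROTON_REST_GEV = 0.938272  # proton rest energy, GeV
RING_KM = 26.659            # LHC circumference, km (assumed design value)
C_KM_S = 299_792.458        # speed of light, km/s

def beam_kinematics(energy_tev):
    gamma = energy_tev * 1000 / PROTON_REST_GEV  # Lorentz factor E / (m c^2)
    beta = (1 - 1 / gamma**2) ** 0.5             # speed as a fraction of c
    revs_per_s = beta * C_KM_S / RING_KM         # revolution frequency
    return gamma, beta, revs_per_s

gamma, beta, revs = beam_kinematics(7.0)
print(f"gamma ~ {gamma:.0f}")
print(f"beta  ~ {beta:.9f} c")
print(f"~{revs:.0f} revolutions per second")
```

The results reproduce the article’s figures: β ≈ 0.999999991 c (the 99.9999991% quoted earlier) and roughly 11,000 laps per second, i.e. just under 90 µs per lap.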
Six detectors have been constructed at the LHC, located underground in large caverns excavated at the LHC’s intersection points. Two of them, the ATLAS (A Toroidal LHC Apparatus) experiment and the Compact Muon Solenoid (CMS), are large, general purpose particle detectors. A Large Ion Collider Experiment (ALICE) and LHCb (LHC-beauty) have more specific roles, and the last two, TOTEM (Total Cross Section, Elastic Scattering and Diffraction Dissociation) and LHCf (LHC-forward), are much smaller and serve very specialized research.
The four main points of interest that will be utilised during the LHC experiment
A summary of the main detectors:
ATLAS – one of two so-called general purpose detectors. It contains a series of ever-larger concentric cylinders around the central interaction point where the LHC’s proton beams collide. ATLAS will be used to look for signs of new physics, including the origins of mass and extra dimensions.



Assembly and installation of the ATLAS
CMS – the other general purpose detector will, like ATLAS, hunt for the Higgs boson and look for clues to the nature of dark matter.
View of the CMS

ALICE – will study a “liquid” form of matter called quark-gluon plasma that existed shortly after the Big Bang.
View of ALICE
LHCb – equal amounts of matter and anti-matter were created in the Big Bang. LHCb will try to investigate what happened to the “missing” anti-matter.
The aim of this experiment is to establish a better understanding of the origin of the Universe, as well as to shed light on concepts such as dark matter and antimatter. Scientists also hope to find the elusive Higgs boson, the so-called “God Particle”.
On September 10, 2008, the Large Hadron Collider (LHC) switched on in Geneva and the first beam was circulated through the collider. CERN successfully fired the protons around the tunnel in stages, three kilometres at a time. The particles were fired in a clockwise direction into the accelerator and successfully steered around it. After a series of trial runs, two white dots flashed on a computer screen, showing that the protons had traveled the full length of the collider. CERN next successfully sent a beam of protons in a counterclockwise direction, taking slightly longer at one and a half hours due to a problem with the cryogenics.
A simulated lead-ion collision within the LHC device

Beams of protons will be accelerated in opposite directions through the ring-shaped tunnel, which is supercooled to just 1.9 degrees above absolute zero (minus 271°C). Reaching velocities of 99.99% of the speed of light, each beam will pack as much energy as a Eurostar train travelling at 150 kilometres per hour. That is enough to melt 500 kilograms of copper. The particles will be brought together in four huge “detectors” placed along the ring. Each detector is like a giant microscope, designed to probe deeper into the heart of matter than has ever been possible before. A chain of smaller accelerators, built for earlier projects, is first used to speed up the proton beams to the point where they can be injected into the LHC. The start of the process is a bottle of hydrogen gas no bigger than a fire extinguisher. Hydrogen atoms are stripped of their electrons to produce streams of protons that are fed into accelerators of increasing size. The last link in the chain before the LHC, the Super Proton Synchrotron (SPS), is itself buried underground and covers a distance of seven kilometres.
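The Eurostar comparison above can be sanity-checked: the stored beam energy is the number of bunches times the protons per bunch times the energy per proton. A rough sketch (my own illustration; the bunch count comes from earlier in the article, while the protons-per-bunch figure of 1.15 × 10¹¹ and the ~400 tonne train mass are my own assumed values):

```python
# Energy stored in one 7 TeV proton beam vs. a moving train.
EV_TO_J = 1.602e-19  # joules per electron volt

bunches = 2808            # bunches per beam (from the article)
protons_per_bunch = 1.15e11  # assumed nominal LHC value
beam_energy_j = bunches * protons_per_bunch * 7e12 * EV_TO_J
print(f"one beam stores ~{beam_energy_j / 1e6:.0f} MJ")

train_mass_kg = 400_000   # assumed ~400 tonne train
v = 150 / 3.6             # 150 km/h in m/s
train_energy_j = 0.5 * train_mass_kg * v**2  # kinetic energy (1/2) m v^2
print(f"a 400 t train at 150 km/h carries ~{train_energy_j / 1e6:.0f} MJ")
```

Under these assumptions both figures come out in the region of 350 MJ, so the comparison holds up to within rounding.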

Switching on the LHC will create conditions that existed a fraction of a second after the Big Bang. Timing between the SPS and the LHC has to be accurate to within a fraction of a nanosecond. The first ‘switch on’ involved transferring a beam from the SPS to the LHC so that it circulated around the machine in a stable fashion. After this has been successfully accomplished, another beam will be sent spinning in the opposite direction. The final step will be to boost the energy of each beam to a record five tera electron volts (TeV). One TeV is equal to a trillion (1,000,000,000,000) electron volts. Eventually the aim is to raise the energy level to seven TeV, probably by 2010.
The group said they could send the beams in opposite directions simultaneously within months. Once physicists stabilize the proton beams and calibrate detectors, they hope to fire protons through tunnels near the speed of light and force them to collide.
It is in these conditions that scientists hope to find, fairly quickly, a theoretical particle known as the Higgs boson, named after British physicist Peter Higgs, who first proposed it in 1964 as the answer to the mystery of how matter gains mass. Without mass, the stars and planets in the universe could never have taken shape in the eons after the Big Bang, and life could never have begun – on Earth or, if it exists as many cosmologists believe, on other worlds either.
The collisions, in which both particle clusters will be travelling at nearly the speed of light, will be monitored on computers at CERN and laboratories around the world by scientists looking for, among other things, a particle that made life possible.

The elusive particle, dubbed the “Higgs boson” after British physicist Peter Higgs, who first postulated nearly 50 years ago that it must exist, is thought to be the mysterious factor that gives matter its mass. It could also provide evidence of dark matter, the invisible matter between galaxies.
The LHC physics program is mainly based on proton-proton collisions. However, shorter running periods, typically one month per year, with heavy-ion collisions are included in the program. While lighter ions are considered as well, the baseline scheme deals with lead ions. This will allow an advancement in the experimental program currently in progress at the Relativistic Heavy Ion Collider (RHIC). The aim of the heavy-ion program is to provide a window on a state of matter known as Quark-gluon plasma, which characterized the early stage of the life of the Universe.
A group of hackers popularly known as Group 2600 broke into the computer systems of the historic Large Hadron Collider project. No major damage was done; they deleted only one file from the Compact Muon Solenoid Experiment (CMS) system, which controls equipment crucial to the experiment. If the hackers had managed to enter the second computer network, they could have turned off many parts of the experimental machine. Sources said that the hackers breached the CMSMON system, which monitors data during the experiment.
Scientists at CERN had to shut down the huge particle collider just 10 days after the project kicked off, due to a technical fault that leaked helium into the tunnel housing the machine. Following the fault, CERN reported that the collider would not be able to start until spring 2009. According to CERN, the helium leakage into the collider’s tunnel was probably due to a faulty electrical connection between two of the accelerator’s massive magnets. However, to understand the fault fully, the team of scientists will now have to warm the affected sections of the tunnel from their operating temperature of minus 271.3 degrees Celsius back to room temperature and open up the massive magnets for inspection. The investigation and repairs may take three to four weeks, followed by CERN’s winter maintenance period; hence CERN hopes to restart the complex project by early spring 2009.

Once the LHC starts functioning again, CERN will resume sending particle beams around the 27 km long tunnel. The next step of the experiment is to smash beams travelling in opposite directions into each other at nearly the speed of light. The beams would recreate the heat and energy of the ‘Big Bang’ on a miniature scale. At full speed, the collider will generate 600 million collisions every second between subatomic particles called protons, which will detonate in bursts of new and formerly unknown types of particles. With the experiment, the CERN scientists expect to probe the phenomena of the ‘Big Bang’, which cosmologists believe to be behind the origin of our expanding universe.
In fact, we are all curious to know about the origin and formation of our universe, and we hope that through our scientists these experiments and theories will one day bring those hidden truths to light.








A special thanks to Dr. Keshav Mohan, Director, SVN College of Engineering, Mavelikara for suggesting this topic.

7 Responses to The Big Bang Experiment – An Overview

  1. This is an excellent summary of the current standard cosmological theory.
    However it does assume that the Hubble redshift is due to expansion and
    not due to some other tired-light model. It also assumes that the Big Bang
    model for the cosmic microwave background radiation is the only one possible.
    I disagree in that there is considerable evidence that the Big Bang model may be wrong. Here is a brief outline of some of the evidence.
    For all references, caveats and full details see arXiv 1009.0953:
    http://arxiv.org/abs/1009.0953 (it includes a table of contents,
    hyperlinks and several minor corrections) or see the JCos papers.

    A major difference between cosmologies in an expanding universe
    and that in a static universe is time dilation. Whereas a tired
    light process could explain the energy loss of photons it cannot
    produce the effect of time dilation on the rate of arrival of photons.
    In an expanding universe cosmology the equations for the distance
    modulus and for the angular size include a term, (1+z), to allow for
    time dilation. Since the similar equations for a static-universe
    cosmology do not include this term its presence (or absence) makes
    a suitable test for determining whether the universe is expanding.
    It is assumed that the static universe obeys the perfect cosmological
    principle: the same everywhere and at all times.

    Tolman surface brightness.
    Sandage and Lubin analyzed the surface brightness of early-type
    galaxies. A re-analysis using current Big Bang (BB) equations and
    combining the two color bands (and for the Sersic radius 2.0) gives
    an exponent of 2.16+/-0.13. The expected exponent is 4.
    The difference is attributed to luminosity evolution. A critical part of this
    analysis is the calibration of the absolute luminosity (and hence the SB) for
    the absolute radii of the galaxies. Thus BB is used to compute the
    radii of the distant galaxies. The surface brightness depends on the
    radius as SB = 9.29 + 2.83 log(absolute radius).
    Assuming that for a static universe the radii are all larger by a
    factor (1+z) then the static universe exponent is
    2.16 – 2.83/2.5 = 1.03(+/-0.14)
    which is in excellent agreement with the expected value of 1.
    Note that Lubin and Sandage claim their results are inconsistent
    with a static universe. However, they used their own tired-light model,
    which is different from the simple model used here.
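    The exponent arithmetic quoted above is easy to verify (a minimal sketch using only the values stated in this comment):

```python
# Tolman test: surface brightness scales as (1+z)^(-n).
# Re-analysed Big Bang exponent, as quoted above:
n_bb = 2.16            # +/- 0.13
# Slope of the SB-radius relation: SB = 9.29 + 2.83*log10(radius).
slope = 2.83
# If static-universe radii are larger by (1+z), the surface-brightness
# correction per unit log10(1+z), in exponent units, is slope/2.5:
n_static = n_bb - slope / 2.5
print(round(n_static, 2))  # 1.03, matching the quoted static-universe value
```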

    Angular size.
    Recently Lopez-Corredoira (2010) used 393 galaxies with a redshift
    range of 0.2 < z < 3.2 and found that, in agreement with much earlier
    work, the data were consistent with a Euclidean geometry and could
    not be fitted to an expanding universe.

    Type 1a supernovae.
    Here the analysis is more complex and is based on the assumption that
    these supernovae have constant energy and not constant peak
    luminosity. There is no observational difference between peak luminosity and
    total energy for nearby supernovae. The total energy is a product
    of the peak luminosity and the width of the light curve.
    The critical part of the analysis is that the distant supernovae have
    been selected to have a very small variation in their peak luminosity
    computed with BB. In a static universe this means that the selected
    supernovae are biased to a lower luminosity (by a factor of 1+z).
    Then if on average their total energy is constant then their widths
    are biased to larger values. On average a selection bias of (1+z) to
    lower luminosity corresponds to a selection bias of (1+z) in width.
    Exactly what is observed. A fit of total energy versus redshift gives
    (19.070 +/- 0.042) + (0.047 +/- 0.089)*2.5 log(1 + z), a slope
    consistent with zero. Thus no evidence of dark energy!

    Gamma ray bursts.
    A remarkable characteristic of gamma ray bursts is that the raw
    observations of the various time measures (burst duration, spike rise
    time and spike rate) do not show any significant variation with
    redshift (out to z=6). The standard explanation is that there is an inverse
    relationship between absolute luminosity and the time measures and
    the lack of variation in the time measures is due to selection effects.
    In a static universe the lack of variation is expected and the
    relationship with absolute luminosity is spurious and due to the
    use of an incorrect distance modulus.

    Galaxy luminosity function.
    It is shown that E-Sa galaxies have a well defined luminosity
    distribution with a peak that has essentially the same shape at all
    redshifts but the position of the peak varies with redshift.
    When analyzed for a static cosmology the magnitude of this peak has a
    constant value independent of redshift with a Chi^2 of 6.1 for
    3 degrees of freedom.

    Quasar luminosity distribution.
    At a fixed redshift the SDSS quasars essentially have a power law
    distribution (exponential in magnitude). Since the distance modulus
    is additive and for a small range of redshifts is essentially constant
    it can be derived from the distribution of magnitudes within that
    redshift range. The sum of the probability of detection for each
    quasar in the range multiplied by the exponential of the luminosity function
    is set equal to the expected number of quasars. The only complication
    is the co-moving volume and density of the quasars. Assuming the
    reasonable assumption that the the static universe has the same
    volume as a function of z as BB and that the quasar density is
    constant the analysis shows a well defined preference for a static universe.
    A BB model can only fit the data if it has a density evolution.

    Quasar variability in time.
    Hawkins has analyzed the time variability of 800 quasars over time
    scales from 50 days to 28 years. He finds that there is no
    dependence of the time variability on redshift.

    The Butcher-Oemler effect.
    Butcher and Oemler observed that the fraction of blue galaxies in galactic clusters
    appears to increase with redshift. Andreon, Lobo & Iovino (2004)
    examined three clusters around z=0.7 and did not find clear-cut
    evidence for the effect. To quote one of their conclusions:
    "Twenty years after the original intuition by Butcher & Oemler,
    we are still in the process of ascertaining the reality of the
    Butcher-Oemler effect".

    David F. Crawford

    • Anjana Prasad says:

      @David F Crawford
      Thanks David for reviewing the article and giving me this important and detailed information.

      I made this study around 2-3 years ago out of interest and curiosity about the Big Bang experiment; there may well be more advanced information on this topic by now.


  2. Arun says:

    Excellent summary for Big Bang experiment.



  3. V Prasad says:

    V Good. Keep it up. Will be able to update this & attempt new such ventures

  4. dr.keshav mohan says:

    happy that you attracted comments from david. congrats. keep it up.
