A Review of Schmidt Accelerating Universe

DOI: 10.17577/IJERTV2IS110891




Manash Kumar Sharma

Department of Physics, Gargaon College, Sibsagar, Assam, India 785 686

Abstract

In this study we present a review of Schmidt's discovery of the accelerating universe. The connection with general relativity and cosmology is essential to reveal the other face of gravitation: that it can be effectively repulsive. The black hole connection can be related to the accelerating universe through the Hubble parameter, redshift and Hawking radiation. In this review, we report a comprehensive study of the accelerating universe: how the expansion is measured, the general relativity and cosmology background, and the black hole connection in the early universe.

Keywords: accelerating universe, general relativity, Hubble parameter, redshift, Hawking radiation, black hole

1. Introduction

The accelerating universe is the observation that the universe appears to be expanding at an increasing rate. In formal terms, this means that the cosmic scale factor has a positive second derivative, so that the velocity at which a distant galaxy is receding from us is continuously increasing with time. The first suggestion of an accelerating universe from observed data was in 1992, by Paál et al. In 1998, observations of Type Ia supernovae also suggested that the expansion of the universe has been accelerating since around redshift z ≈ 0.5. The 2006 Shaw Prize in Astronomy and the 2011 Nobel Prize in Physics were both awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for the 1998 discovery of the accelerating expansion of the Universe through observations of distant supernovae.

As the Universe expands, the density of radiation and ordinary and dark matter declines more quickly than the density of dark energy (see equation of state) and, eventually, dark energy dominates. Specifically, when the scale of the universe doubles, the density of matter is reduced by a factor of 8, but the density of dark energy is nearly unchanged (it is exactly constant if the dark energy is a cosmological constant).
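As a simple numerical illustration of this scaling (not part of the original analysis), the short Python sketch below dilutes an arbitrary matter density as $a^{-3}$ while holding a cosmological-constant dark energy density fixed; the factor-of-8 drop per doubling of the scale factor is immediate.

# Tiny sketch (assumed cosmological-constant dark energy, arbitrary units):
# how matter and dark-energy densities scale as the universe grows.
rho_m0, rho_de0 = 1.0, 1.0          # densities at scale factor a = 1
for a in (1, 2, 4, 8):
    rho_m = rho_m0 / a**3           # matter dilutes as a^-3
    rho_de = rho_de0                # a cosmological constant stays fixed
    print(f"a = {a}: matter = {rho_m:.4f}, dark energy = {rho_de:.4f}")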

Current observations indicate that the dark energy density is already greater than the mass-energy density of radiation and matter (including dark matter). In models where dark energy is a cosmological constant, the universe will expand exponentially with time from now on, coming closer and closer to a de Sitter spacetime. In this scenario the time it takes for the linear size scale of the universe to expand to double its size is approximately 11.4 billion years. Eventually all galaxies beyond our own local supercluster will redshift so far that it will become hard to detect them, and the distant universe will turn dark.
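The quoted doubling time can be checked with a back-of-the-envelope sketch. Assuming illustrative values H0 = 70 km/s/Mpc and Omega_Lambda = 0.7 (not taken from this paper), the late-time de Sitter rate is H_Lambda = H0·sqrt(Omega_Lambda) and the linear size doubles every ln(2)/H_Lambda:

import math

# Minimal sketch with assumed illustrative values (H0 = 70 km/s/Mpc, Omega_Lambda = 0.7).
KM_PER_MPC = 3.0857e19     # kilometres per megaparsec
SEC_PER_GYR = 3.156e16     # seconds per billion years

H0 = 70.0 / KM_PER_MPC             # Hubble constant in 1/s
H_lambda = H0 * math.sqrt(0.7)     # asymptotic de Sitter expansion rate

t_double = math.log(2.0) / H_lambda / SEC_PER_GYR
print(f"doubling time ~ {t_double:.1f} Gyr")   # ~11.6 Gyr, close to the ~11.4 Gyr quoted above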

The distribution of galaxies in the pencil-beam surveys of Broadhurst et al., which proved periodical across 8–10 consecutive steps in a flat dust model with q0 = 0.5, is found to reveal extended periodicity up to 16–17 phase-coherent steps, covering the total sample, in a flat, moderately inflationary model with q0 = −0.5 (vacuum/dust ratio 2/1). In the latter model the vacuum component helps to reach the critical density and lengthens the expansion time-scale. It is shown that the explanation of the found periodicity as a consequence of space compactification, as suggested by G. Paál twenty years ago in connection with apparent quasar periodicities, is still possible [1].

2. Flat dust model

Even more sinister could be the effects of cosmic dust, which could absorb light from distant supernovae and lead to their apparent faintness. However, the interstellar dust in our own galaxy absorbs more blue light than red, so it leaves a distinct reddening signature that two-filter observations should detect. The High-Z Team corrects for both the nearby and distant supernovae in the same way by using these color measurements, which should eliminate the effects of interstellar dust. The Supernova Cosmology Project argues that the dust effect is small and similar in the high and low redshift samples, so no net correction is needed. It is possible to imagine special dust that is not noticed nearby and that has the right size distribution to absorb all wavelengths equally [2]. Such gray dust would have to be smoothly distributed, because we do not see the increased scatter that patchy dust, thick enough to produce the observed dimming, would introduce. If this material exists, there is a powerful test that could discriminate between a cosmology dominated by a cosmological constant Λ and one in which specially constructed dust produces the dimming at redshift 0.5. Since a cosmological constant is a constant energy density, while the density of matter has been declining as (1+z)^3, by looking back to z = 1 we could observe the era (not so long ago) when matter was the most important constituent of the universe and the universe was decelerating. At those redshifts, the relation between redshift and flux would bend back toward brighter fluxes, while the effects of gray dust presumably would grow, or at least remain constant. To make accurate measurements of this effect will require discovering and making good measurements of redshift ~1 supernovae, whose light is redshifted into the infrared. The Next Generation Space Telescope may play an important role in this decisive test [3].
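The comparison underlying this test can be sketched numerically. The snippet below computes the extra faintness, in magnitudes, of supernovae in a flat Lambda-dominated model relative to a decelerating flat dust (Einstein-de Sitter) model at z = 0.5 and z = 1, using assumed illustrative parameters (H0 = 70 km/s/Mpc, Omega_m = 0.3, Omega_Lambda = 0.7). Comparing how this difference evolves with redshift against the behaviour expected from gray dust is the essence of the test described above.

import numpy as np

C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # assumed Hubble constant, km/s/Mpc

def lum_dist(z, omega_m, omega_l, n=10000):
    """Luminosity distance (Mpc) in a flat FLRW model, via a simple trapezoid rule."""
    zs = np.linspace(0.0, z, n)
    ez = np.sqrt(omega_m * (1.0 + zs)**3 + omega_l)
    comoving = (C_KM_S / H0) * np.trapz(1.0 / ez, zs)
    return (1.0 + z) * comoving

for z in (0.5, 1.0):
    d_lcdm = lum_dist(z, 0.3, 0.7)   # Lambda-dominated model
    d_dust = lum_dist(z, 1.0, 0.0)   # decelerating flat dust (Einstein-de Sitter) model
    extra_dimming = 5.0 * np.log10(d_lcdm / d_dust)   # magnitudes
    print(f"z = {z}: supernovae appear ~{extra_dimming:.2f} mag fainter than in the dust model")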

Accelerating expansion was announced less than 14 years ago by both the Supernova Cosmology Project (SCP), based at Berkeley Lab, and the competing High-z Supernova Search Team, a discovery that resulted in 2011 Nobel Prizes for the SCP's Saul Perlmutter and High-z Team members Brian Schmidt and Adam Riess. Acceleration may result from an unknown something dubbed dark energy, or dark energy may be just a way of saying we don't understand how gravity really works.

Figure 1. BOSS measures the three-dimensional clustering of galaxies at various redshifts, revealing their precise distance, the age of the universe at that redshift, and how fast the universe has expanded. The measurement uses a "standard ruler" based on the regular variations of the temperature of the cosmic microwave background (CMB), which reveal variations in the density of matter in the early universe that gave rise to the later clustering of galaxies and large-scale structure of the universe today. (Credit: Eric Huff, the SDSS-III team, and the South Pole Telescope team. Graphic by Zosia Rostomian. Courtesy: News Center, Berkeley Lab, March 2012, http://newscenter.lbl.gov/news-releases/2012/03/30/boss-first-results) [1]

The standard Friedmann universe embedded in a five-dimensional bulk with constant curvature is examined as a brane-world, with the extrinsic curvature derived directly from Codazzi's equation. It is shown that the accelerated expansion of the universe can be described as an extrinsic geometrical property, as an alternative to dark energy [2].

In Riemannian geometry, the Gauss–Codazzi–Mainardi equations are fundamental equations in the theory of embedded hypersurfaces in a Euclidean space, and more generally of submanifolds of Riemannian manifolds. They also have applications for embedded hypersurfaces of pseudo-Riemannian manifolds: see the Gauss–Codazzi equations (relativity).

3. Mathematical Foundation

In the classical differential geometry of surfaces, the Codazzi–Mainardi equations are expressed via the second fundamental form (L, M, N):

$L_v - M_u = L\,\Gamma^1_{12} + M\,(\Gamma^2_{12} - \Gamma^1_{11}) - N\,\Gamma^2_{11},$

$M_v - N_u = L\,\Gamma^1_{22} + M\,(\Gamma^2_{22} - \Gamma^1_{12}) - N\,\Gamma^2_{12}.$

    Derivation of classical equations

Consider a parametric surface in Euclidean space,

$\mathbf{r}(u,v) = \big(x(u,v),\, y(u,v),\, z(u,v)\big),$

where the three component functions depend smoothly on ordered pairs (u, v) in some open domain U in the uv-plane. Assume that this surface is regular, meaning that the vectors $\mathbf{r}_u$ and $\mathbf{r}_v$ are linearly independent. Complete this to a basis $\{\mathbf{r}_u, \mathbf{r}_v, \mathbf{n}\}$ by selecting a unit vector $\mathbf{n}$ normal to the surface. It is possible to express the second partial derivatives of $\mathbf{r}$ using the Christoffel symbols and the second fundamental form:

$\mathbf{r}_{uu} = \Gamma^1_{11}\,\mathbf{r}_u + \Gamma^2_{11}\,\mathbf{r}_v + L\,\mathbf{n}, \qquad \mathbf{r}_{uv} = \Gamma^1_{12}\,\mathbf{r}_u + \Gamma^2_{12}\,\mathbf{r}_v + M\,\mathbf{n}, \qquad \mathbf{r}_{vv} = \Gamma^1_{22}\,\mathbf{r}_u + \Gamma^2_{22}\,\mathbf{r}_v + N\,\mathbf{n}.$

Clairaut's theorem states that partial derivatives commute:

$\frac{\partial}{\partial v}\,\mathbf{r}_{uu} = \frac{\partial}{\partial u}\,\mathbf{r}_{uv}.$

If we differentiate $\mathbf{r}_{uu}$ with respect to v and $\mathbf{r}_{uv}$ with respect to u, we get:

$(\mathbf{r}_{uu})_v = \big(\Gamma^1_{11}\big)_v\,\mathbf{r}_u + \Gamma^1_{11}\,\mathbf{r}_{uv} + \big(\Gamma^2_{11}\big)_v\,\mathbf{r}_v + \Gamma^2_{11}\,\mathbf{r}_{vv} + L_v\,\mathbf{n} + L\,\mathbf{n}_v,$

$(\mathbf{r}_{uv})_u = \big(\Gamma^1_{12}\big)_u\,\mathbf{r}_u + \Gamma^1_{12}\,\mathbf{r}_{uu} + \big(\Gamma^2_{12}\big)_u\,\mathbf{r}_v + \Gamma^2_{12}\,\mathbf{r}_{uv} + M_u\,\mathbf{n} + M\,\mathbf{n}_u.$

Now substitute the above expressions for the second derivatives and equate the coefficients of $\mathbf{n}$:

$\Gamma^1_{11}\,M + \Gamma^2_{11}\,N + L_v = \Gamma^1_{12}\,L + \Gamma^2_{12}\,M + M_u.$

Rearranging this equation gives the first Codazzi–Mainardi equation.

    The second equation may be derived similarly.

4. Mean curvature

Let M be a smooth m-dimensional manifold immersed in the (m + k)-dimensional smooth manifold P. Let $e_1, \ldots, e_k$ be a local orthonormal frame of vector fields normal to M. Then we can write the second fundamental form as

$\alpha(X, Y) = \sum_{j=1}^{k} \alpha_j(X, Y)\, e_j.$

If, now, $E_1, \ldots, E_m$ is a local orthonormal frame (of tangent vector fields) on the same open subset of M, then we can define the mean curvatures of the immersion by

$H_j = \sum_{i=1}^{m} \alpha_j(E_i, E_i).$

In particular, if M is a hypersurface of P, i.e. $k = 1$, then there is only one mean curvature to speak of. The immersion is called minimal if all the $H_j$ are identically zero. Observe that the mean curvature is a trace, or average, of the second fundamental form, for any given component. Sometimes the mean curvature is defined by multiplying the sum on the right-hand side by $1/m$. We can now write the Gauss–Codazzi equations in terms of the components $\alpha_j$ of the second fundamental form and the curvatures of M and P.

Contracting the Gauss equation gives an expression for the Ricci curvature of M in terms of the ambient curvature of P and the second fundamental forms $\alpha_j$; the tensor appearing in the $\alpha$-terms is symmetric and nonnegative-definite. Assuming that M is a hypersurface ($k = 1$), this simplifies further, and one more contraction yields a relation between the respective scalar curvatures of M and P and the mean curvature H. If $k > 1$, the scalar curvature equation is more complicated.

We can already use these equations to draw some conclusions. For example, any minimal immersion into the round sphere $x_1^2 + x_2^2 + \cdots + x_{m+k+1}^2 = 1$ must be of the form

$\Delta x_j = -\lambda\, x_j,$

where $j$ runs from 1 to $m + k + 1$, $\Delta$ is the Laplacian on M, and $\lambda$ is a positive constant [3].

A locally rotationally symmetric (L.R.S.) Bianchi type V bulk viscous tilted stiff fluid cosmological model is investigated. To get a deterministic model of the universe, a condition $A = B^n$ between the metric potentials A and B is also assumed, where n is a constant. The behaviour of the model in the presence and absence of bulk viscosity, and the singularities in the model, are also discussed. In general, the models represent an accelerating, shearing, tilted and non-rotating universe. The models have a point-type singularity both in the presence and in the absence of bulk viscosity [4].

It was shown that the phase transition from the decelerating universe to the accelerating universe, which is of relevance to the cosmological coincidence problem, is possible in semiclassically quantized two-dimensional dilaton gravity by taking into account noncommutative field variables during a finite time interval. Initially, the quantum-mechanically induced energy from the noncommutativity among the fields makes the early universe decelerate; subsequently the universe accelerates because the dilaton-driven cosmology becomes dominant later [5].

5. Cosmic Scale Factor a(t)

The scale factor, cosmic scale factor or sometimes the Robertson–Walker scale factor of the Friedmann equations is a function of time which represents the relative expansion of the universe. It relates the proper distance (which can change over time, unlike the comoving distance, which is constant) between a pair of objects, e.g. two galaxies, moving with the Hubble flow in an expanding or contracting FLRW universe at an arbitrary time $t$ to their distance at some reference time $t_0$. The formula for this is

$d(t) = a(t)\, d_0,$

where $d(t)$ is the proper distance at epoch $t$, $d_0$ is the distance at the reference time $t_0$, and $a(t)$ is the scale factor. Thus, by definition, $a(t_0) = 1$.

The scale factor could, in principle, have units of length or be dimensionless. Most commonly in modern usage, it is chosen to be dimensionless, with the time $t$ counted from the birth of the universe and $t_0$ set to the present age of the universe, giving the current value of the scale factor as $a(t_0) = a_0 = 1$.

    The evolution of the scale factor is a dynamical question, determined by the equations of general relativity, which are presented in the case of a locally isotropic, locally homogeneous universe by the Friedmann equations.

The Hubble parameter is defined as

$H \equiv \frac{\dot a}{a},$

where the dot represents a time derivative. From the previous equation $d(t) = a(t)\, d_0$ one can see that $\dot d(t) = \dot a(t)\, d_0$, and also that $d_0 = d(t)/a(t)$, so combining these gives $\dot d(t) = \frac{\dot a(t)}{a(t)}\, d(t)$, and substituting the above definition of the Hubble parameter gives

$\dot d(t) = H(t)\, d(t),$

which is just Hubble's law.
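As a minimal numerical illustration (the Hubble constant below is an assumed illustrative value, not a result of this paper), Hubble's law gives the recession velocity directly from the proper distance:

# Minimal sketch of Hubble's law v = H0 * d for an assumed H0 = 70 km/s/Mpc.
H0 = 70.0                      # km/s per Mpc (illustrative value)
for d_mpc in (10, 100, 1000):
    v = H0 * d_mpc             # recession velocity in km/s
    print(f"d = {d_mpc:5d} Mpc  ->  v = {v:8.0f} km/s")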

Current evidence suggests that the expansion rate of the universe is accelerating, which means that the second derivative of the scale factor, $\ddot a(t)$, is positive, or equivalently that the first derivative $\dot a(t)$ is increasing over time. This also implies that any given galaxy recedes from us with increasing speed over time, i.e. $\dot d(t)$ for that galaxy is increasing with time. In contrast, the Hubble parameter seems to be decreasing with time, meaning that if we were to look at some fixed distance d and watch a series of different galaxies pass that distance, later galaxies would pass that distance at a smaller velocity than earlier ones. According to the Friedmann–Lemaître–Robertson–Walker metric, which is used to model the expanding universe, if at the present time we receive light from a distant object with a redshift of z, then the scale factor at the time the object originally emitted that light is given by the equation

$a(t) = \frac{1}{1 + z}.$
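A quick sketch of this relation, converting a few assumed example redshifts into the scale factor at emission:

# Illustrative only: scale factor at emission for a source observed at redshift z.
for z in (0.5, 1.0, 3.0, 1089.0):        # 1089 is roughly the CMB last-scattering redshift
    a = 1.0 / (1.0 + z)
    print(f"z = {z:7.1f}  ->  a = {a:.4f}  (linear scale was {1.0 + z:.1f} times smaller)")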

The Friedmann–Lemaître–Robertson–Walker (FLRW) metric is an exact solution of Einstein's field equations of general relativity; it describes a homogeneous, isotropic, expanding or contracting universe that may be simply connected or multiply connected. (If multiply connected, then each event in spacetime will be represented by more than one tuple of coordinates.) The general form of the metric follows from the geometric properties of homogeneity and isotropy.

The FLRW metric starts with the assumption of homogeneity and isotropy of space. It also assumes that the spatial component of the metric can be time-dependent. The generic metric which meets these conditions is

$-c^2\,\mathrm{d}\tau^2 = -c^2\,\mathrm{d}t^2 + a(t)^2\,\mathrm{d}\Sigma^2, \qquad (1)$

where $\Sigma$ ranges over a 3-dimensional space of uniform curvature, that is, elliptical space, Euclidean space, or hyperbolic space. It is normally written as a function of three spatial coordinates, but there are several conventions for doing so; in reduced-circumference polar coordinates, for example,

$\mathrm{d}\Sigma^2 = \frac{\mathrm{d}r^2}{1 - k r^2} + r^2\,\mathrm{d}\Omega^2, \qquad \mathrm{d}\Omega^2 = \mathrm{d}\theta^2 + \sin^2\theta\,\mathrm{d}\varphi^2,$

where $k$ is a constant representing the curvature of the space. $\Sigma$ does not depend on t; all of the time dependence is in the function a(t), known as the "scale factor".

Einstein's field equations are not used in deriving the general form of the metric: it follows from the geometric properties of homogeneity and isotropy. However, determining the time evolution of $a(t)$ does require Einstein's field equations together with a way of calculating the density $\rho(t)$, such as a cosmological equation of state.

    This metric has an analytic solution to Einstein's field equations.


Black holes are commonly classified according to their mass, independent of angular momentum J or electric charge Q. The size of a black hole, as determined by the radius of the event horizon, or Schwarzschild radius, is roughly proportional to the mass M through

$r_{\mathrm{sh}} \approx 2.95\,\frac{M}{M_{\mathrm{Sun}}}\ \mathrm{km},$

where $r_{\mathrm{sh}}$ is the Schwarzschild radius and $M_{\mathrm{Sun}}$ is the mass of the Sun. This relation is exact only for black holes with zero charge and angular momentum; for more general black holes it can differ by up to a factor of 2.
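A short sketch of the underlying formula, $r_s = 2GM/c^2$, with assumed standard values of the constants (not quoted from the paper), reproduces the roughly 2.95 km per solar mass figure:

import math  # not strictly needed here, kept for consistency with the later sketches

# Sketch (standard formula): Schwarzschild radius r_s = 2 G M / c^2.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg

def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / C**2   # metres

for label, m in (("Sun", M_SUN), ("10 M_sun", 10 * M_SUN), ("Earth", 5.972e24)):
    print(f"{label:10s}: r_s = {schwarzschild_radius(m):.3e} m")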

6. Primordial black holes in the Big Bang

Gravitational collapse requires great density. In the current epoch of the universe these high densities are found only in stars, but in the early universe, shortly after the Big Bang, densities were much greater, possibly allowing for the creation of black holes. High density alone is not enough to allow the formation of black holes, since a uniform mass distribution will not allow the mass to bunch up. In order for primordial black holes to form in such a dense medium, there must be initial density perturbations that can then grow under their own gravity.

    A primordial black hole is a hypothetical type of black hole that is formed not by the gravitational collapse of a large star but by the extreme density of matter present during the universe's early expansion.

    According to the Big Bang Model, during the first few moments after the Big Bang, pressure and temperature were extremely great. Under these conditions, simple fluctuations in the density of matter may have resulted in local regions dense enough to create black holes. Although most regions of high density would be quickly dispersed by the expansion of the universe, a primordial black hole would be stable, persisting to the present.

It has been proposed that primordial black holes could be a candidate for dark matter, specifically those forming in the mass range of $10^{14}$ kg to $10^{23}$ kg.

One way to detect primordial black holes is by their Hawking radiation. Hawking radiation is black-body radiation that is predicted to be emitted by black holes due to quantum effects near the event horizon. It is named after the physicist Stephen Hawking, who provided a theoretical argument for its existence in 1974, and sometimes also after the physicist Jacob Bekenstein, who predicted that black holes should have a finite, non-zero temperature and entropy.

A Schwarzschild black hole has the metric

$\mathrm{d}s^2 = -\left(1 - \frac{2M}{r}\right)\mathrm{d}t^2 + \frac{\mathrm{d}r^2}{1 - \frac{2M}{r}} + r^2\,\mathrm{d}\Omega^2$

in natural units with $G = c = 1$.

The black hole is the background spacetime for a quantum field theory. The field theory is defined by a local path integral, so if the boundary conditions at the horizon are determined, the state of the field outside will be specified. To find the appropriate boundary conditions, consider a stationary observer just outside the horizon at position $r = 2M + \frac{\rho^2}{8M}$. The local metric to lowest order is

$\mathrm{d}s^2 \approx -\rho^2\,\mathrm{d}\tau^2 + \mathrm{d}\rho^2 + \mathrm{d}x_\perp^2,$

which is Rindler space in terms of $\tau = t/4M$ and $\rho$. The metric describes a frame that is accelerating to keep from falling into the black hole. The local acceleration, $\alpha = 1/\rho$, diverges as $\rho \to 0$.

The horizon is not a special boundary, and objects can fall in. So the local observer should feel accelerated in ordinary Minkowski space by the principle of equivalence. The near-horizon observer must see the field excited at a local inverse temperature

$\beta_{\mathrm{loc}} = \frac{1}{T_{\mathrm{loc}}} = 2\pi\rho,$

which is the Unruh effect.
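For orientation, the Unruh temperature seen by an observer with proper acceleration a, $T = \hbar a/(2\pi c k_B)$, can be evaluated for a few accelerations; the constants below are assumed standard values and the snippet is only an illustration:

import math

HBAR = 1.0546e-34   # J s
C = 2.998e8         # m/s
KB = 1.381e-23      # J/K

def unruh_temperature(accel):
    """Unruh temperature (K) for proper acceleration accel in m/s^2."""
    return HBAR * accel / (2.0 * math.pi * C * KB)

print(f"a = 9.8 m/s^2   -> T = {unruh_temperature(9.8):.2e} K")    # ~4e-20 K, utterly negligible
print(f"a = 1e20 m/s^2  -> T = {unruh_temperature(1e20):.2e} K")   # ~0.4 K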

The gravitational redshift is given by the square root of the time component of the metric. So for the field theory state to consistently extend, there must be a thermal background everywhere, with the local temperature redshift-matched to the near-horizon temperature:

$T(r')\,\sqrt{g_{tt}(r')} = T(\rho)\,\sqrt{g_{tt}(\rho)}.$

The inverse temperature redshifted to $r'$ at infinity is

$\beta(\infty) = \frac{2\pi\rho}{\sqrt{g_{tt}(\rho)}},$

and $\rho$ is the near-horizon position, near $r = 2M$, where $\sqrt{g_{tt}(\rho)} = \rho/4M$, so this is really

$\beta = 8\pi M.$

So a field theory defined on a black hole background is in a thermal state whose temperature at infinity is

$T_H = \frac{1}{8\pi M},$

which can be expressed more cleanly in terms of the surface gravity of the black hole, the parameter that determines the acceleration of a near-horizon observer:

$T_H = \frac{\kappa}{2\pi}$

in natural units with $G$, $c$, and $\hbar$ equal to 1, where $\kappa = \frac{1}{4M}$ is the surface gravity of the horizon. So a black hole can only be in equilibrium with a gas of radiation at a finite temperature. Since radiation incident on the black hole is absorbed, the black hole must emit an equal amount to maintain detailed balance. The black hole acts as a perfect blackbody radiating at this temperature.

In engineering units, the radiation from a Schwarzschild black hole is black-body radiation with temperature

$T = \frac{\hbar c^3}{8\pi G M k_B},$

where $\hbar$ is the reduced Planck constant, c is the speed of light, $k_B$ is the Boltzmann constant, G is the gravitational constant, and M is the mass of the black hole.
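A minimal sketch evaluating this formula with assumed standard constants shows why Hawking radiation is negligible for stellar-mass black holes but significant for light primordial ones:

import math

HBAR = 1.0546e-34    # J s
C = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2
KB = 1.381e-23       # J/K
M_SUN = 1.989e30     # kg

def hawking_temperature(mass_kg):
    """Hawking temperature T = hbar c^3 / (8 pi G M kB), in kelvin."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * KB)

print(f"1 solar mass : T ~ {hawking_temperature(M_SUN):.2e} K")   # ~6e-8 K
print(f"1e12 kg PBH  : T ~ {hawking_temperature(1e12):.2e} K")    # ~1e11 K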

From the black hole temperature, it is straightforward to calculate the black hole entropy. The change in entropy when a quantity of heat dQ is added is

$\mathrm{d}S = \frac{\mathrm{d}Q}{T} = 8\pi M\,\mathrm{d}Q.$

The heat energy that enters serves to increase the total mass, so

$\mathrm{d}S = 8\pi M\,\mathrm{d}M, \qquad S = 4\pi M^2 + \mathrm{const}.$

The radius of a black hole is twice its mass in natural units, so the entropy of a black hole is proportional to its surface area:

$S = \frac{A}{4}, \qquad A = 4\pi r_s^2 = 16\pi M^2.$

Assuming that a small black hole has zero entropy, the integration constant is zero. Forming a black hole is the most efficient way to compress mass into a region, and this entropy is also a bound on the information content of any sphere in spacetime. The form of the result strongly suggests that the physical description of a gravitating theory can be somehow encoded onto a bounding surface.

Figure 2. Schwarzschild metric for black holes of different sizes.

Using the thermodynamic relationship between energy, temperature and entropy, Hawking was able to confirm Bekenstein's conjecture and fix the constant of proportionality at ¼:

$S_{\mathrm{BH}} = \frac{k A}{4\,\ell_P^2},$

where A is the area of the event horizon, calculated as $A = 4\pi R^2$, k is Boltzmann's constant, and $\ell_P = \sqrt{G\hbar/c^3}$ is the Planck length. The subscript BH either stands for "black hole" or "Bekenstein–Hawking". The black hole entropy is proportional to the area of its event horizon A. The fact that the black hole entropy is also the maximal entropy that can be obtained by the Bekenstein bound (wherein the Bekenstein bound becomes an equality) was the main observation that led to the holographic principle.
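As an illustration (assumed standard constants, not values from the paper), the Bekenstein–Hawking entropy of a solar-mass black hole can be evaluated directly from this formula:

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
HBAR = 1.0546e-34    # J s
KB = 1.381e-23       # J/K
M_SUN = 1.989e30     # kg

L_P2 = G * HBAR / C**3                      # Planck length squared, ~2.6e-70 m^2

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = kB * A / (4 l_P^2), with A = 4 pi r_s^2."""
    r_s = 2.0 * G * mass_kg / C**2          # Schwarzschild radius
    area = 4.0 * math.pi * r_s**2           # horizon area
    return KB * area / (4.0 * L_P2)         # entropy in J/K

s_sun = bh_entropy(M_SUN)
print(f"S_BH(1 M_sun) ~ {s_sun:.2e} J/K  (~{s_sun / KB:.2e} in units of kB)")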


So we can calculate the entropy of black holes in the near vicinity of the accelerating universe. Will the acceleration affect the entropy of black holes? Certainly not, because the entropy is independent of any acceleration term. However, the result may be different if we work close to the FLRW metric, find the Hubble parameter, and add it to the equation.

Recall the FLRW metric from equation (1),

$-c^2\,\mathrm{d}\tau^2 = -c^2\,\mathrm{d}t^2 + a(t)^2\,\mathrm{d}\Sigma^2,$

where $\Sigma$ ranges over a 3-dimensional space of uniform curvature and all of the time dependence is in the function a(t), the scale factor.

7. The First Law

    Change of mass is related to change of area, angular momentum, and electric charge by:

$\mathrm{d}M = \frac{\kappa}{8\pi}\,\mathrm{d}A + \Omega\,\mathrm{d}J + \Phi\,\mathrm{d}Q,$

where M is the mass, $\kappa$ is the surface gravity, A is the horizon area, $\Omega$ is the angular velocity, J is the angular momentum, $\Phi$ is the electrostatic potential and Q is the electric charge.

    The left hand side, dM, is the change in mass/energy. Although the first term does not have an immediately obvious physical interpretation, the second and third terms on the right hand side represent changes in energy due to rotation and electromagnetism. Analogously, the first law of thermodynamics is a statement of energy conservation, which contains on its right hand side the term T dS.

$\mathrm{d}M = \frac{\kappa}{8\pi}\,\mathrm{d}A + \Omega\,\mathrm{d}J + \Phi\,\mathrm{d}Q, \qquad \frac{\kappa}{8\pi}\,\mathrm{d}A \;\longleftrightarrow\; T\,\mathrm{d}S.$
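A quick numerical sanity check of the Schwarzschild limit of this law (no rotation or charge), using natural units and the standard relations $\kappa = 1/(4M)$ and $A = 16\pi M^2$; the step size dM below is an arbitrary illustrative choice:

import math

def horizon_area(m):
    """Schwarzschild horizon area in natural units (G = c = hbar = 1)."""
    return 16.0 * math.pi * m**2

M = 1.0
dM = 1e-6
kappa = 1.0 / (4.0 * M)

dA = horizon_area(M + dM) - horizon_area(M)
print(f"dM = {dM:.3e},  kappa/(8 pi) dA = {kappa / (8.0 * math.pi) * dA:.3e}")
# the two sides agree to first order in dM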

However, when quantum mechanical effects are taken into account, one finds that black holes emit thermal radiation (Hawking radiation) at the temperature $T = \frac{\kappa}{2\pi}$ derived above.

8. Conclusions

A review of the Schmidt accelerating universe has been presented. The connection with general relativity and cosmology is essential to reveal the other face of gravitation: that it can be effectively repulsive. The black hole connection can be related to the accelerating universe through the Hubble parameter, redshift and Hawking radiation.

9. References

[1] Berkeley Lab News Center, "BOSS First Results", March 30, 2012, http://newscenter.lbl.gov/news-releases/2012/03/30/boss-first-results/

[2] M. D. Maia, E. M. Monte, J. M. F. Maia, "The accelerating universe in brane-world cosmology", Physics Letters B, Vol. 585, Issues 1–2, 8 April 2004, pp. 11–16.

[3] Wikipedia, "Gauss–Codazzi equations", http://en.wikipedia.org/wiki/Gauss%E2%80%93Codazzi_equations

[4] R. Bali, P. Kumawat, "Bulk viscous L.R.S. Bianchi type V tilted stiff fluid cosmological model in general relativity", Physics Letters B, Vol. 665, Issue 5, 31 July 2008, pp. 332–337.

[5] W. Kim, M. S. Yoon, "Accelerating universe in two-dimensional noncommutative dilaton cosmology", Physics Letters B, Vol. 645, Issue 1, 1 February 2007, pp. 82–87.
