Measuring the rate at which the universe expands at a given time–the ‘Hubble constant’–has been a topic of controversy since the first measure of its expansion by Edwin Hubble in the 1920s. As early as the 1970s, Sandage and de Vaucouleurs were arguing about the adequate methodology for such a measurement. Should astronomers focus only on their best indicators, e.g., the Cepheids, and improve the precision of this measurement, based on a unique object, to the best possible? Or should they “spread the risks”, i.e., multiply the indicators and methodologies before averaging over their results? Is a robust agreement across several uncertain measures, as is currently argued to defend the existence of a ‘Hubble crisis’, more telling than a single 1% precision measurement? This controversy, I argue, stems from a misconception of what managing the uncertainties associated with such experimental measurements requires. Astrophysical measurements, such as the measure of the Hubble constant, require a methodology that permits both reducing the known uncertainties and tracking the unknown unknowns. Based on the lessons drawn from the so-called Hubble crisis, I sketch a methodological guide for identifying, quantifying and reducing uncertainties in astrophysical measurements, hoping that such a guide can not only help to re-frame the current Hubble tension, but also serve as a starting point for future fruitful discussions between astrophysicists, astronomers and philosophers.

From the realization at the end of the 1920s by Edwin Hubble that a relation of proportionality exists between the recessional velocities of galaxies and their distances, to the crisis around the Hubble constant that currently undermines the standard model of cosmology, the history of this constant has been that of the chase of a fleeing number that kept escaping the scientists’ net. As one may remember from the famous words of A. Sandage, modern cosmology can be considered as the “search for two numbers”: the values of the Hubble constant and of the q₀ parameter, which characterizes the deceleration of the expansion (Sandage 1970); both quantities are written out explicitly at the end of this section.

The agitated history of the Hubble constant mirrors how fundamental this parameter has been for the development of our modern, precision cosmology. Among the most remarkable episodes of this track:

- the ‘Hubble war’ of the 1970s, opposing Sandage and de Vaucouleurs, who argued both about the correct methodology to adopt for measuring the Hubble constant and about its actual value;
- the dispute between Sandage and his colleague Wendy Freedman at the Carnegie Observatories in Pasadena in the 1980s, the latter defending a much higher value than the former, one probably lying in the middle of the range spanned by Sandage and de Vaucouleurs;
- the disagreement since 2014 between two opponents that nobody had seen coming: the distant and the local universe, the former with a Hubble value of 67.4 km s⁻¹ Mpc⁻¹, the latter approaching 75;
- and finally the so-called and on-going Hubble ‘crisis’ that this persisting disagreement, and its apparent confirmation by the publication of new local measures in 2019, have seeded.

But this troubled history certainly also reflects how tedious and delicate the task of measuring the Hubble constant is, and, as a result, how difficult it has been to assess the accuracy of its past measures.
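For reference, the two numbers of Sandage’s “search” have standard textbook definitions in terms of the cosmic scale factor a(t); the passage above describes them only in words, so they are recalled here:

$$
v = H_0 \, d, \qquad H_0 = \frac{\dot{a}(t_0)}{a(t_0)}, \qquad q_0 = -\,\frac{\ddot{a}(t_0)\,a(t_0)}{\dot{a}(t_0)^2},
$$

where v is a galaxy’s recession velocity, d its distance, and t₀ the present time. In these terms, the disagreement described above is the gap between the distant-universe value H₀ ≈ 67.4 km s⁻¹ Mpc⁻¹ and local values approaching 75, a gap of roughly 10%.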