Chapter

Experimentation in Physics


Abstract

This chapter presents the different purposes of observation and experiment in physics, using examples that allow us to grasp the historical transformations linked to the development of instrumentation. We cover both the observational and the experimental sides of the discipline, with examples ranging from astronomy and astrophysics to nuclear and particle physics, by way of optics and solid-state physics.

References
Article
This paper defends the naïve thesis that the method of experiment has per se an epistemic superiority over the method of computer simulation, a view that has been rejected by some philosophers writing about simulation, and whose grounds its defenders have found hard to pin down. I further argue that this superiority does not depend on the experiment's object being materially similar to the target in the world that the investigator is trying to learn about, as both sides of the dispute over the epistemic superiority thesis have assumed. The superiority depends on features of the question and on a property of natural kinds that has been mistaken for material similarity. Seeing this requires holding other things equal in the comparison of the two methods, thereby exposing that, under the conditions that will be specified, the simulation is necessarily epistemically one step behind the corresponding experiment. Practical constraints like feasibility and morality mean that scientists do not often face an other-things-equal comparison when they choose between experiment and simulation. Nevertheless, I argue, awareness of this superiority, and of the general distinction between experiment and simulation, is important for maintaining the motivation to seek answers to new questions.
Book
In June 1988, Nature published an article asserting the possibility of a molecular effect without the physical presence of any molecule: water would behave as a liquid support on which molecular signals could be recorded. This thesis was defended by Jacques Benveniste, an Inserm researcher then well known for his work on the mediators of allergy. On the day the article appeared, the newspaper Le Monde spoke of a discovery that "could overturn the foundations of physics". This was the beginning of an immense controversy, to which Luc Montagnier, Nobel laureate in medicine in 2008, recently gave renewed currency. The aim of this book is to offer a sociological perspective on the controversy. After describing its successive stages, the author sets out to show that the content of the arguments and counter-arguments that make up the fabric of the dispute reflects divergent conceptions of how the norms underlying scientific judgment should be put into practice. None of the protagonists completely calls these norms into question, but all clash over the way they ought to be applied. Through the study of this controversy, which notably helped revive the debates over homeopathy, the reader is invited behind the scenes of the process by which a scientific thesis is legitimated.
Article
Summary: There is an uncanny unanimity about the founding role of Kepler's Dioptrice in the theory of optical instruments and in classical geometric optics generally. It has been argued, however, that for more than fifty years optical theory in general, and Dioptrice in particular, was irrelevant for the purposes of telescope making. This article explores the nature of Kepler's achievement in his Dioptrice. It aims to understand the Keplerian 'theory' of the telescope in its own terms, and particularly its links to Kepler's theory of vision. It deals first with Kepler's way of circumventing his ignorance of the law of refraction, before turning to his explanations of why lenses magnify and invert vision. Next, it analyses Kepler's account of the properties of telescopes and his suggestions for improving their designs. The uses of experiments in Dioptrice, as well as the explicit and implicit references to della Porta's work that it contains, are also elucidated. Finally, it clarifies the status of Kepler's Dioptrice vis-à-vis classical geometrical optics and presents evidence of its influence on treatises about the practice of telescope making during roughly the first two-thirds of the seventeenth century. Contents: 1. Introduction; 2. The problem of refractions; 3. Why do lenses magnify?; 4. When and why do things appear inverted through a lens?; 5. Combining two convex lenses; 6. The Galilean telescope; 7. Measuring magnifications; 8. Kepler's 'cryptical' instruments; 9. Kepler and Giambattista della Porta; 10. The uses of experiment in Kepler's Dioptrice; 11. The legacy of Kepler's Dioptrice; Appendix: combinations of two and three convex lenses.
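As a reader's aid (a modern thin-lens restatement, anachronistic relative to Kepler's own ray-by-ray reasoning in the Dioptrice), the behaviour of the two-convex-lens combination discussed in the sections on lens combinations and inversion can be summarized as follows:

```latex
% Thin-lens restatement (modern notation, not Kepler's) of the
% two-convex-lens (Keplerian) telescope: an objective of focal length
% f_o and an eyepiece of focal length f_e, separated by f_o + f_e,
% give angular magnification
\[
  M = -\frac{f_o}{f_e}
\]
% M grows with the ratio of the focal lengths, and the minus sign
% records the inverted image that the article's section on inversion
% addresses.
```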
Article
Argument: In the theory-dominated view of scientific experimentation, all relations of theory and experiment are taken to be on a par: experiments are performed solely to ascertain the conclusions of scientific theories. As a result, different aspects of experimentation, and of the relations of theory to experiment, remain undifferentiated. This in turn fosters a notion of theory-ladenness of experimentation (TLE) that is too coarse-grained to accurately describe the relations of theory and experiment in scientific practice. By contrast, in this article, I suggest that TLE should be understood as an umbrella concept with different senses. To this end, I introduce a three-fold distinction among the theories of high-energy particle physics (HEP): background theories, model theories, and phenomenological models. Drawing on this categorization, I contrast two types of experimentation, namely "theory-driven" and "exploratory" experiments, and I distinguish between the "weak" and "strong" senses of TLE in the context of scattering experiments from the history of HEP. This distinction enables identifying the exploratory character of the deep-inelastic electron-proton scattering experiments performed at the Stanford Linear Accelerator Center (SLAC) between 1967 and 1973, thereby shedding light on a crucial phase of the history of HEP, namely the discovery of "scaling", which was the decisive step towards the construction of quantum chromodynamics as a gauge theory of the strong interactions.
Book
The discovery of high-temperature superconductivity was hailed as a major scientific breakthrough, inducing an unprecedented excitement and expectation among the scientific community and in the international press. This book sets this research breakthrough in context, and reconstructs the history of the discovery. The authors analyse the emergence of this new research field and the way its development was shaped by scientists and science policy-makers. They also examine the various settings in which the research was undertaken, as well as considering the scientific backgrounds and motivations of researchers who entered the field following the original discovery. The industrial connection and the general belief in promises of future applications were important elements in strategies devised to obtain funding. A remarkable factor in this process was the media's role. The sustained attention that followed the discovery of high-temperature superconductivity resulted in it being seen as the symbol of a new technological frontier.
Book
What role have experiments played, and what role should they play, in physics? How does one come to believe rationally in experimental results? The Neglect of Experiment attempts to answer both of these questions. Professor Franklin's approach combines the detailed study of four episodes in the history of twentieth-century physics with an examination of some of the philosophical issues involved. The episodes are the discovery of parity nonconservation (the violation of mirror symmetry) in the 1950s; the nondiscovery of parity nonconservation in the 1930s, when the results of experiments indicated, at least in retrospect, the symmetry violation, but the significance of those results was not realized; the discovery and acceptance of CP (combined parity-charge conjugation, particle-antiparticle) symmetry violation; and Millikan's oil-drop experiment. Franklin examines the various roles that experiment plays, including its role in deciding between competing theories, confirming theories, and calling for new theories. The author argues that one can provide a philosophical justification for these roles. He contends that if experiment plays such important roles, then one must have good reason to believe in experimental results. He then deals with several problems concerning such results, including the epistemology of experiment, how one comes to believe rationally in experimental results, the question of the influence of theoretical presuppositions on results, and the problem of scientific fraud. This original and important contribution to the study of the philosophy of experimental science is an outgrowth of many years of research. Franklin brings to this work more than a decade of experience as an experimental high-energy physicist, along with his significant contributions to the history and philosophy of science.
Article
Our knowledge of the fundamental particles of nature and their interactions is summarized by the standard model of particle physics. Advancing our understanding in this field has required experiments that operate at ever higher energies and intensities, which produce extremely large and information-rich data samples. The use of machine-learning techniques is revolutionizing how we interpret these data samples, greatly increasing the discovery potential of present and future experiments. Here we summarize the challenges and opportunities that come with the use of machine learning at the frontiers of particle physics.
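The review is about technique rather than a single algorithm, but a toy example may help fix ideas. The sketch below uses entirely synthetic data and stand-in feature values; it illustrates the bread-and-butter pattern the paper surveys, training a classifier to separate "signal" from "background" events, and reproduces no experiment's actual variables or pipeline.

```python
# A minimal, illustrative sketch (assumed setup, synthetic data) of
# signal/background classification of collision events.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 5000
# Background events: broad distributions in two mock kinematic features.
background = rng.normal(loc=[90.0, 20.0], scale=[15.0, 8.0], size=(n, 2))
# Signal events: a narrower cluster at different feature values.
signal = rng.normal(loc=[125.0, 35.0], scale=[5.0, 10.0], size=(n, 2))

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)

# Score held-out events by the classifier's signal probability.
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC AUC on held-out events: {roc_auc_score(y_test, scores):.3f}")
```

In real analyses the synthetic features are replaced by reconstructed physics quantities, and the classifier is trained on detailed detector simulation rather than toy distributions.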
Article
Einstein is widely understood as regarding “principle theories” (such as the theory of relativity) as explanatorily powerless. This brief paper shows that Einstein’s remarks admit of another interpretation, according to which principle theories possess explanatory power. This interpretation is motivated primarily by showing that James Jeans made remarks very similar to Einstein’s at nearly the same time, but Jeans reconciled those remarks with holding principle theories to be explanatory. Einstein’s remarks could well be getting at the same point as Jeans’s. This view of principle and constructive theories is independently valuable. It undermines Salmon’s “friendly physicist” example as an argument for the view that there are facts that can be explained by both principle and constructive theories.
Article
Analytical table of contents: Preface; Introduction: rationality. Part I. Representing: 1. What is scientific realism?; 2. Building and causing; 3. Positivism; 4. Pragmatism; 5. Incommensurability; 6. Reference; 7. Internal realism; 8. A surrogate for truth. Part II. Intervening: 9. Experiment; 10. Observation; 11. Microscopes; 12. Speculation, calculation, models, approximations; 13. The creation of phenomena; 14. Measurement; 15. Baconian topics; 16. Experimentation and scientific realism. Further reading; Index.
Article
It is argued that the usual account of the discovery and subsequent rejection, or criticism, of Ohm's law is both a misleading and an inadequate explanation. A close logical examination of Ohm's experimental work reveals a conceptual structure quite different from that of the electrical science of his time. As a result of this analysis, it is claimed that the conceptual shift in Ohm's experimental work was the basis for the reaction of his contemporaries.
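To make the conceptual gap concrete (a standard reconstruction from the history-of-physics literature, not part of this abstract): the law Ohm fitted to his 1826 measurements reads quite differently from the modern schoolbook form.

```latex
% Ohm's experimentally fitted law (1826), in the standard reconstruction:
% X is the measured magnetic action of the current, x the length of the
% test wire, a is fixed by the source's electromotive force, and b by
% the resistance of the rest of the circuit.
\[
  X = \frac{a}{b + x}
\]
% The familiar modern form V = IR is a later restatement in terms of
% potential difference V, current I, and resistance R.
```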
Article
Starting with some illustrative examples, I develop a systematic account of a specific type of experimentation: experimentation which is not, as in the "standard view", driven by specific theories. It is typically practiced in periods in which no theory or, even more fundamentally, no conceptual framework is readily available. I call it exploratory experimentation and I explicate its systematic guidelines. From the historical examples I argue, furthermore, that exploratory experimentation may have an immense, but hitherto widely neglected, epistemic significance.
Article
Visual observations by Robert Evans over the period 1980-1988 have been used to derive a mean rate of 1.3 × (H0/75)^2 supernovae per century per 10^10 L_sun(B) in spirals of types Sab-Sd. From this value the total Galactic supernova rate is found to be 3 ± 1 per century. A comparable rate of 2.6 ± 1.3 supernovae per century is derived from historical supernovae that exploded within 4 kpc of the Sun. These values are marginally consistent with the total rate of 1.2 (+1.7, -0.7) supernovae per century that Ratnatunga and van den Bergh predicted from the Scalo luminosity function of stars and a Galactic population model. It is puzzling that no Galactic supernovae have been observed during the last 389 years. The Galactic supernova rate is not high enough to account for the mass extinctions observed in the geological record.
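As a reader's aid (not part of the abstract), the step from the rate per unit luminosity to the total Galactic rate is a one-line scaling. The Galactic blue luminosity of roughly 2.3 × 10^10 L_sun used below is an assumed round value chosen to reproduce the quoted result, not a figure taken from the paper.

```latex
% Scaling the Evans rate to the whole Galaxy. Taking H_0 = 75 makes the
% Hubble-constant factor unity; L_Gal(B) ~ 2.3 x 10^10 L_sun is an
% assumed value for the Galactic blue luminosity.
\[
  R_{\mathrm{Gal}} \approx 1.3 \,
  \frac{L_{\mathrm{Gal}}(B)}{10^{10}\,L_{\odot}(B)}
  \ \mathrm{century}^{-1}
  \approx 1.3 \times 2.3 \approx 3\ \mathrm{century}^{-1},
\]
% consistent with the 3 +/- 1 per century quoted in the abstract.
```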
Article
"I want to get at the blown glass of the early cloud chambers and the oozing noodles of wet nuclear emulsion; to the resounding crack of a high-voltage spark arcing across a high-tension chamber and leaving the lab stinking of ozone; to the silent, darkened room, with row after row of scanners sliding trackballs across projected bubble-chamber images. Pictures and pulses—I want to know where they came from, how pictures and counts got to be the bottom-line data of physics." (from the preface) Image and Logic is the most detailed engagement to date with the impact of modern technology on what it means to "do" physics and to be a physicist. At the beginning of this century, physics was usually done by a lone researcher who put together experimental apparatus on a benchtop. Now experiments frequently are larger than a city block, and experimental physicists live very different lives: programming computers, working with industry, coordinating vast teams of scientists and engineers, and playing politics. Peter L. Galison probes the material culture of experimental microphysics to reveal how the ever-increasing scale and complexity of apparatus have distanced physicists from the very science that drew them into experimenting, and have fragmented microphysics into different technical traditions much as apparatus have fragmented atoms to get at the fundamental building blocks of matter. At the same time, the necessity for teamwork in operating multimillion-dollar machines has created dynamic "trading zones," where instrument makers, theorists, and experimentalists meet, share knowledge, and coordinate the extraordinarily diverse pieces of the culture of modern microphysics: work, machines, evidence, and argument.
Article
This fascinating study in the sociology of science explores the way scientists conduct, and draw conclusions from, their experiments. The book is organized around three case studies: replication of the TEA laser, the detection of gravitational radiation, and some experiments in the paranormal. "In his superb book, Collins shows why the quest for certainty is disappointed. He shows that standards of replication are, of course, social, and that there is consequently no outside standard, no Archimedean point beyond society from which we can lever the intellects of our fellows."—Donald M. McCloskey, Journal of Economic Psychology "Collins is one of the genuine innovators of the sociology of scientific knowledge. . . . Changing Order is a rich and entertaining book."—Isis "The book gives a vivid sense of the contingent nature of research and is generally a good read."—Augustine Brannigan, Nature "This provocative book is a review of [Collins's] work, and an attempt to explain how scientists fit experimental results into pictures of the world. . . . A promising start for new explorations of our image of science, too often presented as infallibly authoritative."—Jon Turney, New Scientist
Article
Harry Collins' central argument about experimental practice revolves around the thesis that facts can only be generated by good instruments, but good instruments can only be recognized as such if they produce facts. This is what Collins calls the experimenters' regress. For Collins, scientific controversies cannot be closed by the "facts" themselves, because there are no formal criteria, independent of the outcome of the experiment, that scientists can apply to decide whether an experimental apparatus works properly or not. No one seems to have noticed that this debate is in fact a rehearsal of the ancient philosophical dispute over skepticism. The present article suggests that the way out of radical skepticism offered by the so-called mitigated skeptics is a solution to the problem of consensus formation in science.
Traité d’Aristarque de Samos sur les Grandeurs et les Distances du Soleil à la Lune
  • Aristarque de Samos
Kepler, Elliptical Orbits, and Celestial Circularity: A Study in the Persistence of Metaphysical Commitment, Part I and II
  • J. Bruce Brackenridge
Roemer and the First Determination of the Velocity of Light (1676)
  • I. Bernard Cohen